arXiv: 2309.08338 (http://arxiv.org/abs/2309.08338v1)
Authors: David Dereudre, Christopher Renaud-Chan
Published: 2023-09-15T11:46:40Z
# Liquid-Gas phase transition for Gibbs point process with Quermass interaction

###### Abstract

We prove the existence of a liquid-gas phase transition for continuous Gibbs point processes in \(\mathbb{R}^{d}\) with Quermass interaction. The Hamiltonian we consider is a linear combination of the volume \(\mathcal{V}\), the surface measure \(\mathcal{S}\) and the Euler-Poincaré characteristic \(\chi\) of a halo of particles (i.e. a union of balls centred at the positions of the particles). We show the non-uniqueness of infinite volume Gibbs measures for special values of the activity and the temperature, provided that the temperature is low enough. Moreover, we show the non-differentiability of the pressure at these critical points. Our main tool is an adaptation of the Pirogov-Sinai-Zahradnik theory for continuous systems with an interaction exhibiting a saturation property.

_Keywords --_ Gibbs measure, DLR equations, Widom-Rowlinson model, Pirogov-Sinai-Zahradnik theory, cluster expansion

## 1 Introduction

Spatial point processes with interaction are models describing the locations of objects or particles in a domain, where the interaction can be of different natures: attractive, repulsive, or a mix of both at different scales of distance between the points. The most popular point process is surely the Poisson point process [13], which describes random objects without interaction between each other. There are different models for point processes with interaction, for instance Cox point processes [13], determinantal point processes [16], zeros of random polynomials or analytic functions [11], Gibbs point processes [3, 22], etc. Among the fields of application of such models we have plant ecology, telecommunication, astronomy, data science and statistical physics.

A large class of point processes coming from statistical physics is the family of Gibbs point processes. The finite volume Gibbs point process on a bounded set \(\Delta\subset\mathbb{R}^{d}\) is defined via an unnormalized density, with respect to the Poisson point process in \(\Delta\), of the form \(z^{N_{\Delta}}e^{-\beta H}\). The parameters \(z\) and \(\beta\) are positive numbers (respectively called the activity and the inverse temperature), \(N_{\Delta}\) is the number of points inside \(\Delta\) and \(H\) is an energy function (also called the Hamiltonian). By taking the thermodynamic limit (i.e. \(\Delta\to\mathbb{R}^{d}\)) we obtain the infinite volume Gibbs point processes, which are characterized by the equilibrium equations also known as the Dobrushin-Lanford-Ruelle (DLR) equations.

The type of interaction we study in this paper is the Quermass interaction. The energy function is defined as a linear combination of the \(d+1\) Minkowski functionals of the halo of the configuration, which is the union of closed balls centred at the positions of the particles with random radii. This type of interaction is a natural extension of the Widom-Rowlinson interaction for penetrable spheres [24]. Hadwiger's characterization Theorem [9] ensures that any functional \(F\) on finite unions of convex compact sets, continuous for the Hausdorff topology, invariant under isometric transformations and additive (i.e. \(F(A\cup B)=F(A)+F(B)-F(A\cap B)\)) can be written as a linear combination of Minkowski functionals. This representation justifies the study of the Quermass interaction for modelling a large class of morphological interactions.
Furthermore, this family of energy functions is used to describe the physics of micro-emulsions and complex fluids [15, 17, 18]. The stability condition in the sense of Ruelle has been established in [12] and the existence of the infinite volume Gibbs measures with random radii has been tackled in [4].

In general a phase transition phenomenon occurs when, for some special values of \(\beta\) and \(z\), the boundary conditions at infinity influence some macroscopic statistics in the bulk of the system. A phase transition phenomenon can also be defined by uniqueness/non-uniqueness of solutions to the DLR equilibrium equations. More specifically, we speak of a liquid-gas phase transition of first order when several solutions have different intensities. The Gibbs measure with the lowest density is associated to the distribution of a pure gas phase and, on the contrary, the one with the highest density to the distribution of a pure liquid phase. If we consider models of particles with different spins, there exists an abundance of results on phase transition, mainly based on the existence of a dominant spin. We can cite for instance the phase transition results on the continuous Potts model [8] and the non-symmetrical multiple colour Widom-Rowlinson model [1]. Without spin, we need to rely only on the self-arrangement, the geometry and the density of particles for proving the phase transition. It is more difficult and there exist only few known results in this setting.

The first result of phase transition without spin has been proved for the Widom-Rowlinson model. It is a Gibbs point process where the Hamiltonian is given by \(H(\omega)=\mathcal{V}(\cup_{x\in\omega}B(x,R))\) where \(\mathcal{V}\) is the volume functional. The first proofs are due to Widom-Rowlinson [24] and Ruelle [21] and use extensively the symmetry of the associated two-colour Widom-Rowlinson process. They prove the existence of a critical activity \(z_{c}\) such that for \(z>z_{c}\) and \(\beta=z\) the liquid-gas phase transition occurs. Later another proof using the Fortuin-Kasteleyn representation has been given [2]. Recently, the full phase diagram has been almost entirely mapped by proving an OSSS inequality [5]. Outside of a region around the critical point \((z_{c},z_{c})\), the phase transition occurs if and only if \(\beta=z\) for \(z>z_{c}\). Numerical studies have shown that the unmapped region is actually small [10].

Among continuous models without any form of special symmetry, there is the beautiful result of liquid-gas phase transition for an attractive pair and repulsive four-body potential of the Kac type [14]. The authors manage to prove, using the Pirogov-Sinai-Zahradnik theory (PSZ), the existence of a phase transition for a finite but long range of interaction compared to the typical distance between particles. Indeed the finite range interaction is obtained as a perturbation of the mean field interaction. In a more recent work using a similar strategy, it has been proven that the liquid-gas phase transition persists if a hard-core interaction is considered [23]. Until now, these results were the only proofs of phase transition for continuum systems without spins.

In the present paper, we are interested in the phase transition phenomenon for the Quermass interaction with the particular form \(H(\omega)=\mathcal{V}(\cup_{x\in\omega}B(x,R))+\theta_{1}S(\cup_{x\in\omega}B(x,R))-\theta_{2}\chi(\cup_{x\in\omega}B(x,R))\) where \(S\) is the surface measure and \(\chi\) is the Euler-Poincaré characteristic.
Contrary to the Widom-Rowlinson model, we cannot benefit from any symmetry of a two-spin process, due to the contribution of the surface measure and the Euler-Poincaré characteristic. Moreover, in our work we do not use a mean-field approximation and the range of interaction is of the same order as the distance between particles. We manage to prove, for some parameters \(\theta_{1}\) and \(\theta_{2}\), the existence of two infinite volume Gibbs point processes with distinct intensities when \(\beta\) is sufficiently large (i.e. at low temperature) and the activity \(z\) equals a critical activity \(z_{\beta}^{c}\).

The main tool of our proof is an adaptation of the Pirogov-Sinai-Zahradnik (PSZ) theory [6, 19, 20] for continuous systems with an interaction satisfying a saturation property. For a modern presentation of the PSZ theory we recommend reading Chapter 7 of [7]. Let us describe succinctly what the saturation property is. First we consider a discretisation of the space \(\mathbb{R}^{d}\) with cubic tiles. Then we introduce two types of "ground states", which are not usual ground states as minimisers of the energy, but rather extreme idealizations of the point process. One ground state is the empty state and the other corresponds to a dense and homogeneous distribution of particles (i.e. at least one particle in every tile). These states have the property of providing an explicit and tractable energy for an extra point emerging in a ground state. This is what we call the saturation property. Using the PSZ theory we are able to show that for \(\beta\) large enough and at the critical activity \(z_{\beta}^{c}\) the pressure of the model is not differentiable with respect to \(z\). Furthermore we construct two different Gibbs measures with the "ground states" as boundary conditions and we relate the intensities of these Gibbs measures to the left and right derivatives of the pressure. We believe that our method is robust enough to deal with other interactions with a similar saturation property.

Our paper is organized as follows. In Section 2, we introduce the notations, the Quermass interaction and the associated Gibbs point processes. In Section 3, we present the main results of the paper. In Section 4, we develop the tools and we prove the main results. Annex A contains results on cluster expansion and Annex B contains the technical proof of Proposition 4, which is at the heart of the PSZ theory.

###### Contents

* 1 Introduction
* 2 Quermass interaction model
  * 2.1 State spaces and notations
  * 2.2 Interaction
  * 2.3 Gibbs measures
* 3 Results
* 4 Proof
  * 4.1 Coarse Graining decomposition of the energy
  * 4.2 Contours
  * 4.3 Partition function and boundary condition
  * 4.4 Energy and Peierls condition
  * 4.5 Truncated weights and pressure
  * 4.6 Proof of Proposition 1
* 5 Annex A: Cluster Expansion
* 6 Annex B: Proof of Proposition 4

## 2 Quermass interaction model

### State spaces and notations

We denote by \(B_{b}(\mathbb{R}^{d})\) the set of bounded Borel sets of \(\mathbb{R}^{d}\) with positive Lebesgue measure. For any sets \(A\) and \(B\) in \(B_{b}(\mathbb{R}^{d})\), \(A\oplus B\) stands for the Minkowski sum of these sets. Let \(R_{0}\), \(R_{1}\) be two real numbers such that \(0<R_{0}\leq R_{1}\). We denote by \(E\) the state space of a single marked point, defined as \(\mathbb{R}^{d}\times[R_{0},\,R_{1}]\). For any \((x,\,R)\in E\), the first coordinate \(x\) is the location of the point and the second coordinate \(R\) is the mark representing the radius of a ball.
For any set \(\Delta\in B_{b}(\mathbb{R}^{d})\), \(E_{\Delta}\) is the local state space \(\Delta\times[R_{0},\,R_{1}]\). A configuration of marked points \(\omega\) is a locally finite set in \(E\); i.e. \(N_{\Delta}(\omega):=\#(\omega\cap E_{\Delta})\) is finite for any \(\Delta\in B_{b}(\mathbb{R}^{d})\). We denote by \(\Omega\) the set of all marked point configurations and by \(\Omega_{f}\) its restriction to finite configurations. For any \(\omega\in\Omega\), its projection on \(\Delta\subset\mathbb{R}^{d}\) is defined by \(\omega_{\Delta}:=\omega\cap E_{\Delta}\). As usual we equip the state space \(\Omega\) with the \(\sigma\)-algebra \(\mathcal{F}\) generated by the counting functions on \(E\). The halo of a configuration \(\omega\in\Omega\) is defined as

\[L(\omega)=\bigcup_{(x,R)\in\omega}B(x,R) \tag{1}\]

where \(B(x,R)\) is the closed ball centred at \(x\) with radius \(R\).

### Interaction

Let us introduce the Quermass interaction as in [12]. Usually it is defined as a linear combination of the \(d+1\) Minkowski functionals of the halo \(L(\omega)\). Here we consider only the volume \(\mathcal{V}\), the surface \(\mathcal{S}\) and the Euler-Poincaré characteristic \(\chi\) (in dimension \(d=2\)). This restriction is due to statistical physics considerations, since we need the stability of the energy.

**Definition 1**.: _Let \(\theta_{1}\in\mathbb{R}\) and \(\theta_{2}\geq 0\). The energy of a finite configuration \(\omega\in\Omega_{f}\) is given by_

\[H(\omega)=\mathcal{V}(L(\omega))+\theta_{1}S(L(\omega))-\theta_{2}\chi(L(\omega))\quad(d=2)\]
\[H(\omega)=\mathcal{V}(L(\omega))+\theta_{1}S(L(\omega))\quad(d\geq 3),\]

_where \(\mathcal{V}(L(\omega))\) is the volume of \(L(\omega)\) defined as the Lebesgue measure of \(L(\omega)\), \(S(L(\omega))\) is the surface of \(L(\omega)\) defined as the \((d-1)\)-dimensional Hausdorff measure of the boundary \(\partial L(\omega)\), and \(\chi(L(\omega))\) is the Euler-Poincaré characteristic of \(L(\omega)\) defined as the difference between the number of connected components and the number of holes of \(L(\omega)\) (in dimension \(d=2\))._

The energy is parametrized by two parameters \(\theta_{1}\) and \(\theta_{2}\). We discuss below why we impose \(\theta_{2}\) to be non-negative. Note that we do not introduce a third parameter in front of the volume \(\mathcal{V}(L(\omega))\) since it is indirectly given by the inverse temperature \(\beta\geq 0\) in the Definition 3 of Gibbs measures. With this choice of parameters the energy is stable, which means that there exists a constant \(C\geq 0\) such that for any finite configuration \(\omega\in\Omega_{f}\),

\[H(\omega)\geq-CN(\omega).\]

The volume and the surface are clearly stable since the radii are uniformly bounded. The Euler-Poincaré characteristic is more delicate to study. In dimension \(2\), it is shown by Kendall et al. [12] that for a union of \(N\) closed balls, the number of holes is bounded above by \(2N-5\), and the number of connected components is clearly bounded by \(N\). Therefore the Euler-Poincaré characteristic is stable for any parameter \(\theta_{2}\in\mathbb{R}\). In higher dimension \(d\geq 3\), for some configurations, the maximum number of holes is of order \(N^{2}\) and thus the Euler-Poincaré characteristic is not stable if \(\theta_{2}<0\). More generally, the stability of this statistic is not obvious even if \(\theta_{2}\) is strictly positive. Therefore the existence of the infinite volume Gibbs point process is not well established. It is for this reason that we impose \(\theta_{2}=0\) in the case \(d\geq 3\).
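To make the three functionals of Definition 1 concrete in dimension \(d=2\), the following minimal sketch (our own illustration, not the paper's method) approximates them for a union of discs on a pixel grid: the volume by pixel counting and the Euler-Poincaré characteristic as \(V-E+F\) of the occupied cell complex. The grid size and radii are arbitrary choices, and the perimeter estimate is deliberately crude, since counting axis-aligned boundary edges overestimates a smooth perimeter (by a factor \(4/\pi\) for a disc).

```python
import numpy as np

# Minimal sketch (our illustration): pixel-grid approximations of the
# volume, surface and Euler-Poincare characteristic of a union of discs.
M, h = 400, 1.0 / 400                    # M x M pixels of side h on [0,1]^2
xs = (np.arange(M) + 0.5) * h
X, Y = np.meshgrid(xs, xs, indexing="ij")

def halo(discs):
    """Binary image of L(omega) for discs given as (x, y, r)."""
    B = np.zeros((M, M), dtype=bool)
    for (x, y, r) in discs:
        B |= (X - x) ** 2 + (Y - y) ** 2 <= r ** 2
    return B

def minkowski(B):
    P = np.pad(B, 1)                                    # uniform boundary
    F = int(P.sum())                                    # occupied pixels
    E = int((P[1:, :] | P[:-1, :]).sum() + (P[:, 1:] | P[:, :-1]).sum())
    V = int((P[1:, 1:] | P[1:, :-1] | P[:-1, 1:] | P[:-1, :-1]).sum())
    chi = V - E + F                                     # Euler characteristic
    volume = F * h * h
    surface = ((P[1:, :] ^ P[:-1, :]).sum()
               + (P[:, 1:] ^ P[:, :-1]).sum()) * h      # crude perimeter
    return volume, surface, chi

print(minkowski(halo([(0.5, 0.5, 0.2)])))               # chi = 1, V ~ pi/25
print(minkowski(halo([(0.4, 0.5, 0.15), (0.6, 0.5, 0.15)])))  # overlap: chi = 1
```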
Let us turn to the definition of the local energy, which gives the energy cost of introducing points in a domain given the configuration outside this domain.

**Definition 2**.: _Let \(\Delta\in B_{b}(\mathbb{R}^{d})\) and \(\omega\in\Omega_{f}\) be a finite configuration. We define the local energy of \(\omega\) in \(\Delta\) as_

\[H_{\Delta}(\omega):=H(\omega)-H(\omega_{\Delta^{c}}).\]

_From the additivity of Minkowski functionals, we observe a finite range property. Indeed we have that \(H_{\Delta}(\omega)=H_{\Delta}\big(\omega_{\Delta\oplus B(0,2R_{1})}\big)\). Therefore, we can extend the definition of the local energy to any configuration \(\omega\) in \(\Omega\) by_

\[H_{\Delta}(\omega):=H_{\Delta}\left(\omega_{\Delta\oplus B(0,2R_{1})}\right). \tag{2}\]

### Gibbs measures

Let \(Q\) be a reference measure on \([R_{0},\,R_{1}]\) for the distribution of the radii and let \(z\) be a non-negative real number called the activity parameter. We denote by \(\lambda\) the Lebesgue measure on \(\mathbb{R}^{d}\) and by \(\Pi^{z}\) the distribution of a Poisson point process on \(E\) with intensity measure \(z\lambda\otimes Q\) [13]. Similarly, for any \(\Delta\in B_{b}(\mathbb{R}^{d})\), \(\Pi^{z}_{\Delta}\) stands for the Poisson point process on \(E_{\Delta}\) with intensity \(z\lambda_{\Delta}\otimes Q\), where \(\lambda_{\Delta}\) is the Lebesgue measure on \(\Delta\).

**Definition 3**.: _A probability measure \(P\) on \(\Omega\) is a Gibbs measure for the Quermass interaction with parameters \(\theta_{1},\theta_{2}\), the activity \(z>0\) and the inverse temperature \(\beta\geq 0\) if \(P\) is stationary in space and if for any \(\Delta\in B_{b}(\mathbb{R}^{d})\) and any bounded positive measurable function \(f:\Omega\to\mathbb{R}\),_

\[\int f(\omega)P(d\omega)=\int\int\frac{1}{Z_{\Delta}(\omega_{\Delta^{c}})}f(\omega_{\Delta}^{\prime}\cup\omega_{\Delta^{c}})e^{-\beta H_{\Delta}(\omega_{\Delta}^{\prime}\cup\omega_{\Delta^{c}})}\Pi^{z}_{\Delta}(d\omega_{\Delta}^{\prime})P(d\omega) \tag{3}\]

_where \(Z_{\Delta}(\omega_{\Delta^{c}})\) is the partition function given the outer configuration \(\omega_{\Delta^{c}}\),_

\[Z_{\Delta}(\omega_{\Delta^{c}})=\int e^{-\beta H_{\Delta}(\omega_{\Delta}^{\prime}\cup\omega_{\Delta^{c}})}\Pi^{z}_{\Delta}(d\omega_{\Delta}^{\prime}).\]

These equations are called DLR equations (for Dobrushin, Lanford and Ruelle). They are equivalent to the following conditional probability definition: for all \(\Delta\in B_{b}(\mathbb{R}^{d})\), the distribution of \(\omega_{\Delta}\) under \(P\) given the outer configuration \(\omega_{\Delta^{c}}\) is absolutely continuous with respect to \(\Pi^{z}_{\Delta}\) with the following density

\[P(d\omega_{\Delta}^{\prime}|\omega_{\Delta^{c}})=\frac{1}{Z_{\Delta}(\omega_{\Delta^{c}})}e^{-\beta H_{\Delta}(\omega_{\Delta}^{\prime}\cup\omega_{\Delta^{c}})}\Pi^{z}_{\Delta}(d\omega_{\Delta}^{\prime}).\]

For the collection of parameters \(\theta_{1},\theta_{2},\,\beta\) and \(z\), we denote by \(\mathcal{G}(\theta_{1},\theta_{2},\,\beta,z)\) the set of all Gibbs measures for these parameters. The existence, the uniqueness or the non-uniqueness (phase transition) of such Gibbs point processes are old and difficult questions in statistical physics. In the present setting, since the interaction is finite range and stable, the existence is a direct application of Theorem 1 in [3]. In other words, for any parameters \(\theta_{1}\in\mathbb{R},\,\theta_{2}\geq 0,\,\beta\geq 0\) and \(z>0\), the set \(\mathcal{G}(\theta_{1},\theta_{2},\,\beta,z)\) is not empty.
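The finite-volume density appearing in Definition 3 is exactly what a birth-death Metropolis sampler targets. As a minimal sketch (our own illustration, not the authors' construction), the following toy samples the volume-only special case \(\theta_{1}=\theta_{2}=0\) with a fixed radius \(R\), i.e. the Widom-Rowlinson Hamiltonian \(H(\omega)=\mathcal{V}(L(\omega))\), on the unit torus. The target is proportional to \(z^{N}e^{-\beta\mathcal{V}(L(\omega))}\) with respect to a unit-rate Poisson process; all parameter values are arbitrary and the halo volume is naively recomputed on a pixel grid at every step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy birth-death Metropolis sampler for the Widom-Rowlinson special case
# (theta1 = theta2 = 0, fixed radius R) on the torus [0,1)^2. Arbitrary values.
M, R, z, beta = 64, 0.08, 60.0, 60.0
xs = (np.arange(M) + 0.5) / M
X, Y = np.meshgrid(xs, xs, indexing="ij")

def halo_volume(points):
    """Pixel-count approximation of the Lebesgue measure of the halo."""
    covered = np.zeros((M, M), dtype=bool)
    for (px, py) in points:
        dx = np.minimum(np.abs(X - px), 1.0 - np.abs(X - px))  # torus metric
        dy = np.minimum(np.abs(Y - py), 1.0 - np.abs(Y - py))
        covered |= dx ** 2 + dy ** 2 <= R ** 2
    return covered.mean()

points, V = [], 0.0
for step in range(5000):
    if rng.random() < 0.5:                     # propose the birth of a point
        proposal = points + [tuple(rng.random(2))]
        ratio = z / len(proposal)
    elif points:                               # propose the death of a point
        proposal = points.copy()
        proposal.pop(rng.integers(len(proposal)))
        ratio = len(points) / z
    else:
        continue
    V_new = halo_volume(proposal)
    if rng.random() < ratio * np.exp(-beta * (V_new - V)):
        points, V = proposal, V_new            # accept the move

print(len(points), V)  # number of points and covered volume at the end
```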
## 3 Results

We say that a liquid-gas phase transition occurs at \(\theta_{1},\theta_{2},\,\beta\) and \(z\) when the corresponding set of Gibbs measures \(\mathcal{G}(\theta_{1},\theta_{2},\,\beta,z)\) contains at least two Gibbs measures \(P,Q\) with different intensities, i.e.

\[\rho(P):=\mathbf{E}_{P}(N_{[0,1]^{d}})>\rho(Q). \tag{4}\]

In particular the set \(\mathcal{G}(\theta_{1},\theta_{2},\,\beta,z)\) is not reduced to a singleton. This phenomenon is also called a first-order phase transition, since the non-uniqueness of Gibbs measures is coupled with a discontinuity of the intensity. Other kinds of phase transition are possible. Our main result states that such a liquid-gas phase transition occurs for a large range of parameters. We denote by \(\theta_{1}^{*}\) the constant \(R_{0}\frac{\mathcal{V}(B(0,1))}{S(B(0,1))}=\frac{R_{0}}{d}\). In dimension \(d=2\), for any \(\theta_{1}>-\theta_{1}^{*}\) we define the constant \(\theta_{2}^{*}(\theta_{1})>0\) as in equation (13) below. At first glance, the explicit value of \(\theta_{2}^{*}(\theta_{1})\) is not so important and its existence with a non-null value is already interesting in the next theorem.

**Theorem 1**.: _Let \(\theta_{1},\theta_{2}\) be two parameters such that \(\theta_{1}>-\theta_{1}^{*}\) and \(0\leq\theta_{2}<\theta_{2}^{*}(\theta_{1})\) (recall that \(\theta_{2}=0\) if \(d\geq 3\)). Then there exists \(\beta_{c}(\theta_{1},\theta_{2})>0\) such that for all \(\beta>\beta_{c}(\theta_{1},\theta_{2})\), there exists \(z_{\beta}^{c}>0\) for which a liquid-gas phase transition occurs: i.e. there exist \(P,Q\in\mathcal{G}(\theta_{1},\theta_{2},\,\beta,z_{\beta}^{c})\) with \(\rho(P)>\rho(Q)\). The critical activity \(z_{\beta}^{c}\) is not explicit but we know that \(|z_{\beta}^{c}-\beta|\) tends to zero exponentially fast when \(\beta\) tends to infinity._

It is important to note that the liquid-gas phase transition is proved only for a special value \(z_{\beta}^{c}>0\), the other parameters \(\theta_{1},\theta_{2},\beta\) being fixed. We are not able to prove that \(z_{\beta}^{c}\) is the only value for which the phase transition occurs; it is probably not true in general. But, as a corollary of the Pirogov-Sinai-Zahradnik technology we use, it is the case in a neighbourhood of \(z_{\beta}^{c}\). This local uniqueness is typical of a first-order liquid-gas phase transition. Note that in the setting of the Widom-Rowlinson model (i.e. \(R_{0}=R_{1}\) and \(\theta_{1}=\theta_{2}=0\)), the critical activity \(z_{\beta}^{c}\) is unique and equal to \(\beta\) [21, 24].

The proof of Theorem 1 relies on the study of the regularity of the so-called pressure \(\psi\) defined by the following thermodynamic limit

\[\psi(z,\beta):=\lim_{n\to+\infty}\frac{1}{\beta|\Delta_{n}|}\ln(Z_{\Delta_{n}}), \tag{5}\]

where \((\Delta_{n})\) is the sequence of boxes \([-n,n]^{d}\) and \(Z_{\Delta_{n}}\) is the partition function with free boundary condition (i.e. \(Z_{\Delta_{n}}(\emptyset)\)). This limit always exists as a consequence of the sub-additivity of the sequence \((\ln Z_{\Delta_{n}})_{n\in\mathbb{N}}\). As a corollary of the Pirogov-Sinai-Zahradnik technology, we obtain the non-regularity of the pressure at the critical point.
**Proposition 1**.: _Under the same assumptions as Theorem 1, i.e. for all \(\theta_{1},\theta_{2},\beta\) such that \(\theta_{1}>-\theta_{1}^{*}\), \(0\leq\theta_{2}<\theta_{2}^{*}(\theta_{1})\) (recall that \(\theta_{2}=0\) if \(d\geq 3\)) and \(\beta>\beta_{c}(\theta_{1},\theta_{2})\),_

\[\frac{\partial\psi}{\partial z^{+}}(z_{\beta}^{c},\beta)>\frac{\partial\psi}{\partial z^{-}}(z_{\beta}^{c},\beta).\]

_Furthermore, there exist two Gibbs measures \(P^{+},P^{-}\in\mathcal{G}(\theta_{1},\theta_{2},\beta,z_{\beta}^{c})\) such that_

\[\rho(P^{-})=z+z\beta\frac{\partial\psi}{\partial z^{-}}(z_{\beta}^{c},\beta)\quad\text{and}\quad\rho(P^{+})=z+z\beta\frac{\partial\psi}{\partial z^{+}}(z_{\beta}^{c},\beta). \tag{6}\]

## 4 Proof

We start by giving a sketch of the proof and the plan of its development. Our aim is to prove Proposition 1, as it is clear that this result implies Theorem 1.

In Section 4.1, we perform a coarse graining of the Hamiltonian. We decompose the energy function spatially along a collection of tiles that pave \(\mathbb{R}^{d}\), so that we have

\[H=\sum_{i\in\mathbb{Z}^{d}}H_{i}.\]

We observe that for small tiles the presence of at least one particle inside a tile \(i\in\mathbb{Z}^{d}\) implies that the spatial energy is \(H_{i}=\delta^{d}\). We call this property the saturation of the energy: no matter how many points we put in the tile, this value remains constant. On the other hand, the spatial energy of an empty tile, when the other tiles around it are empty, is equal to \(0\). Therefore, when the status of the tiles is homogeneous it becomes easy to compute the energy of a configuration. Hence the study of the behaviour in the mixed areas is necessary. In Section 4.2, we properly define the contours, \(\Gamma=\{\gamma_{1},\cdots,\gamma_{n}\}\), in order to match the intuition of interface areas between homogeneous regions. In Section 4.3 we prove that the partition function with boundary condition associated to either state (\(\#=1\) wired, \(\#=0\) free) can be written as a polymer development, i.e.

\[Z_{\Lambda}^{\#}=\sum_{\Gamma}\prod_{\gamma\in\Gamma}w_{\gamma}^{\#}\]

where the \(w_{\gamma}^{\#}\) are weights associated to each contour. In Section 4.4, for a large set of parameters \((\theta_{1},\theta_{2})\) we prove that the spatial energy of any contour \(\gamma\) and any configuration \(\omega\) achieving the contour verifies

\[H_{\gamma}(\omega)\geq|\gamma_{1}|\delta^{d}+\rho_{0}|\gamma|\]

where \(\gamma_{1}\subset\gamma\) corresponds to the tiles of the contour that contain a particle and \(\rho_{0}\) is a strictly positive constant. This type of result is in the same spirit as the Peierls condition for contours on the lattice and comes from our spatial decomposition of the energy. If the size of the tiles is well tuned, some empty tiles have the same spatial energy as a non-empty tile. We call this property the energy from vacuum. Using this inequality we prove, in Section 4.5, that for \(\beta\) large enough and for \(z\) in the vicinity of \(\beta\) the weights are \(\tau\)-stable, i.e.

\[w_{\gamma}^{\#}\leq e^{-\tau|\gamma|}\]

where \(\tau>0\) depends on the value of \(\beta\) and can be made as large as we want provided that \(\beta\) is large. We prove this using truncated weights and pressures, as usual with the PSZ theory.
This exponential decay of the weights is needed to use results on cluster expansion and write the partition function as follows:

\[Z_{\Lambda}^{\#}=e^{\psi|\Lambda|+\Delta_{\Lambda}}\]

where \(\psi\) is the pressure and \(\Delta_{\Lambda}\) is a perturbation term of the order of the surface. Finally, in Section 4.6, we deduce the non-differentiability of the pressure at a critical activity and we deduce the liquid-gas phase transition as stated in Proposition 1.

### Coarse Graining decomposition of the energy

Let \(\delta>0\). For any \(i\in\mathbb{Z}^{d}\), we call a tile the set

\[T_{i}:=\tau_{i\delta}\left(\left[-\frac{\delta}{2},\frac{\delta}{2}\right[^{d}\right),\]

where \(\tau_{i\delta}\) is the translation by the vector \(i\delta\). For any \(\Lambda\subset\mathbb{Z}^{d}\) we denote by \(\widehat{\Lambda}:=\bigcup_{i\in\Lambda}T_{i}\). We call a facet \(F\) any non-empty intersection between two closed tiles in \((\overline{T}_{i})_{i\in\mathbb{Z}^{d}}\). Clearly the dimension of a facet can be any integer between \(0\) and \(d\). We denote by \(\mathcal{F}\) the set of all facets. The energy of a tile \(H_{i}\) is given by

\[\forall\omega\in\Omega_{f},\quad H_{i}(\omega)=\mathcal{V}(L(\omega)\cap T_{i})+\theta_{1}S_{i}(L(\omega))+\theta_{2}\chi_{i}(L(\omega)),\]

where

\[S_{i}(A)=\sum_{k=d-1}^{d}\sum_{\begin{subarray}{c}F\in\mathcal{F}\\ \dim(F)=k,\,F\cap T_{i}\neq\emptyset\end{subarray}}(-1)^{d-k}S(A\cap F)\]

\[\chi_{i}(A)=\sum_{k=0}^{d}\sum_{\begin{subarray}{c}F\in\mathcal{F}\\ \dim(F)=k,\,F\cap T_{i}\neq\emptyset\end{subarray}}(-1)^{d-k}\chi(A\cap F).\]

Furthermore, we can observe that \(S_{i}(\overline{T}_{i})=0\) and \(\chi_{i}(\overline{T}_{i})=0\). This is due to the fact that for a \((d-1)\)-dimensional facet \(F\), we have \(S(F)=2\lambda^{(d-1)}(F)\) where \(\lambda^{(d-1)}\) is the \((d-1)\)-dimensional Lebesgue measure. Therefore, for any configuration \(\omega\in\Omega_{f}\) such that \(T_{i}\subset L(\omega)\) we have \(H_{i}(\omega)=\mathcal{V}(T_{i})=\delta^{d}\).

**Lemma 2**.: _For every finite configuration \(\omega\in\Omega_{f}\),_

\[H(\omega)=\sum_{i\in\mathbb{Z}^{d}}H_{i}(\omega).\]

The proof is a direct consequence of the additivity of the Minkowski functionals. For every \(\Lambda\subset\mathbb{Z}^{d}\), we use the notation

\[H_{\Lambda}:=\sum_{i\in\Lambda}H_{i}.\]

### Contours

Let us consider the lattice \(\mathbb{Z}^{d}\) underlying the tiles \((T_{i})_{i\in\mathbb{Z}^{d}}\), where two sites \(i,j\in\mathbb{Z}^{d}\) are connected if \(\left\|i-j\right\|_{\infty}=1\). We call spin configuration the map

\[\sigma:\Omega_{f}\times\mathbb{Z}^{d}\to\{0,1\},\qquad(\omega,i)\mapsto\begin{cases}0&\text{if }\omega_{T_{i}}=\emptyset\\ 1&\text{otherwise}\end{cases}.\]

In the following we use the notation \(\#\) for either \(0\) or \(1\).

**Definition 4**.: _Let \(L>0\) and \(\omega\in\Omega\). A site \(i\in\mathbb{Z}^{d}\) is said to be \(\#\)-correct if for all sites \(j\) such that \(\left\|i-j\right\|\leq L\), we have \(\sigma(\omega,j)=\#\). A site \(i\) is non-correct when it is neither \(0\)-correct nor \(1\)-correct. The set of all non-correct sites is denoted by \(\overline{\Gamma}\).
We can partition \(\overline{\Gamma}\) into its maximal connected components, which we denote by \(\overline{\gamma}\) and call contours without type._

Since we are considering only finite configurations, the number of connected components is finite and, for any \(\overline{\gamma}\), the complement has a finite number of maximal connected components, which we denote by \(A\); in particular there is exactly one unbounded connected component, which we call the exterior of \(\overline{\gamma}\) and denote by \(ext(\overline{\gamma})\).

**Definition 5**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) and \(L>0\). We define the exterior boundary \(\partial_{ext}\Lambda\) and the interior boundary \(\partial_{int}\Lambda\) of \(\Lambda\) as_

\[\partial_{ext}\Lambda=\{j\in\Lambda^{c},d_{2}(j,\Lambda)\leq L\}\]
\[\partial_{int}\Lambda=\{i\in\Lambda,d_{2}(i,\Lambda^{c})\leq L+1\},\]

_where \(d_{2}\) is the Euclidean distance in \(\mathbb{R}^{d}\)._

**Lemma 3**.: _Let \(\omega\in\Omega_{f}\) be a finite configuration and \(\overline{\gamma}\) any associated contour without type. Let \(A\) be a maximal connected component of \(\overline{\gamma}^{c}\); then there is a unique \(\#\in\{0,1\}\) such that for all \(i\in\partial_{ext}A\cup\partial_{int}A\), \(\sigma(\omega,i)=\#\). The value of the spin on the boundary is called the label of \(A\) and is denoted by \(\operatorname{Label}(A)\)._

The proof of this lemma is classical and corresponds to Lemma 7.23 in [7]. It relies on the fact that each of the sets \(\partial_{int}A\) and \(\partial_{ext}A\) is connected and that the sites directly in contact with the contour are correct. Therefore there can be only one spin \(\#\in\{0,1\}\), otherwise we would have two correct sites of opposite spins directly connected.

**Definition 6**.: _Let \(\omega\in\Omega_{f}\). We call a contour \(\gamma\) the pair \((\overline{\gamma},(\#_{i})_{i\in\overline{\gamma}})\) where for all sites \(i\in\overline{\gamma}\), \(\sigma(\omega,i)=\#_{i}\). We denote by \(\Gamma(\omega)\) the set of all contours that appear with the configuration \(\omega\)._

Furthermore, for a contour \(\gamma=(\overline{\gamma},(\#_{j})_{j\in\overline{\gamma}})\) we call the type of \(\gamma\) the label of \(ext(\overline{\gamma})\), \(\operatorname{Type}(\gamma):=\operatorname{Label}(ext(\overline{\gamma}))\). And we call the interiors of a contour \(\gamma\) the sets

\[\operatorname{Int}_{\#}\gamma=\bigcup_{\begin{subarray}{c}A\neq ext(\overline{\gamma})\\ \operatorname{Label}(A)=\#\end{subarray}}A\quad\text{ and }\quad\operatorname{Int}\gamma=\operatorname{Int}_{0}\gamma\cup\operatorname{Int}_{1}\gamma.\]

Let \(\omega\in\Omega_{f}\) be a finite configuration. A contour \(\gamma\in\Gamma(\omega)\) is said to be external when for any other contour \(\gamma^{\prime}\in\Gamma(\omega)\), \(\overline{\gamma}\subset ext(\overline{\gamma}^{\prime})\). We denote by \(\Gamma_{ext}\) the subset of \(\Gamma\) comprised only of external contours. Until now we have only considered collections of contours that can be achieved by a finite configuration of points. But classically in the Pirogov-Sinai-Zahradnik theory we need to introduce abstract collections of contours which are not achievable by any configuration. This is due to the cluster expansion development of the partition function using geometrically compatible collections of contours.
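Before moving to abstract collections, the concrete discretisation above is easy to picture algorithmically. The following minimal sketch (our own illustration, with arbitrary values of \(\delta\), \(L\) and window size; boundary effects are ignored since we work on a finite window of \(\mathbb{Z}^{2}\)) computes the tile spins \(\sigma(\omega,i)\), the non-correct sites and their maximal connected components, i.e. the contours without type.

```python
import numpy as np
from scipy.ndimage import binary_dilation, label

# Minimal sketch (our illustration): tile spins, non-correct sites and
# contours without type for a finite 2d window of tiles.
delta, L, grid = 0.25, 2, 40            # tiles indexed by {0,...,grid-1}^2

def spins(points):
    """sigma(omega, i) = 1 iff some point of omega falls in the tile T_i."""
    s = np.zeros((grid, grid), dtype=int)
    for (x, y) in points:
        s[int(x // delta), int(y // delta)] = 1
    return s

def non_correct(s):
    """Sites that are neither 0-correct nor 1-correct (the set Gamma-bar):
    a site is #-correct if every site within sup-distance L has spin #."""
    box = np.ones((2 * L + 1, 2 * L + 1), dtype=bool)        # sup-norm ball
    one_correct = ~binary_dilation(s == 0, structure=box)    # no 0 within L
    zero_correct = ~binary_dilation(s == 1, structure=box)   # no 1 within L
    return ~(one_correct | zero_correct)

rng = np.random.default_rng(1)
pts = rng.random((120, 2)) * grid * delta               # a toy configuration
bad = non_correct(spins(pts))
components, n = label(bad, structure=np.ones((3, 3)))   # ||i - j||_inf = 1
print(n, "contours without type,", int(bad.sum()), "non-correct sites")
```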
**Definition 7**.: _An abstract set of contours is a set of contours \(\{\gamma_{i}=(\overline{\gamma}_{i},(\#_{j})_{j\in\overline{\gamma}_{i}}),i\in I\subset\mathbb{N}^{*}\}\) for which each contour \(\gamma_{i}\) is achievable by some configuration \(\omega_{i}\); we do not assume global achievability. We denote by \(\Gamma\) such a set of contours. Moreover this set \(\Gamma\) is called geometrically compatible if all contours have the same type and for all \(\{i,j\}\subset I\), \(d_{\infty}(\gamma_{i},\gamma_{j})>1\). For \(\Lambda\subset\mathbb{Z}^{d}\), we denote by \(C^{\#}(\Lambda)\) the collection of geometrically compatible sets of contours of type \(\#\) such that \(d_{\infty}(\gamma_{i},\Lambda^{c})>1\)._

We allow the set \(\Gamma=\{(\emptyset,\emptyset)\}\) to belong to the collection \(C^{\#}(\Lambda)\) for any \(\Lambda\); it corresponds to the event where not a single contour appears in \(\Lambda\). There are several interesting sub-collections of \(C^{\#}(\Lambda)\), one of them being the collection of sets whose contours are all external.

**Definition 8**.: _Let \(\Lambda\subset\mathbb{Z}^{d}\) be finite. We denote by \(C^{\#}_{ext}(\Lambda)\subset C^{\#}(\Lambda)\) the sub-collection of sets \(\Gamma\) in which every contour \(\gamma\in\Gamma\) is external._

In a way, in the collection \(C^{\#}_{ext}(\Lambda)\) we are considering sets of contours with only one layer. In general, if we take a geometrically compatible abstract set of contours \(\Gamma\), a particular contour in this set can be encapsulated in the interior of another, creating layers upon layers of contours. One method of exploration of the contours is to proceed from the external layer and peel each layer to discover the contours hidden underneath. Another sub-collection of \(C^{\#}(\Lambda)\) is the collection of sets such that for all contours the size of the interior is bounded.

**Definition 9**.: _A contour \(\gamma\) is of class \(k\in\mathbb{N}\) when \(|\operatorname{Int}\gamma|=k\). Let \(n\in\mathbb{N}\) and \(\Lambda\subset\mathbb{Z}^{d}\) finite. We denote by \(C^{\#}_{n}(\Lambda)\subset C^{\#}(\Lambda)\) the collection of sets of contours \(\Gamma\) such that every \(\gamma\in\Gamma\) is of class \(k\leq n\)._

### Partition function and boundary condition

We consider two Quermass point processes, with a free and a wired boundary condition respectively. The probability measures \(P^{\#}_{\Lambda}\) associated to each of them are given by

\[P^{\#}_{\Lambda}(d\omega)=\frac{1}{Z^{\#}_{\Lambda}}e^{-\beta H_{\Lambda}(\omega)}\mathbb{1}_{\{\forall i\in\partial_{int}\Lambda,\sigma(\omega,i)=\#\}}\Pi^{z}_{\widehat{\Lambda}}(d\omega) \tag{7}\]

where

\[Z^{\#}_{\Lambda}=\int e^{-\beta H_{\Lambda}(\omega)}\mathbb{1}_{\{\forall i\in\partial_{int}\Lambda,\sigma(\omega,i)=\#\}}\Pi^{z}_{\widehat{\Lambda}}(d\omega). \tag{8}\]

We prove that the two infinite volume Gibbs measures, obtained by taking the thermodynamic limit for each boundary condition, yield different intensities.
First we have a standard lemma which ensures that the pressure does not depend on the boundary conditions.

**Lemma 4**.: _For \(\delta\leq\frac{R_{0}}{2\sqrt{d}}\), \(L\geq\frac{2R_{1}}{\delta}\) and any \(\#\in\{0,1\}\) we have \(\psi=\psi^{\#}\), where \(\psi^{\#}:=\lim_{n\to\infty}\frac{\ln Z^{\#}_{\Lambda_{n}}}{\beta|\Lambda_{n}|\delta^{d}}\) and \(\Lambda_{n}=[\![-n,n]\!]^{d}\)._

Proof.: For any configuration \(\omega\in\Omega_{f}\) such that for all sites \(i\in\partial_{int}\Lambda_{n}\), \(\sigma(\omega,i)=1\) and \(\omega_{\Lambda_{n}^{c}}=\emptyset\), we know that no holes are created by the halo outside of \(\Lambda_{n}\) and therefore

\[H_{\partial_{ext}\Lambda_{n}}(\omega)\leq\begin{cases}|\partial_{ext}\Lambda_{n}|\delta^{d}&\text{ when }\theta_{1}\leq 0\\ |\partial_{ext}\Lambda_{n}|\delta^{d}+\theta_{1}S_{\partial_{ext}\Lambda_{n}}(L(\omega))&\text{ when }\theta_{1}>0\end{cases}.\]

The boundary of the halo \(L(\omega)\) outside \(\Lambda_{n}\), appearing in the computation of \(S_{\partial_{ext}\Lambda_{n}}(L(\omega))\), is a union of spherical caps built via some marked points \((x_{1},r_{1}),\ldots,(x_{m},r_{m})\in\omega\). We denote by \(a_{i}\in[0,1]\) the ratio of the surface of the \(i\)th spherical cap with respect to the total surface of its sphere. We have

\[S_{\partial_{ext}\Lambda_{n}}(L(\omega))=\sum_{i=1}^{m}a_{i}S(B(x_{i},r_{i}))=\sum_{i=1}^{m}a_{i}\frac{S(B(x_{i},r_{i}))}{\mathcal{V}(B(x_{i},r_{i}))}\mathcal{V}(B(x_{i},r_{i}))\]
\[\leq\frac{S(B(0,1))}{R_{0}\mathcal{V}(B(0,1))}\sum_{i=1}^{m}a_{i}\mathcal{V}(B(x_{i},r_{i}))\leq\frac{S(B(0,1))}{R_{0}\mathcal{V}(B(0,1))}|\partial_{int}\Lambda_{n}\cup\partial_{ext}\Lambda_{n}|\delta^{d}.\]

As a consequence there exists \(c>0\) such that

\[Z_{\widehat{\Lambda}_{n}}\geq\int e^{-\beta H_{\partial_{ext}\Lambda_{n}}(\omega)}e^{-\beta H_{\Lambda_{n}}(\omega)}\mathbb{1}_{\{\forall i\in\partial_{int}\Lambda_{n},\sigma(\omega,i)=1\}}\Pi^{z}_{\widehat{\Lambda}_{n}}(d\omega)\geq e^{-c|\partial_{ext}\Lambda_{n}\cup\partial_{int}\Lambda_{n}|\delta^{d}}Z^{(1)}_{\Lambda_{n}}.\]

Therefore we have \(\psi\geq\psi^{(1)}\). Let us consider the event

\[E_{n}=\{\omega\in\Omega_{f}:\forall i\in\Lambda_{n}\text{ with }L<d_{2}(i,\Lambda_{n}^{c})\leq 2L,\ \sigma(\omega,i)=0\}.\]

Since \(\delta\leq\frac{R_{0}}{2\sqrt{d}}\) and \(L\geq\frac{2R_{1}}{\delta}\) we have

\[Z^{(1)}_{\Lambda_{n}}\geq\int e^{-\beta(|\partial_{int}\Lambda_{n}|\delta^{d}+H_{\partial_{int}\Lambda_{n-L}}(\omega_{\widehat{\partial_{int}\Lambda_{n}}}))}\mathbb{1}_{\{\forall i\in\partial_{int}\Lambda_{n},\sigma(\omega,i)=1\}}e^{-\beta H_{\Lambda_{n-L}}(\omega_{\widehat{\Lambda}_{n-L}})}\mathbb{1}_{E_{n}}\Pi^{z}_{\widehat{\Lambda}_{n}}(d\omega).\]

Similarly as in the previous case, we can find \(c>0\) such that

\[H_{\partial_{int}\Lambda_{n-L}}(\omega_{\widehat{\partial_{int}\Lambda_{n}}})\leq c|\partial_{int}\Lambda_{n}\cup\partial_{ext}\Lambda_{n}|\delta^{d}\]

and therefore we get

\[Z^{(1)}_{\Lambda_{n}}\geq g_{1}^{|\partial_{int}\Lambda_{n}|}e^{-\beta c|\partial_{int}\Lambda_{n}\cup\partial_{ext}\Lambda_{n}|\delta^{d}}Z^{(0)}_{\Lambda_{n-L}}.\]

This implies that \(\psi^{(1)}\geq\psi^{(0)}\). Finally we simply have

\[Z^{(0)}_{\Lambda_{n}}\geq g_{0}^{|\partial_{int}\Lambda_{n}|}Z_{\Lambda_{n-L}}.\]

Therefore \(\psi^{(0)}\geq\psi\), and this finishes the proof of the lemma. 

We also have a crucial proposition which provides a representation of the partition function as a polymer development.
**Proposition 2** (Polymer development).: _Let \(R_{0}\geq 2\delta\sqrt{d}>0\) and \(\delta L>2R_{1}\). For all \(\Lambda\subset\mathbb{Z}^{d}\) finite we have_

\[\Phi_{\Lambda}^{\#}:=g_{\#}^{-|\Lambda|}Z_{\Lambda}^{\#}=\sum_{\Gamma\in C^{\#}(\Lambda)}\prod_{\gamma\in\Gamma}w_{\gamma}^{\#}\]

_where_

* \(g_{0}=e^{-z\delta^{d}}\) _and_ \(g_{1}=e^{-\beta\delta^{d}}(1-e^{-z\delta^{d}})\),
* \(w_{\gamma}^{\#}=g_{\#}^{-|\overline{\gamma}|}I_{\gamma}\frac{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#^{*}}}{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}}\) _is called the weight of the contour_ \(\gamma\)_, where_ \(\#^{*}\) _denotes the type opposite to_ \(\#\)_,_
* \(I_{\gamma}=\int e^{-\beta H_{\overline{\gamma}}(\omega)}\mathbb{1}_{\{\forall i\in\overline{\gamma},\sigma(\omega,i)=\#_{i}\}}\Pi_{\widehat{\gamma}}^{z}(d\omega)\)_._

Proof.: We follow a development similar to the one done in Chapter 7 of [7], with an adaptation to the setting of our model; the main difference is that the states of the sites are random and have to be integrated under the Poisson measure. We can decompose the partition function \(Z_{\Lambda}^{\#}\) according to the external contours \(C_{ext}^{\#}(\Lambda)\):

\[Z_{\Lambda}^{\#}=\sum_{\Gamma\in C_{ext}^{\#}(\Lambda)}\int e^{-\beta H_{\Lambda}(\omega)}\mathbb{1}_{\{\forall i\in\partial_{int}\Lambda,\sigma(\omega,i)=\#\}}\mathbb{1}_{\{\Gamma_{ext}(\omega)=\Gamma\}}\Pi_{\widehat{\Lambda}}^{z}(d\omega).\]

For any \(\Gamma\in C_{ext}^{\#}(\Lambda)\) we can partition \(\Lambda\) in the following way

\[\Lambda=\Lambda_{ext}\cup\bigcup_{\gamma\in\Gamma}\left(\overline{\gamma}\cup\operatorname{Int}_{0}\gamma\cup\operatorname{Int}_{1}\gamma\right)\]

where \(\Lambda_{ext}=\bigcap_{\gamma\in\Gamma}ext(\overline{\gamma})\cap\Lambda\). For any finite configuration \(\omega\in\Omega_{f}\) such that \(\Gamma_{ext}(\omega)=\Gamma\), we have

\[H_{\Lambda_{ext}}(\omega)=H_{\Lambda_{ext}}(\omega_{\widehat{\Lambda}_{ext}})=\mathcal{V}(\widehat{\Lambda}_{ext})\mathbb{1}_{(\#=1)} \tag{9}\]
\[H_{\overline{\gamma}}(\omega)=H_{\overline{\gamma}}(\omega_{\widehat{\gamma}}) \tag{10}\]
\[H_{\operatorname{Int}_{\#}\gamma}(\omega)=H_{\operatorname{Int}_{\#}\gamma}(\omega_{\widehat{\operatorname{Int}_{\#}\gamma}}). \tag{11}\]

By construction, we know that for all \(i\in\Lambda_{ext}\), \(\sigma(\omega,i)=\#\) and \(i\) is \(\#\)-correct. In the first case \(\#=1\): since \(R_{0}\geq\delta\sqrt{d}\), \(\omega_{T_{i}}\neq\emptyset\) implies that \(H_{i}(\omega)=H_{i}(\omega_{T_{i}})=\mathcal{V}(T_{i})\). In the second case \(\#=0\): since the sites \(i\) in \(\Lambda_{ext}\) are \(0\)-correct and \(\delta L>2R_{1}\), we have \(H_{i}(\omega)=H_{i}(\omega_{T_{i}})=0\). Furthermore, we know that a tile adjacent to an external contour \(\gamma\) is correct. For \(A=\operatorname{Int}_{0}\gamma\), or \(A=\Lambda_{ext}\) when \(\#=0\), because of the previous fact we know that \(d_{2}(\omega_{\widehat{\gamma}},\widehat{A})>2R_{1}\) and \(d_{2}(\omega_{\widehat{A}},\widehat{\gamma})>2R_{1}\). Therefore we have \(L(\omega_{\widehat{\gamma}})\cap\widehat{A}=\emptyset\) and \(L(\omega_{\widehat{A}})\cap\widehat{\gamma}=\emptyset\). And if we consider \(B=\operatorname{Int}_{1}\gamma\), or \(B=\Lambda_{ext}\) when \(\#=1\), the energy of the tiles in \(\partial_{int}B\) and \(\partial_{ext}B\) is determined, since we have a point in those tiles and \(R_{0}\geq\delta\sqrt{d}\). Furthermore, since \(\delta L>2R_{1}\), what happens in the tiles \(\overline{\gamma}\backslash\partial_{ext}B\) has no consequence on \(B\), and vice versa what happens in \(B\backslash\partial_{int}B\) does not affect the tiles in \(\overline{\gamma}\).
Using these observations on the way the energy behaves according to the contours, and using the independence of the Poisson point process on disjoint areas, we have

\[Z_{\Lambda}^{\#}=\sum_{\Gamma\in C_{ext}^{\#}(\Lambda)}\left(\int e^{-\beta H_{\Lambda_{ext}}(\omega)}\mathbb{1}_{\{\forall i\in\Lambda_{ext},\sigma(\omega,i)=\#\}}\Pi_{\widehat{\Lambda}_{ext}}^{z}(d\omega)\right)\]
\[\quad\times\prod_{\gamma\in\Gamma}\left(\int e^{-\beta H_{\overline{\gamma}}(\omega)}\mathbb{1}_{\{\forall i\in\overline{\gamma},\sigma(\omega,i)=\#_{i}\}}\Pi_{\widehat{\gamma}}^{z}(d\omega)\right)\]
\[\quad\times\prod_{\gamma\in\Gamma}\prod_{\#^{\prime}\in\{0,1\}}\left(\int e^{-\beta H_{\operatorname{Int}_{\#^{\prime}}\gamma}(\omega)}\mathbb{1}_{\{\forall i\in\partial_{int}\operatorname{Int}_{\#^{\prime}}\gamma,\sigma(\omega,i)=\#^{\prime}\}}\Pi_{\widehat{\operatorname{Int}_{\#^{\prime}}\gamma}}^{z}(d\omega)\right). \tag{12}\]

In this development we recognize the partition functions \(Z_{\operatorname{Int}_{\#^{\prime}}\gamma}^{\#^{\prime}}\) and the term \(I_{\gamma}\). Finally, we have by independence of the Poisson point process on disjoint areas

\[\int e^{-\beta H_{\Lambda_{ext}}(\omega)}\mathbb{1}_{\{\forall i\in\Lambda_{ext},\sigma(\omega,i)=\#\}}\Pi_{\widehat{\Lambda}_{ext}}^{z}(d\omega)=\left(\int e^{-\beta H_{0}(\omega)}\mathbb{1}_{\{\sigma(\omega,0)=\#\}}\Pi_{T_{0}}^{z}(d\omega)\right)^{|\Lambda_{ext}|}.\]

According to the value \(\#\in\{0,1\}\) we have

\[\int e^{-\beta H_{0}(\omega)}\mathbb{1}_{\{\sigma(\omega,0)=1\}}\Pi_{T_{0}}^{z}(d\omega)=e^{-\beta\delta^{d}}(1-e^{-z\delta^{d}})=g_{1}\]
\[\int e^{-\beta H_{0}(\omega)}\mathbb{1}_{\{\sigma(\omega,0)=0\}}\Pi_{T_{0}}^{z}(d\omega)=e^{-z\delta^{d}}=g_{0}.\]

In summary, we have

\[Z_{\Lambda}^{\#}=\sum_{\Gamma\in C_{ext}^{\#}(\Lambda)}g_{\#}^{|\Lambda_{ext}|}\prod_{\gamma\in\Gamma}I_{\gamma}Z_{\operatorname{Int}_{\#}\gamma}^{\#}Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#^{*}}\]
\[=\sum_{\Gamma\in C_{ext}^{\#}(\Lambda)}g_{\#}^{|\Lambda_{ext}|}\prod_{\gamma\in\Gamma}I_{\gamma}\frac{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#^{*}}}{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}}Z_{\operatorname{Int}_{\#}\gamma}^{\#}Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}\]
\[=\sum_{\Gamma\in C_{ext}^{\#}(\Lambda)}g_{\#}^{|\Lambda_{ext}|}\prod_{\gamma\in\Gamma}g_{\#}^{|\overline{\gamma}|}w_{\gamma}^{\#}Z_{\operatorname{Int}_{\#}\gamma}^{\#}Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}.\]

From the properties of our energy, we know that \(0<Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}<+\infty\) and thus the quotient introduced above is allowed. In terms of the quantity \(\Phi_{\Lambda}^{\#}\) we have

\[\Phi_{\Lambda}^{\#}=\sum_{\Gamma\in C_{ext}^{\#}(\Lambda)}\prod_{\gamma\in\Gamma}w_{\gamma}^{\#}\Phi_{\operatorname{Int}_{\#}\gamma}^{\#}\Phi_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}.\]

We can iterate the same computation for \(\Phi_{\operatorname{Int}_{\#}\gamma}^{\#}\) and \(\Phi_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}\) until we empty the interiors of the contours, and thus we obtain

\[\Phi_{\Lambda}^{\#}=\sum_{\Gamma\in C^{\#}(\Lambda)}\prod_{\gamma\in\Gamma}w_{\gamma}^{\#}.\]
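The structure of the final identity is easy to see on a toy example. The following sketch is entirely our own: the polymers, weights and incompatibility relation are made up and have nothing to do with the Quermass contours. It evaluates a sum of the form \(\sum_{\Gamma}\prod_{\gamma\in\Gamma}w_{\gamma}\) over pairwise-compatible collections by brute force, with the empty collection contributing \(1\), exactly as \(\Gamma=\{(\emptyset,\emptyset)\}\) does above.

```python
from itertools import combinations
from math import prod

# Toy polymer development (our illustration): Phi = sum over collections of
# pairwise-compatible polymers of the product of their weights. The four
# polymers, their weights and the incompatible pairs below are arbitrary.
polymers = ["a", "b", "c", "d"]
w = {"a": 0.10, "b": 0.05, "c": 0.02, "d": 0.08}
incompatible = {("a", "b"), ("c", "d")}     # pairs that cannot coexist

def compatible(collection):
    return all(tuple(sorted(pair)) not in incompatible
               for pair in combinations(collection, 2))

phi = sum(prod(w[p] for p in collection)    # empty collection contributes 1
          for r in range(len(polymers) + 1)
          for collection in combinations(polymers, r)
          if compatible(collection))
print(phi)
```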
### Energy and Peierls condition

The weight of any contour \(\gamma\) is a difficult object to evaluate, and our goal is to obtain a good exponential bound with respect to the volume of the contour. Recall the expression of the weight

\[w_{\gamma}^{\#}=g_{\#}^{-|\overline{\gamma}|}I_{\gamma}\frac{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#^{*}}}{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}}.\]

**Definition 10**.: _Let \(\tau>0\). The weight of a contour \(\gamma\) is said to be \(\tau\)-stable if_

\[w_{\gamma}^{\#}\leq e^{-\tau|\overline{\gamma}|}.\]

The ratio of the partition functions is the most difficult part to handle at this stage and will be further developed in the next section. For the moment, we want to control the quantity \(I_{\gamma}\), where \(\ln I_{\gamma}\) can be interpreted as the mean energy of the contour. The aim is to display some sort of Peierls condition on the mean energy of the contour and afterwards to prove the \(\tau\)-stability of the weights.

**Proposition 3**.: _For any \(\theta_{1}>-\theta_{1}^{*}:=-R_{0}\frac{\mathcal{V}(B(0,1))}{S(B(0,1))}\) (and, in dimension \(d=2\), any \(\theta_{2}\) such that \(0\leq\theta_{2}<\theta_{2}^{*}(\theta_{1})\), where \(\theta_{2}^{*}>0\) is defined in (13)), there exist \(\delta\in\left]0,\frac{R_{0}}{2\sqrt{d}}\right[\), \(K>0\) and \(\rho_{0}>0\) such that for every contour \(\gamma\) and every \(\beta>0\)_

\[I_{\gamma}\leq g_{0}^{|\overline{\gamma}_{0}|}g_{1}^{|\overline{\gamma}_{1}|}e^{-\beta\rho_{0}|\overline{\gamma}|}\]

_and_

\[\left|\frac{\partial I_{\gamma}}{\partial z}\right|\leq\left(1+\frac{K}{1-e^{-z\delta^{d}}}\right)|\overline{\gamma}|\delta^{d}g_{0}^{|\overline{\gamma}_{0}|}g_{1}^{|\overline{\gamma}_{1}|}e^{-\beta\rho_{0}|\overline{\gamma}|}.\]

Before going to the proof of Proposition 3, we need some intermediate geometrical results given in the next four lemmas. We start with the following observation. For any configuration \(\omega\in\Omega\), if there exists a pair \((i,j)\in(\mathbb{Z}^{d})^{2}\) with \(d_{\infty}(i,j)=1\) such that \(\sigma(\omega,i)=1\) and \(\sigma(\omega,j)=0\), we call this pair a domino. We assume that \(R_{0}\geq 2\delta\sqrt{d}>0\). Therefore for any configuration \(\omega\) and any domino \((i,j)\) we have \(H_{i}(\omega)=H_{j}(\omega)=\delta^{d}\), because the presence of a point inside the tile \(T_{i}\) ensures that the tiles \(T_{i}\) and \(T_{j}\) are covered. In particular the energy of this empty tile is positive, and we call this property the energy from vacuum. We need to show that the proportion of such pairs of tiles in the contour is at least proportional to the volume of the contour.

**Lemma 5**.: _There exists \(r_{0}>0\) such that for any contour \(\gamma\), the set of dominoes_

\[D(\gamma):=\{(i,j)\in\overline{\gamma}^{2},d_{\infty}(i,j)=1,\#_{i}=1,\#_{j}=0\}\]

_satisfies_

\[|D(\gamma)|\geq r_{0}|\overline{\gamma}|.\]

Proof.: We start by choosing arbitrarily in the contour \(\gamma\) a site \(k\) such that \(\#_{k}=1\). Since it is in a contour, it is non-correct, meaning that there is a site \(j\in\overline{\gamma}\) with \(\#_{j}=0\) and \(d_{2}(k,j)\leq L\); we choose \(j\) to be the closest such site to \(k\). Necessarily there is a site \(i\) adjacent to \(j\) such that \(\#_{i}=1\) (at least in the direction of \(k\)). We assign \(S_{1}=\{k\}\) and \(D_{1}=\{(i,j)\}\). We repeat the process to build \(S_{n+1}\) and \(D_{n+1}\) by choosing the sites inside \(\overline{\gamma}\setminus\bigcup_{k\in S_{n}}B(k,4L)\). There is some \(p\in\mathbb{N}\), the number of steps after which the process stops, since a contour contains only a finite number of sites with spin equal to \(1\). We define \(S(\gamma)=S_{p}\) and \(D(\gamma)=D_{p}\).
At this point we know that

\[\overline{\gamma}_{1}:=\{i\in\overline{\gamma},\#_{i}=1\}\subset\bigcup_{k\in S(\gamma)}B(k,4L)\cap\mathbb{Z}^{d}\]

and, by non-correctness of the sites with spin \(0\) in the contour, we have

\[\overline{\gamma}_{0}:=\{i\in\overline{\gamma},\#_{i}=0\}\subset\bigcup_{k\in S(\gamma)}B(k,5L)\cap\mathbb{Z}^{d}.\]

In summary we have

\[\overline{\gamma}\subset\bigcup_{k\in S(\gamma)}B(k,5L)\cap\mathbb{Z}^{d}.\]

Therefore the cardinalities verify

\[|\overline{\gamma}|\leq|S(\gamma)||B(0,5L)\cap\mathbb{Z}^{d}|.\]

By construction we have \(|S(\gamma)|=|D(\gamma)|\). Hence the inequality we are interested in:

\[|D(\gamma)|\geq r_{0}|\overline{\gamma}|\quad\text{where}\ \ r_{0}=\frac{1}{|B(0,5L)\cap\mathbb{Z}^{d}|}.\]

For any contour \(\gamma\) and any configuration \(\omega\) that achieves this contour, the dominoes allow us to find a non-negligible number of empty tiles covered by the halo of the configuration. Another way to find such tiles in the contour is to count the tiles covered by the halo which are close to the boundary of the halo. Indeed those tiles are guaranteed to be empty, otherwise the boundary would be further away.

**Lemma 6**.: _Let \(R_{0}\geq 2\delta\sqrt{d}>0\), and let us define \(\theta_{1}^{\delta}\) as_

\[\theta_{1}^{\delta}:=\inf_{\begin{subarray}{c}\omega\in\Omega_{f}\\ \gamma:S_{\overline{\gamma}}(L(\omega))>0\end{subarray}}\left\{\frac{V_{\omega,\gamma,\delta}}{S_{\overline{\gamma}}(L(\omega))}\right\}\]

_where_

\[V_{\omega,\gamma,\delta}=\max\left\{\mathcal{V}(T_{I}),I\subset\overline{\gamma},\forall i\in I,T_{i}\subset\partial L(\omega)\oplus B(0,R_{0})\cap L(\omega)\right\}.\]

_We have \(\theta_{1}^{\delta}>0\) and \(\theta_{1}^{\delta}\underset{\delta\to 0}{\rightarrow}\theta_{1}^{*}\) where \(\theta_{1}^{*}=R_{0}\frac{\mathcal{V}(B(0,1))}{S(B(0,1))}\)._

Proof.: For any finite configuration \(\omega\in\Omega_{f}\) and any contour \(\gamma\) created by this configuration such that \(S_{\overline{\gamma}}(L(\omega))>0\), we have the inequalities

\[V_{\omega,\gamma,\delta}^{+}\geq V_{\omega,\gamma,\delta}\geq V_{\omega,\gamma,\delta}^{-}\]

where

\[V_{\omega,\gamma,\delta}^{+}=\mathcal{V}(\partial L(\omega)\oplus B(0,R_{0})\cap L(\omega)\cap\widehat{\gamma})\]
\[V_{\omega,\gamma,\delta}^{-}=\mathcal{V}((\partial L(\omega)\oplus B(0,R_{0}-\delta))\backslash(\partial L(\omega)\oplus B(0,\delta))\cap L(\omega)\cap\widehat{\gamma}).\]

The boundary of the halo \(L(\omega)\) inside \(\overline{\gamma}\), appearing in the computation of \(S_{\overline{\gamma}}(L(\omega))\), is a union of spherical caps built via some marked points \((x_{1},r_{1}),\ldots,(x_{m},r_{m})\in\omega\). We denote by \(\alpha_{i}\in[0,1]\) the ratio of the surface of the \(i\)th spherical cap with respect to the total surface of its sphere.
Therefore, by a simple geometrical argument,

\[\frac{V^{-}_{\omega,\gamma,\delta}}{S_{\overline{\gamma}}(L(\omega))}\geq\frac{\mathcal{V}(B(0,1))}{S(B(0,1))}\left(\frac{\sum_{i=1}^{m}\alpha_{i}(r_{i}^{d}-(r_{i}-R_{0})^{d})}{\sum_{i=1}^{m}\alpha_{i}r_{i}^{d-1}}-\epsilon_{\omega,\gamma}(\delta)\right)\geq\frac{\mathcal{V}(B(0,1))}{S(B(0,1))}(R_{0}-\epsilon_{\omega,\gamma}(\delta))\]

where

\[\epsilon_{\omega,\gamma}(\delta)=\frac{\sum_{i=1}^{m}\alpha_{i}(r_{i}^{d}-(r_{i}-\delta)^{d}+(r_{i}-R_{0}+\delta)^{d}-(r_{i}-R_{0})^{d})}{\sum_{i=1}^{m}\alpha_{i}r_{i}^{d-1}}.\]

Therefore for all \(\omega\in\Omega_{f}\) and the associated contour \(\gamma\) we have

\[\liminf_{\delta\to 0}\frac{V_{\omega,\gamma,\delta}}{S_{\overline{\gamma}}(L(\omega))}\geq\frac{R_{0}\mathcal{V}(B(0,1))}{S(B(0,1))}\ \Rightarrow\ \liminf_{\delta\to 0}\theta_{1}^{\delta}\geq\theta_{1}^{*}.\]

Note that for \(\omega=\{(0,R_{0})\}\) we have \(\frac{V^{+}_{\omega,\gamma,\delta}}{S_{\overline{\gamma}}(L(\omega))}=\frac{R_{0}\mathcal{V}(B(0,1))}{S(B(0,1))}\). So

\[\frac{R_{0}\mathcal{V}(B(0,1))}{S(B(0,1))}\geq\inf_{\begin{subarray}{c}\omega\in\Omega_{f}\\ \gamma:S_{\overline{\gamma}}(L(\omega))>0\end{subarray}}\left\{\frac{V^{+}_{\omega,\gamma,\delta}}{S_{\overline{\gamma}}(L(\omega))}\right\}\]

which implies that \(\theta_{1}^{*}\geq\limsup_{\delta\to 0}\theta_{1}^{\delta}\).

**Lemma 7**.: _Let \(\delta L\geq 2R_{1}\) and \(d=2\). For any contour \(\gamma\) and any configuration \(\omega\) that achieves this contour we have_

\[\chi_{\overline{\gamma}}(L(\omega))\leq\frac{|\overline{\gamma}|\delta^{d}}{R_{0}^{d}\mathcal{V}(B(0,1))}.\]

Proof.: By the definition of contours and the conditions on \(\delta\) and \(L\), we have

\[\chi_{\overline{\gamma}}(L(\omega))=\chi_{\overline{\gamma}}(L(\omega_{\widehat{\gamma}}))\leq N_{cc}(L(\omega_{\widehat{\gamma}})).\]

Now the aim is to find, for each connected component \(C\) of \(L(\omega_{\widehat{\gamma}})\), a single point \((x,R)\in\omega_{\widehat{\gamma}}\) such that the ball \(B(x,R)\subset C\cap\widehat{\gamma}\). Note that a ball \(B(x,R)\subset C\) is not necessarily included inside the contour. If there exists such a ball not included in the contour, then there is a connected component \(A\subset\overline{\gamma}^{c}\) such that \(B(x,R)\cap\widehat{A}\neq\emptyset\). This gives the information that the site \(i\in\mathbb{Z}^{d}\) such that \(x\in T_{i}\) is included inside \(\partial_{ext}A\), and by Lemma 3, for all \(j\in\partial_{ext}A\) we have \(\sigma(\omega,j)=1\). Therefore all balls that are in the tiles corresponding to \(\partial_{ext}A\) belong to the same connected component of the halo. We choose a site \(j\in\partial_{ext}A\) such that \(d_{2}(j,A)=\lceil\frac{2R_{1}}{\delta}\rceil\), so that there exists \((y,R^{\prime})\in\omega_{T_{j}}\) such that \(B(y,R^{\prime})\subset\widehat{\gamma}\). Therefore we can replace the original representative of the connected component with one that is more suitable. With this procedure we have now built, for each connected component \(C\) of \(L(\omega_{\widehat{\gamma}})\), a single point \((x,R)\in\omega_{\widehat{\gamma}}\) such that the ball \(B(x,R)\subset C\cap\widehat{\gamma}\). We define \(I(\omega_{\widehat{\gamma}})\) as the set of all these points which represent the connected components of \(L(\omega_{\widehat{\gamma}})\).
By construction, for any \((x,R)\neq(y,R^{\prime})\in I(\omega_{\widehat{\gamma}})\), \(B(x,R)\cap B(y,R^{\prime})=\emptyset\) and therefore we have

\[N_{cc}(L(\omega_{\widehat{\gamma}}))=|I(\omega_{\widehat{\gamma}})|\leq\frac{|\overline{\gamma}|\delta^{d}}{\mathcal{V}(B(0,R_{0}))}.\]

**Lemma 8**.: _For any \(\#\in\{0,1\}\) and any contour \(\gamma\) we have \(r_{1}|\overline{\gamma}|\leq|\overline{\gamma}_{\#}|\leq(1-r_{1})|\overline{\gamma}|\) where_

\[r_{1}=\frac{1}{|B(0,2L)\cap\mathbb{Z}^{d}|}\quad\text{and}\quad\overline{\gamma}_{\#}=\{i\in\overline{\gamma},\#_{i}=\#\}.\]

Proof.: Let us fix a contour \(\gamma\). Let us define \(\phi:\overline{\gamma}_{1}\to\overline{\gamma}_{0}\) such that for all \(i\in\overline{\gamma}_{1}\), \(\phi(i)=j\) where \(j\) is the closest site in \(\overline{\gamma}_{0}\) to \(i\) (a lexicographical rule is used in case of equality). Setting \(\alpha_{\gamma}^{1}:=\frac{|\overline{\gamma}_{1}|}{|\overline{\gamma}|}\), we have

\[\frac{\alpha_{\gamma}^{1}}{1-\alpha_{\gamma}^{1}}=\frac{|\overline{\gamma}_{1}|}{|\overline{\gamma}_{0}|}=\frac{\sum_{j\in\overline{\gamma}_{0}}|\phi^{-1}(j)|}{|\overline{\gamma}_{0}|}\leq\max_{j\in\overline{\gamma}_{0}}|\phi^{-1}(j)|.\]

It is easy to see that \(\max_{j}|\phi^{-1}(j)|\leq|B(0,2L)\cap\mathbb{Z}^{d}|-1\) and, using the fact that \(\alpha_{\gamma}^{1}+\alpha_{\gamma}^{0}=1\), we obtain

\[\alpha_{\gamma}^{1}\leq 1-\frac{1}{|B(0,2L)\cap\mathbb{Z}^{d}|}\quad\text{and}\quad\alpha_{\gamma}^{0}\geq\frac{1}{|B(0,2L)\cap\mathbb{Z}^{d}|}.\]

By symmetry of the roles, we can exchange the values of the spins. 

We now have all the lemmas needed to prove Proposition 3.

Proof.: We detail the proof of Proposition 3 in dimension \(d=2\). In higher dimension the proof works in the same manner, but we assume that \(\theta_{2}=0\). Let us set \(L=\left\lceil\frac{2R_{1}}{\delta}\right\rceil\); the constant \(r_{0}\) of Lemma 5 then has the expression

\[r_{0}(\delta)=\frac{1}{\left|B\left(0,5\left\lceil\frac{2R_{1}}{\delta}\right\rceil\right)\cap\mathbb{Z}^{d}\right|}.\]

Let us consider \(\theta_{1}>-\theta_{1}^{*}\). We define \(\theta_{2}^{*}(\theta_{1})\) and \(\theta_{2}^{\delta}(\theta_{1})\) as

\[\theta_{2}^{*}(\theta_{1})=\begin{cases}r_{0}\left(\frac{R_{0}}{2\sqrt{d}}\right)\mathcal{V}(B(0,R_{0}))&\text{when }\theta_{1}\geq 0\\ \sup_{\delta\in\left]0,\frac{R_{0}}{2\sqrt{d}}\right[:\theta_{1}>-\theta_{1}^{\delta}}\left\{r_{0}(\delta)\mathcal{V}(B(0,R_{0}))\left(1+\frac{\theta_{1}}{\theta_{1}^{\delta}}\right)\right\}&\text{when }\theta_{1}<0\end{cases} \tag{13}\]

\[\theta_{2}^{\delta}(\theta_{1})=\begin{cases}r_{0}\left(\frac{R_{0}}{2\sqrt{d}}\right)\mathcal{V}(B(0,R_{0}))&\text{when }\theta_{1}\geq 0\\ r_{0}(\delta)\mathcal{V}(B(0,R_{0}))\left(1+\frac{\theta_{1}}{\theta_{1}^{\delta}}\right)&\text{when }\theta_{1}<0\end{cases}.\]

We know by Lemma 6 that for sufficiently small \(\delta\) we have \(\theta_{1}\geq-\theta_{1}^{\delta}>-\theta_{1}^{*}\) and \(\theta_{2}\leq\theta_{2}^{\delta}(\theta_{1})<\theta_{2}^{*}(\theta_{1})\).
In the case where \(\theta_{1}<0\) we need to consider a threshold t such that \[\frac{\theta_{2}}{\theta_{1}^{\delta}+\theta_{1}}\frac{\delta^{d}}{\mathcal{V} (B(0,R_{0}))}<t<\frac{1}{\theta_{1}}\left(\frac{\theta_{2}\delta^{d}}{\mathcal{ V}(B(0,R_{0}))}-r_{0}(\delta)\delta^{d}\right).\] We define the quantity \(\rho_{0}\) as such \[\rho_{0}=\begin{cases}r_{0}(\delta)\delta^{d}-\frac{\theta_{2}\delta^{d}}{ \mathcal{V}(B(0,R_{0}))}&\text{ when }\theta_{1}\geq 0\\ \min\left\{(\theta_{1}^{\delta}+\theta_{1})t-\frac{\theta_{2}\delta^{d}}{ \mathcal{V}(B(0,R_{0}))},\ r_{0}(\delta)\delta^{d}+\theta_{1}t-\frac{\theta_{ 2}\delta^{d}}{\mathcal{V}(B(0,R_{0}))}\right\}&\text{ when }\theta_{1}<0\end{cases}.\] With the conditions on \(\delta\) and \(t\), it guarantees that \(\rho_{0}>0\). Under our assumptions, for any configuration \(\omega\in\Omega_{f}\) that achieves the contour \(\gamma\) we claim that \[H_{\overline{\gamma}}(\omega)\geq|\overline{\gamma}_{1}|\delta^{d}+\rho_{0}| \overline{\gamma}|. \tag{14}\] First let us prove the claim in the case where \(\theta_{1}<0\). With the condition \(R_{0}>2\delta\sqrt{d}\) we know that we have a non negligible amount of empty tiles that are completely covered by the halo. Therefore using Lemmas 5 and 6 we have the following two lower bounds on the energy of the contour \[H_{\overline{\gamma}}(\omega) \geq\begin{cases}|\overline{\gamma}_{1}|\delta^{d}+r_{0}|\overline {\gamma}|\delta^{d}+\theta_{1}S_{\overline{\gamma}}(L(\omega))-\theta_{2} \chi_{\overline{\gamma}}(L(\omega))\\ |\overline{\gamma}_{1}|\delta^{d}+(\theta_{1}^{\delta}+\theta_{1})S_{\overline {\gamma}}(L(\omega))-\theta_{2}\chi_{\overline{\gamma}}(L(\omega))\end{cases}\] \[\geq\begin{cases}|\overline{\gamma}_{1}|\delta^{d}+r_{0}|\overline {\gamma}|\delta^{d}+\theta_{1}S_{\overline{\gamma}}(L(\omega))-\theta_{2} \frac{|\overline{\gamma}|\delta^{d}}{\mathcal{V}(B(0,R_{0}))}&\text{ by Lemma \ref{lem:energy}.}\\ |\overline{\gamma}_{1}|\delta^{d}+(\theta_{1}^{\delta}+\theta_{1})S_{\overline {\gamma}}(L(\omega))-\theta_{2}\frac{|\overline{\gamma}|\delta^{d}}{\mathcal{ V}(B(0,R_{0}))}&\text{ by Lemma \ref{lem:energy}.}\end{cases}.\] Depending on the value of the surface inside the contour, one lower bound will be more preferable than the other. Since \(\theta_{1}^{\delta}+\theta_{1}>0\) and given the threshold \(t\) that verifies our assumption we have \[H_{\overline{\gamma}}(\omega)\geq\begin{cases}|\overline{\gamma}_{1}|\delta^ {d}+\left(r_{0}\delta^{d}+\theta_{1}t-\frac{\theta_{2}\delta^{d}}{\mathcal{V}( B(0,R_{0}))}\right)|\overline{\gamma}|&\text{ if }S_{\overline{\gamma}}(L(\omega))\leq t|\overline{\gamma}|\\ |\overline{\gamma}_{1}|\delta^{d}+(\theta_{1}^{\delta}+\theta_{1})t|\overline{ \gamma}|-\theta_{2}\frac{|\overline{\gamma}|\delta^{d}}{\mathcal{V}(B(0,R_{0} ))}&\text{ if }S_{\overline{\gamma}}(L(\omega))>t|\overline{\gamma}|\end{cases}.\] In either cases, we have the desired lower boundary on the energy of a contour. Let us turn to the second case where \(\theta_{1}\geq 0\). It is even easier since we can simply drop the contribution of the surface in the energy and therefore we have \[H_{\overline{\gamma}}(\omega)\geq|\overline{\gamma}_{1}|\delta^{d}+\left(r_{0} \delta^{d}-\frac{\theta_{2}\delta^{d}}{\mathcal{V}(B(0,R_{0}))}\right)| \overline{\gamma}|=|\overline{\gamma}_{1}|\delta^{d}+\rho_{0}|\overline{\gamma}|.\] This finishes the proof of the claim. 
We can now use inequality (14) to get the following upper bound \[I_{\gamma} \leq e^{-\beta(\delta^{d}(\overline{\gamma}_{1}|+\rho_{0}| \overline{\gamma}|)}\int\mathbb{1}_{(\forall i\in\overline{\gamma},\sigma( \omega,l)=\sigma_{i})}\Pi_{\overline{\gamma}}^{2}(d\omega)\] \[\leq e^{-\beta(\delta^{d}(\overline{\gamma}_{1}|+\rho_{0}| \overline{\gamma}|)}(1-e^{2\delta^{d}})^{|\overline{\gamma}_{1}|}e^{2\delta^{d }|\overline{\gamma}_{0}|}\] \[\leq g_{0}^{|\overline{\gamma}_{0}|}g_{1}^{|\overline{\gamma}_{1}|} e^{-\beta\rho_{0}|\overline{\gamma}|}.\] By a direct computation we have \[\frac{\partial I_{\gamma}}{\partial z}=-|\overline{\gamma}|\delta^{d}I_{\gamma}+ \frac{1}{z}\int N_{\widehat{\gamma}}(\omega)e^{-\beta H_{\overline{\gamma}}( \omega)}\mathbb{1}_{\{\forall i\in\overline{\gamma},\sigma(\omega,i)=\sigma_{ i}\}}\Pi_{\widehat{\gamma}}^{z}(d\omega).\] Using the bound on \(I_{\gamma}\) we have the desired control on the first term. For the second term we need to control the mean number of points in the contour. By using the lower bound on the energy in a contour (14) and Lemma 8 we have \[\int N_{\widehat{\gamma}}(\omega)e^{-\beta H_{\overline{\gamma}} (\omega)}\mathbb{1}_{\{\forall i\in\overline{\gamma},\sigma(\omega,i)=\sigma_{ i}\}}\Pi_{\widehat{\gamma}}^{z}(d\omega) \leq e^{-\beta(\rho_{0}|\overline{\gamma}|+|\overline{\gamma}_{1}| \delta^{d})}\int N_{\widehat{\gamma}}(\omega)\mathbb{1}_{\{\forall i\in \overline{\gamma},\sigma(\omega,i)=\sigma_{i}\}}\Pi_{\widehat{\gamma}}^{z}(d\omega)\] \[\leq e^{-\beta(\rho_{0}|\overline{\gamma}|+|\overline{\gamma}_{1 }|\delta^{d})}\sum_{i\in\overline{\gamma}_{1}}\int N_{T_{i}}(\omega)\mathbb{1 }_{\{\forall i\in\overline{\gamma},\sigma(\omega,i)=\sigma_{i}\}}\Pi_{ \widehat{\gamma}}^{z}(d\omega)\] \[\leq e^{-\beta(\rho_{0}|\overline{\gamma}|+|\overline{\gamma}_{1 }|\delta^{d})}g_{0}^{\overline{[\gamma}_{0}|}|\overline{\gamma}_{1}|z\delta^{d }(1-e^{-z\delta^{d}})^{|\overline{\gamma}_{1}|-1}\] \[\leq\frac{(1-r_{1})z\delta^{d}}{1-e^{-z\delta^{d}}}|\overline{ \gamma}|g_{0}^{\overline{[\gamma}_{0}|}g_{1}^{\overline{[\gamma}_{1}|}e^{- \beta\rho_{0}|\overline{\gamma}|}.\] By combining those inequalities we get \[\left|\frac{\partial I_{\gamma}}{\partial z}\right|\leq\left(1+\frac{1-r_{1}}{ 1-e^{-z\delta^{d}}}\right)|\overline{\gamma}|\delta^{d}g_{0}^{\overline{[ \gamma}_{0}|}g_{1}^{\overline{[\gamma}_{1}|}e^{-\beta\rho_{0}|\overline{ \gamma}|}.\] ### Truncated weights and pressure In order to have \(\tau\)-stable contours, it is needed that the ratio of partition functions of the interior has to be small. This ratio is more likely to be large when the volume of the interior is large. Therefore depending on the parameters \(z\), \(\beta\) the weights of some contours might be unstable because the volume of the interior is too large. In order to deal with those instability we will truncate the weights. We follow mainly the ideas and the presentation given in Chapter 7 of [7]. We introduce an arbitrary cut-off function \(\kappa\,:\,\mathbb{R}\to[0,1]\) that satisfy the following properties : \(\kappa(s)=1\) if \(s\leq\frac{\rho_{0}}{8}\), \(\kappa(s)=0\) if \(s\geq\frac{\rho_{0}}{4}\) and \(\kappa\) is \(C^{1}\). Therefore such cut-off function \(\kappa\) satisfies \(\|\kappa^{\prime}\|=\sup_{\mathbb{R}}|\kappa^{\prime}(s)|<+\infty\). We construct step by step the truncated weights and pressure according to the volume of the interior. 
Contours with no interior does not have a ratio of partition functions introduced in the expression of its weight, thus those contours are stable. We can define the truncated quantities associated to \(n=0\) starting by the truncated pressure \[\widehat{\psi}_{0}^{\#}\,:=\frac{\ln(g_{\#})}{\beta\delta^{d}}.\] We define the truncated weights for contour \(\gamma\) of class \(0\) as \[\widehat{\upsilon}_{\gamma}^{\#}=w_{\gamma}^{\#}=g_{\#}^{-|\overline{\gamma} |}I_{\gamma}.\] Now we suppose that the truncated weights are well defined for contours \(\gamma\) of class \(k\leq n\). We define \(\widehat{\Phi}_{n}^{\#}\) with the following polymer development \[\widehat{\Phi}_{n}^{\#}(\Lambda)\,:=\sum_{\Gamma\in C_{n}^{\#}(\Lambda)}\prod _{\gamma\in\mathbb{I}}\widehat{\upsilon}_{\gamma}^{\#}.\] We can define the truncated partition function as \[\hat{Z}_{n}^{\#}(\Lambda)\,:=g_{\#}^{|\Lambda|}\hat{\Phi}_{n}^{\#}(\Lambda).\] With \(\Lambda_{k}=[-k/2,k/2]^{d}\), the truncated pressure at rank n is given by \[\widehat{\psi}_{n}^{\#} :=\lim_{k\to+\infty}\frac{1}{\beta\delta^{d}\left|\Lambda_{k} \right|}\ln(\hat{Z}_{n}^{\#}(\Lambda_{k}))\] \[=\widehat{\psi}_{0}^{\#}+\lim_{k\to+\infty}\frac{1}{\beta\delta^ {d}\left|\Lambda_{k}\right|}\ln(\hat{\Phi}_{n}^{\#}(\Lambda_{k})).\] This limit exists as it is shown in Section 7.4.3 of [7]. We can also note that \(\widehat{\psi}_{n}^{\#}\in[\widehat{\psi}_{0}^{\#},\psi]\) and that the sequence of truncated pressure \((\widehat{\psi}_{n}^{\#})_{n\in\mathrm{N}}\) is increasing. **Definition 11**.: _Given the truncated weight of contours \(\gamma\) of class \(k\) smaller than \(n\) (and so the truncated pressure \(\widehat{\psi}_{n}^{\#}\)), the truncated weight of a contour \(\gamma\) of class \(n+1\) is defined by_ \[\widehat{\psi}_{\gamma}^{\#}=g_{\#}^{-[\overline{\gamma}]}I_{\gamma}\; \kappa\left((\widehat{\psi}_{n}^{\#^{*}}-\widehat{\psi}_{n}^{\#})\delta^{d} \left|\right.\mathrm{Int}_{\#^{*}}\;\gamma\left|\right.^{\frac{1}{d}}\right) \frac{Z_{\mathrm{Int}_{\#^{*}}\gamma}^{\#}}{Z_{\mathrm{Int}_{\#^{*}}\gamma}^{ \#}}.\] Intuitively, the reason we do this cut-off is to remove contours that are unstable. Since the cut-off function verify \(0\leq\kappa\leq 1\) we have \(\widehat{\omega}_{\gamma}^{\#}\leq\mu_{\gamma}^{\#}\). Other quantities that are of interest for what will come later are, \(\widehat{\psi}_{n}\,:=\max(\widehat{\psi}_{n}^{0},\widehat{\psi}_{n}^{1})\) and \(a_{n}^{\#}\,:=\widehat{\psi}_{n}-\widehat{\psi}_{n}^{\#}\). By definition, we have \(a_{n}^{\#}\geq 0\) and for all contour \(\gamma\) of class \(n+1\) we have the following implication \[a_{n}^{\#}\delta^{d}(n+1)^{\frac{1}{d}}\leq\frac{\rho_{0}}{8}\;\;\Rightarrow \;\;\widehat{\omega}_{\gamma}^{\#}=\omega_{\gamma}^{\#}.\] Before we go further we will proceed into a re-parametrisation of the model for fixed \(\theta_{1}\) and \(\theta_{2}\) that verify the assumption of Proposition 3. We change the parameters from \((\beta,z)\) to \((\beta,s)\) where \(s=z/\beta\). We set \(a=\min\{\frac{2}{1-\tau_{1}},e^{-c\beta}\}\) where \(r_{1}\) is given in Lemma 8 and \(c>0\). For all \(\beta>0\) we set \[s_{\beta}=\frac{\ln(1+e^{\beta\delta^{d}})}{\beta\delta^{d}}\] such that we have \(g_{0}=g_{1}\). Furthermore, we define the following open interval \[U_{\beta}=\left(\frac{\ln(1+e^{\beta\delta^{d}-a})}{\beta\delta^{d}},\frac{ \ln(1+e^{\beta\delta^{d}+a})}{\beta\delta^{d}}\right)\] such that for all \(s\in U_{\beta}\) we have \(e^{-a}\leq\frac{g_{0}}{g_{1}}\leq e^{a}\). 
So, according to Proposition 3, for any \(\beta>0\) and \(s\in U_{\beta}\) we have \[w_{\gamma}^{\#}\leq e^{-(\beta\rho_{0}-2)\left|\overline{\gamma}\right|}\frac {Z_{\mathrm{Int}_{\#^{*}}\gamma}^{\#}}{Z_{\mathrm{Int}_{\#^{*}}\gamma}^{\#}}.\] We can already see that for contours of class \(0\) the weights are \(\tau\)-stable as long as \(\beta\rho_{0}>2\). In the following proposition we will show that truncated weights are \(\tau\)-stable and that we have a good bound on the derivative of the truncated weights that are useful in the study of the regularity of the pressure. **Proposition 4**.: _Let \(\tau:=\frac{1}{2}\beta\rho_{0}-8\). There exists \(D\geq 1\) and \(0<\beta_{0}<\infty\) such that all \(\beta>\beta_{0}\) there exist \(C_{1}>0\) and \(C_{2}>0\) where the following statements hold for any \(\#\) and \(n\geq 0\)._ 1. _(Bounds on the truncated weights) For all_ \(k\leq n\)_, the truncated weights of each contours_ \(\gamma\) _of class_ \(k\) _is_ \(\tau\)_-stable uniformly on_ \(U_{\beta}\) _:_ \[\widehat{\omega}_{\gamma}^{\#}\leq e^{-\tau|\widehat{\tau}|}\] (15) _and_ \[a_{n}^{\#}\delta^{d}\left|\operatorname{Int}\gamma\right|^{\frac{1}{d}}\leq \frac{\rho_{0}}{16}\ \Rightarrow\ \widehat{\omega}_{\gamma}^{\#}=\omega_{\gamma}^{\#}.\] (16) _Moreover,_ \(s\mapsto\widehat{\omega}_{\gamma}^{\#}\) _is_ \(C^{1}\) _and uniformly on_ \(U_{\beta}\) _it verifies_ \[\left|\frac{\partial\widehat{\omega}_{\gamma}^{\#}}{\partial s}\right|\leq D| \overline{\gamma}|^{\frac{d}{d-1}}e^{-\tau|\widehat{\tau}|}.\] (17) 2. _(Bounds on the partition functions) Let us assume that_ \(\Lambda\subset\mathbb{Z}^{d}\) _and_ \(|\Lambda|\leq n+1\)_. Then uniformly on_ \(U_{\beta}\) _we have_ \[Z_{\Lambda}^{\#} \leq e^{\theta\delta^{d}\widehat{\phi}_{n}\left|\Lambda\right|+2 \left|\partial_{ext}\Lambda\right|},\] (18) \[\left|\frac{\partial Z_{\Lambda}^{\#}}{\partial s}\right| \leq\left(C_{1}\beta\delta^{d}\left|\Lambda\right|+C_{2}|\partial _{ext}\Lambda|\right)\,e^{\theta\delta^{d}\widehat{\phi}_{n}\left|\Lambda \right|+2\left|\partial_{ext}\Lambda\right|}.\] (19) This proposition is the key tool for applying the PSZ theory and is very similar to Proposition 7.34 in [7] with some adaptations to the present setting. Due to these small adaptations and also to be self-contained, the proof of the proposition is given in the Annex B. ### Proof of Proposition 1 According to Proposition 4 we know that the weights are \(\tau\)-stable with \(\tau\) as large as we want provided that \(\beta\) is large enough. So for \(\beta\) large enough, for any \(\Lambda\subset\mathbb{Z}^{d}\) we can define \[\widehat{\Phi}^{\#}(\Lambda)\,:=\sum_{\Gamma\in C^{\#}(\Lambda)}\prod_{\gamma \in\Gamma}\widehat{\omega}_{\gamma}^{\#}\] and based on standard cluster expansion results, recalled in Annex A, it is also possible to have a polymer development of \(\log(\widehat{\Phi}^{\#}(\Lambda))\). In particular by Theorem 11 we have \[\widehat{\Phi}^{\#}(\Lambda)=e^{\theta\delta^{d}f^{\#}|\Lambda|+\Delta^{\#} (\Lambda)}\] where \(f^{\#}\) and \(\Delta_{\Lambda}^{\#}\) are \(C^{1}\) in \(U_{\beta}\). 
Furthermore, according to Theorem 11 we also have \[|f^{\#}|\leq\eta(\tau,l_{0}), |\Delta_{\Lambda}^{\#}|\leq\eta(\tau,l_{0})|\partial_{ext}\Lambda|\] \[\left|\frac{\partial f^{\#}}{\partial s}\right|\leq D\eta(\tau,l_ {0}), \left|\frac{\partial\Delta_{\Lambda}^{\#}}{\partial s}\right|\leq D \eta(\tau,l_{0})|\partial_{ext}\Lambda|,\] where \(D>0\) comes from Proposition 4 and the constant \(\eta(\tau,l_{0})\), used intensively in Annex A, is equal to \(2e^{-\tau l_{0}/3}\), where \(l_{0}\) is the smallest size of a non-empty contour (it is a positive integer). Therefore the truncated partition function with boundary \(\#\) is given by \[\widehat{Z}_{\Lambda}^{\#}\,:=\,g_{\#}^{|\Lambda|}\widehat{\Phi}^{\#}(\Lambda )=e^{\beta\delta^{d}(\widehat{\phi}_{0}^{\#}+f^{\#})|\Lambda|+\Delta^{\#}( \Lambda)} \tag{20}\] and the truncated pressure associated to the boundary \(\#\) is \[\widehat{\psi}^{\#}\,:=\widehat{\psi}_{0}^{\#}+f^{\#}.\] We can also obtain \(\widehat{\psi}^{\#}\) as the limit of the sequence of truncated pressure \((\widehat{\psi}_{n}^{\#})_{n\in\mathbb{N}}\) when \(n\) is going to infinity. So we define the truncated pressure as \(\widehat{\psi}\,:=\max\{\widehat{\psi}^{(0)},\widehat{\psi}^{(1)}\}\) and we have that \(a^{\#}\,:=\widehat{\psi}-\widehat{\psi}^{\#}=\lim_{n\to\infty}a_{n}^{\#}\). Similarly to the statement (16) in Proposition 4, for any contour \(\gamma\) we have the following implication \[a^{\#}\delta^{d}\,|\,\operatorname{Int}\gamma|^{\frac{1}{d}}\leq\frac{\rho_{0} }{16}\,\,\Rightarrow\,\,\widehat{\omega}_{\gamma}^{\#}=\omega_{\gamma}^{\#}.\] Therefore when \(a^{\#}=0\) all the truncated weights are equal to the actual weight of the model and thus for all \(\Lambda\subset\mathbb{Z}^{d}\) we have \(\widehat{Z}_{\Lambda}^{\#}=Z_{\Lambda}^{\#}\) and with Lemma 4 we have \[\psi=\widehat{\psi}\,=\max\{\widehat{\psi}^{(1)},\widehat{\psi}^{(0)}\}.\] Therefore we can extract properties of the pressure from the study of the truncated pressures. For \(s\in U_{\beta}\) let us define the gap \(G(s)\) between the two truncated pressures \[G(s)\,:=\widehat{\psi}^{(1)}-\widehat{\psi}^{(0)}=(s-1)+\frac{\ln(1-e^{-s \beta\delta^{d}})}{\beta\delta^{d}}+f^{(1)}-f^{(0)}.\] Previously, we have set \(a=\min\{\frac{2}{1-r_{1}},e^{-c\beta}\}\) with the condition that \(c>0\). We are going to be more precise and take \(c<\frac{1}{6}\rho_{0}l_{0}\). Hence for sufficiently large \(\beta\) we have \[G(s_{\beta}^{-})=-\frac{a}{\delta^{d}}+f^{(1)}-f^{(0)}\leq-\frac {a}{\delta^{d}}+2\eta(\tau,l_{0})<0\] \[G(s_{\beta}^{+})=\frac{a}{\delta^{d}}+f^{(1)}-f^{(0)}\geq\frac{a }{\delta^{d}}-2\eta(\tau,l_{0})>0\] where \(s_{\beta}^{-}=\frac{\ln(1+e^{\beta\delta^{d}-a})}{\beta\delta^{d}}\) and \(s_{\beta}^{+}=\frac{\ln(1+e^{\beta\delta^{d}+a})}{\beta\delta^{d}}\). Furthermore, for \(s\in U_{\beta}\) and for sufficiently large \(\beta\) we have \[\frac{\partial G}{\partial s}(s)=\frac{1}{e^{s\beta\delta^{d}}-1}+\frac{ \partial f^{(1)}}{\partial s}-\frac{\partial f^{(0)}}{\partial s}>1-2D\eta( \tau,l_{0})>0.\] This ensures the existence of an unique \(s_{\beta}^{c}\in U_{\beta}\) such that \[\widehat{\psi}=\begin{cases}\widehat{\psi}^{(0)}&\text{ when }s\in[s_{\beta}^{-},s_{ \beta}^{c}]\\ \widehat{\psi}^{(1)}&\text{ when }s\in[s_{\beta}^{c},s_{\beta}^{+}]\end{cases}\] and also for all \(s\in U_{\beta}\) (and thus also for \(s_{\beta}^{c}\)) we have \[\frac{\partial\widehat{\psi}^{(1)}}{\partial s}(s)>\frac{\partial\widehat{\psi }^{(0)}}{\partial s}(s). 
\tag{21}\] Furthermore, we can observe that \[z_{\beta}^{-}-\beta\,:=\beta s_{\beta}^{-}-\beta=-\frac{a}{ \delta^{d}}+\frac{1}{\delta^{d}}\ln(1+e^{-\beta\delta^{d}+a})=-\frac{a}{ \delta^{d}}+o(a(\beta))\] \[z_{\beta}^{+}-\beta\,:=\beta s_{\beta}^{+}-\beta=\frac{a}{\delta ^{d}}+\frac{1}{\delta^{d}}\ln(1+e^{-\beta\delta^{d}-a})=\frac{a}{\delta^{d}} +o(a(\beta))\] Therefore we have that \(|z_{\beta}^{c}-\beta|=O(e^{-c\beta})\) when \(\beta\) tends to infinity where \(z_{\beta}^{c}:=\beta s_{\beta}^{c}\) and \(0<c<\kappa_{0}l_{0}/6\). This last part proves the statement in Theorem 1 about the exponential decay of the difference \(|\beta-z_{\beta}^{c}|\). For all \(n\geq 1\), we define the empirical field \(\overline{P}_{\Lambda_{n}}^{\#}\) as the probability measure on \(\Omega\) such that for any positive test function \(f\) \[\int f(\omega)\overline{P}_{\Lambda_{n}}^{\#}(d\omega)=\frac{1}{|\widehat{ \Lambda}_{n}|}\int_{\widehat{\Lambda}_{n}}\int f(\tau_{u}(\omega))\widehat{P} _{\Lambda_{n}}^{\#}(d\omega)du\quad\text{where }\widehat{P}_{\Lambda_{n}}^{\#}= \bigotimes_{i\in\mathbb{Z}^{d}}P_{\tau_{2nd}(\Lambda_{n})}^{\#}.\] The empirical field can be seen as a stationarization of \(P_{\Lambda_{n}}^{\#}\) and therefore any accumulation point of the sequence \((\overline{P}_{\Lambda_{n}}^{\#})_{n\in\mathbb{N}}\) is necessarily stationary. Furthermore, we know that the Quermass interaction is stable and has a finite range interaction. Therefore there exists a sub-sequence of the empirical field \((\overline{P}_{\Lambda_{n}}^{\#})_{n\in\mathbb{N}}\) that converges to \(P^{\#}\) for the local convergence topology defined as the smallest topology on the space of probability measures on \(\Omega\) such that for all tame functions \(f\) (i.e. \(f\,:\Omega\to\mathbb{R}\) such that \(\exists\Delta\subset\mathbb{R}^{d}\) bounded, \(\exists A\geq 0,\forall\omega\in\Omega,f(\omega)=f(\omega_{\Delta})\) and \(|f|\leq A(1+N_{\Delta})\)) the map \(P\to\int f\,d\,P\) is continuous. Moreover this limit is a Gibbs measure as it satisfies the DLR equations. The proof of these statements is similar to the proof of Theorem 1 in [3]. For all \(\beta\geq\beta_{0}\) and for \(z=z_{\beta}^{c}\) we have by a direct computation \[\frac{\partial}{\partial z}\log\widehat{Z}_{\Lambda_{n}}^{\#}=\frac{\partial} {\partial z}\log Z_{\Lambda_{n}}^{\#}=-\delta^{d}|\Lambda_{n}|+\frac{1}{z}E_{ P_{\Lambda_{n}}^{\#}}(N_{\widehat{\Lambda}_{n}}). \tag{22}\] On the other hand we know using (20) that \[\frac{\partial}{\partial z}\log\widehat{Z}_{\Lambda_{n}}^{\#}=\beta|\Lambda_ {n}|\delta^{d}\frac{\partial\widehat{\Psi}^{\#}}{\partial z}+\frac{\partial \Delta_{\Lambda_{n}}^{\#}}{\partial z}. 
\tag{23}\] By combining (22) and (23) we get the following relationship \[\frac{E_{P_{\Lambda_{n}}^{\#}}(N_{\widehat{\Lambda}_{n}})}{|\Lambda_{n}| \delta^{d}}=z+z\beta\frac{\partial\psi^{\#}}{\partial z}(z_{\beta}^{c})+\frac {z}{|\Lambda_{n}|\delta^{d}}\frac{\partial\Delta_{\Lambda_{n}}^{\#}}{\partial z}.\] Furthermore, using Theorem 11 we know that \[\left|\frac{\partial\Delta_{\Lambda_{n}}^{\#}}{\partial z}\right|\leq D\eta( \tau,l_{0})|\partial_{ext}\Lambda_{n}|\,\,\Rightarrow\,\,\frac{1}{|\Lambda_{ n}|}\frac{\partial\Delta_{\Lambda_{n}}^{\#}}{\partial z}\underset{n\to\infty}{ \rightarrow}0.\] The empirical field is stationary and using the local convergence we have \[\frac{E_{\overline{P}_{\Lambda_{n}}^{\#}}(N_{\widehat{\Lambda}_{n}})}{| \Lambda_{n}|\delta^{d}}=E_{\overline{P}_{\Lambda_{n}}^{\#}}(N_{\{0,1\}^{d}}) \underset{n\to\infty}{\rightarrow}\rho(P^{\#}).\] On the other hand, we have by construction of the empirical field that \[E_{\overline{P}_{\Lambda_{n}}^{\#}}(N_{\widehat{\Lambda}_{n}})=E_{P_{\Lambda_ {n}}^{\#}}(N_{\widehat{\Lambda}_{n}}).\] In summary, when we take limit when \(n\) goes to infinity we have that \[\rho(P^{\#})=z+z\beta\frac{\partial\psi^{\#}}{\partial z}(z_{\beta}^{c})\] and therefore using (21) we have that \(\rho(P^{(1)})>\rho(P^{(0)})\). ## Acknowledgement This work was supported in part by the Labex CEMPI (ANR-11-LABX-0007-01), the ANR projects PPPP (ANR-16-CE40-0016) and RANDOM (ANR-19-CE24-0014) and by the CNRS GdR 3477 GeoSto. ## 5 Annex A : Cluster Expansion We consider a collection of contours \(C\) and for all \(\Lambda\subset\mathbb{Z}^{d}\), \(C(\Lambda)\) a sub-collection of contours \(\gamma\) such that \(\overline{\gamma}\subset\Lambda\). For each contour \(\gamma\) in such collection there are weights \(w_{\gamma}\) invariant by translation. We set \(l_{0}=\min\{|\overline{\gamma}|,\gamma\in C\}\) and \(\eta(\tau,l_{0})=2\exp(-\tau^{l_{0}}/3)\). A set of contours \(\Gamma=\{\gamma_{1},\ldots,\gamma_{n}\}\in C\), is said to be geomtrically compatible if for all \(i,j\in\{1,\ldots,n\},i\neq j\) we have \(d_{\infty}(\gamma_{i},\gamma_{j})\geq 1\). We define the polymer development associated to those weights as, for all \(\Lambda\subset\mathbb{Z}^{d}\) \[\Phi(\Lambda)=\sum_{\begin{subarray}{c}\Gamma\in C(\Lambda)\\ \text{geometrically compatible}\end{subarray}}\prod_{\gamma\in\Gamma}w_{ \gamma}.\] A collection \(C=\{\gamma_{1},\cdots,\gamma_{n}\}\) is said to be decomposable if the support \(\overline{C}=\bigcup_{\gamma\in C}\overline{\gamma}\) is not simply connected. A cluster, denoted by \(X\), is a non-decomposable finite multiset of contours such that a same contour can appear multiple times and we define \(\overline{X}\) := \(\bigcup_{\gamma\in X}\overline{\gamma}\). The cluster expansion for \(\ln\Phi(\Lambda)\), if it converges, is given by \[\ln\Phi(\Lambda)=\sum_{X:\,\overline{X}\subset\Lambda}\Psi(X)\] where \(\Psi(X):=\alpha(X)\prod_{\gamma\in X}w_{\gamma}\). 
We have this combinatorial term \(\alpha\) whose expression is given by \[\alpha(X)=\left\{\prod_{\gamma\in C(\Lambda)}\frac{1}{n_{X}(\gamma)!}\right\} \left\{\sum_{\begin{subarray}{c}G\in G_{n}\\ \text{connected}\end{subarray}}\prod_{\begin{subarray}{c}\{i,j\}\in G \end{subarray}}\zeta(\gamma_{i},\gamma_{j})\right\}\] where \(n_{X}(\gamma)\) is the number of times a contour \(\gamma\) appears in the cluster \(X\), \(G_{n}=(V_{n},E_{n})\) is an undirected complete graph on \(V_{n}=\{1,\cdots,n\}\) and \(\zeta\) is defined as \[\zeta(\gamma,\gamma^{\prime})=\begin{cases}0&\text{if $\gamma,\gamma^{\prime}$ are geometrically compatible}\\ -1&\text{otherwise}\end{cases}.\] Under the assumption that the weights are \(\tau\)-stable, a sufficient condition for the convergence of the cluster expansion is \[\sum_{\begin{subarray}{c}\tau\in C\\ 0\in\overline{\gamma}\end{subarray}}e^{-\tau|\overline{\gamma}|}e^{3^{d}| \overline{\gamma}|}\leq 1.\] In practice we will take a larger value for \(\tau\) such that the stronger assumption of Lemma 9 is verified. The following lemmas correspond to Lemma 7.30 and Lemma 7.31 in [7]. **Lemma 9**.: _There exists \(\tau_{0}>0\) such that when \(\tau>\tau_{0}\)_ \[\sum_{\gamma\in C:\,0\in\overline{\gamma}}|\overline{\gamma}|^{d/d-1}e^{-( \tau/2-1)|\overline{\gamma}|}e^{3^{d}|\overline{\gamma}|}\leq\eta(\tau,l_{0}) \leq 1. \tag{24}\] **Lemma 10**.: _Let us assume that the weights are \(\tau\)-stable for \(\tau>\tau_{0}\). Then for \(L\geq l_{0}\)_ \[\sum_{\begin{subarray}{c}X\colon 0\in\overline{X}\\ |\overline{X}|\geq L\end{subarray}}|\Psi(X)|\leq e^{-\frac{\tau L}{2}}. \tag{25}\] Finally we have the following theorem which plays a crucial role in the proofs of Proposition 1 and 4. **Theorem 11**.: _Assume that, for all \(\gamma\in\mathcal{C}\), the weight \(w_{\gamma}\) is \(C^{1}\) in a parameter \(s\in(a,b)\) and that uniformly on \((a,b)\),_ \[w_{\gamma}\leq e^{-\tau|\overline{\gamma}|},\qquad\left|\frac{dw_{\gamma}}{ds }\right|\leq D|\overline{\gamma}|^{d/d-1}e^{-\tau|\overline{\gamma}|}, \tag{26}\] _where \(D\geq 1\) is a constant. Then there exists \(\tau_{1}=\tau_{1}(D,d)<\infty\) such that the following holds. If \(\tau>\tau_{1}\), the pressure \(g\) is given by the following absolutely convergent series,_ \[g=\sum_{X\colon 0\in\overline{X}}\frac{1}{|\overline{X}|}\Psi(X) \tag{27}\] _where the sum is over clusters \(X\) made of contours \(\gamma\in\mathcal{C}\) and \(\overline{X}=\bigcup_{\gamma\in X}\overline{\gamma}\). Moreover,_ \[|g|\leq\eta(\tau,l_{0})\leq 1\] _and for all \(\Lambda\subset\mathbb{Z}^{d}\) finite, \(g\) provides the volume contribution to \(\log\Phi(\Lambda)\), in the sense that_ \[\Phi(\Lambda)=\exp(g|\Lambda|+\Delta_{\Lambda}) \tag{28}\] _where \(\Delta_{\Lambda}\) is a boundary term :_ \[|\Delta_{\Lambda}|\leq\eta(\tau,l_{0})|\partial_{ext}\Lambda|. \tag{29}\] _Finally, \(g\) and \(\Delta_{\Lambda}\) are also \(C^{1}\) in \(s\in(a,b)\); its derivative equals_ \[\frac{dg}{ds}=\sum_{X\colon 0\in\overline{X}}\frac{1}{|\overline{X}|}\frac{d \Psi(X)}{ds} \tag{30}\] _and_ \[\left|\frac{dg}{ds}\right|\leq D\eta(\tau,l_{0}),\qquad\left|\frac{d\Delta_{ \Lambda}}{ds}\right|\leq D\eta(\tau,l_{0})|\partial_{ext}\Lambda|. 
\tag{31}\] This theorem is similar to Theorem 7.29 in [7], the only difference being is that the following statement is not included \[\left|\frac{d\Delta_{\Lambda}}{ds}\right|\leq D\eta(\tau,l_{0})|\partial_{ext }\Lambda|.\] Following the computations in [7] we find \[\frac{d\Delta_{\Lambda}}{ds}=\sum_{i\in\Lambda}\sum_{X\colon i\in\overline{X} \not\in\Lambda}\frac{1}{|\overline{X}|}\frac{d\Psi}{ds}(X).\] Whenever \(i\in\overline{X}\not\subset\Lambda\) we know that \(\overline{X}\cap\partial_{ext}\Lambda\neq\emptyset\) and since the weights are invariant by translation we have \[\left|\frac{d\Delta_{\Lambda}}{ds}\right|\leq|\partial_{ext}\Lambda|\max_{j\in \partial_{ext}\Lambda}\sum_{X\colon j\in\overline{X}}\left|\frac{d\Psi}{ds}( X)\right|=|\partial_{ext}\Lambda|\sum_{X\colon 0\in\overline{X}}\left|\frac{d\Psi}{ds}(X)\right|.\] We can show that for any cluster \(X\) we have \[\left|\frac{d\Psi}{ds}(X)\right|\leq|\overline{\Psi}(X)| \tag{32}\] where \[\overline{\Psi}(X)=\alpha(X)\prod_{\gamma\in X}\overline{w}_{\gamma}\quad\text{ and}\quad\overline{w}_{\gamma}=D|\overline{\gamma}|^{d/d-1}e^{-(\tau-1)|\overline{ \gamma}|}.\] With classical results on cluster expansion we can show that \[\sum_{X\,:\,0\in\overline{X}}|\overline{\Psi}(X)|\leq\sum_{\gamma\in C\,:\,0 \in\overline{\gamma}}\overline{w}_{\gamma}e^{3d}|\overline{\gamma}|.\] Therefore with Lemma 9 we have the desired control on the derivative of the boundary term. ## 6 Annex B : Proof of Proposition 4 Before starting the proof by induction, we need to fix the some constants and in particular the quantity \(\beta\) which has to be sufficiently large. Recall that \(\eta(\tau,l_{0})\,:=\,2e^{-\frac{\tau l_{0}}{3}}\), where \(l_{0}\) is the minimum size of non empty contour. We set \(D\,:=\,(2+2K)\delta^{d}\beta+4\beta+4\delta^{d}\|\kappa^{\prime}\|+C_{1}\beta \delta^{d}+C_{2}\) where \(C_{1}(\beta)\,:=\,e+\sup_{s\in U_{\beta}}|\widehat{\psi}_{0}^{\#}/s|\) and \(C_{2}(\beta)\,:=\,\sup_{s\in U_{\beta}}|/s|\) and we choose \(\beta\geq 1\) sufficiently large such that \[\tau>\tau_{0}(d)\text{ where }\tau_{0}\text{ is defined as in Lemma \ref{lem:cond Now we start the induction. Let us assume that the statements have been proved up to \(n\). We have to prove that they hold also for \(n+1\). Since all the contours appearing in \(\widehat{\Phi}_{n}^{\#}\) are at most of class \(n\), by the induction hypothesis, all these weights are \(\tau\)-stable and satisfied the bound (17) on the derivatives. Therefore using Theorem 11 we get \[\left|\frac{\partial f_{n}^{\#}}{\partial s}\right|\leq\frac{D\eta(\tau,l_{0} )}{\rho\delta^{d}}\leq 1 \tag{38}\] where \(f_{n}^{\#}\) appears in \(\widehat{\psi}_{n}^{\#}=\widehat{\psi}_{0}^{\#}+f_{n}^{\#}\) and can also be defined as \[f_{n}^{\#}:=\lim_{k\to\infty}\frac{1}{\rho\delta^{d}\left|\Lambda_{k}\right|} \ln\Phi_{n}^{\#}(\Lambda_{k}).\] Before proceeding properly into the induction we have the following lemma. **Lemma 12**.: _For \(n\geq 1\), if all contours of class at most \(n\) the truncated weights are \(\tau\)-stable then for any \(k\leq n\)_ \[\left|\widehat{\psi}_{n}^{\#}-\widehat{\psi}_{k}^{\#}\right|\leq\frac{1}{ \rho\delta^{d}}e^{-\frac{\tau}{2}k^{d-1/d}}\quad\text{and}\quad\left|\widehat {\psi}_{n}-\widehat{\psi}_{k}\right|\leq\frac{1}{\rho\delta^{d}}e^{-\frac{ \tau}{2}k^{d-1/d}}. \tag{39}\] Proof.: We have \(\left|\widehat{\psi}_{n}^{\#}-\widehat{\psi}_{k}^{\#}\right|=\left|f_{n}^{\# }-f_{k}^{\#}\right|\). 
Since all contour of class at most \(n\) are \(\tau\)-stable we know that the cluster expansion for \(f_{k}^{\#}\) converges for \(k\leq n\). We can notice that the clusters \(X\) that contributes to \(f_{n}^{\#}-f_{k}^{\#}\) must have at least one contour \(\gamma\) of class greater than \(k\) and thus by the isoperimetric inequality \(\left|\overline{\gamma}\right|\geq k^{d-1/d}\) which in turn implies that \(\left|\overline{X}\right|\geq k^{d-1/d}\) and thus by Lemma 10 \[\left|\widehat{\psi}_{n}^{\#}-\widehat{\psi}_{k}^{\#}\right|\leq\frac{1}{ \rho\delta^{d}}\sum_{\begin{subarray}{c}X:0\in\overline{X}\\ \left|\overline{X}\right|\geq k^{d-1/d}\end{subarray}}\left|\widehat{\psi}^{ \#}(X)\right|\leq\frac{1}{\rho\delta^{d}}e^{-\frac{\tau}{2}k^{d-1/d}}. \tag{40}\] Now we want to have the same kind of estimate for \(\left|\widehat{\psi}_{n}-\widehat{\psi}_{k}\right|\). If \(\widehat{\psi}_{n}=\widehat{\psi}_{n}^{\#}\) and \(\widehat{\psi}_{k}=\widehat{\psi}_{k}^{\#}\), we get the same estimate since the difference is the same. In the case where \(\widehat{\psi}_{n}=\widehat{\psi}_{n}^{\#}\) and \(\widehat{\psi}_{k}=\widehat{\psi}_{k}^{\#^{*}}\), where \(\#\neq\#^{*}\), we have on one side \[\widehat{\psi}_{n}-\widehat{\psi}_{k}=\widehat{\psi}_{n}^{\#}-\widehat{\psi} _{k}^{\#}+\widehat{\psi}_{k}^{\#}-\widehat{\psi}_{k}^{\#^{*}}\leq\widehat{ \psi}_{n}^{\#}-\widehat{\psi}_{k}^{\#}\] since by definition \(\widehat{\psi}_{k}^{\#}-\widehat{\psi}_{k}^{\#^{*}}=\widehat{\psi}_{k}^{\#} -\widehat{\psi}_{k}\leq 0\). On the other side, we have \[\widehat{\psi}_{n}-\widehat{\psi}_{k}=\widehat{\psi}_{n}^{\#}-\widehat{\psi} _{n}^{\#^{*}}+\widehat{\psi}_{n}^{\#^{*}}-\widehat{\psi}_{k}^{\#^{*}}\geq 0\] since \((\widehat{\psi}_{i}^{\#^{*}})_{i\in\Sigma}\) is increasing and \(\widehat{\psi}_{n}^{\#}-\widehat{\psi}_{n}^{\#^{*}}\geq 0\) by definition. In any case, we obtain \[\left|\widehat{\psi}_{n}-\widehat{\psi}_{k}\right|\leq\frac{1}{\rho\delta^{d} }e^{-\frac{\tau}{2}k^{d-1/d}}.\] We move on the proof that (18) holds if \(\left|\Lambda\right|=n+1\). Note that any contour that appears inside of \(\Lambda\) is at most of class \(k\leq n\). We say that a contour \(\gamma\) is stable if \[a_{n}^{\#}\delta^{d}\left|\operatorname{Int}\gamma\right|^{\nicefrac{{1}}{{d} }}\leq\frac{\rho_{0}}{16}.\] This property is hereditary, in the sense that for all contours \(\gamma^{\prime}\) that can appear inside \(\operatorname{Int}\gamma\) are stable as well. Since, we know that all contours of class at most \(n\) are \(\tau\)-stable we can apply Lemma 12 and by (35) we have for any contour \(\gamma\) of class \(k\leq n\) \[a_{k}^{\#}\delta^{d}\,|\operatorname{Int}\gamma\,|^{1/d} =a_{n}^{\#}\delta^{d}\,|\operatorname{Int}\gamma\,|^{1/d}+(a_{k}^ {\#}-a_{n}^{\#})\delta^{d}\,|\operatorname{Int}\gamma\,|^{1/d}\] \[\leq a_{n}^{\#}\delta^{d}\,|\operatorname{Int}\gamma\,|^{1/d}+2 \beta^{-1}k^{1/d}e^{-\tau k^{d-1/d}/2}\] \[\leq a_{n}^{\#}\delta^{d}\,|\operatorname{Int}\gamma\,|^{1/d}+ \frac{\rho_{0}}{16}.\] Therefore, when the contours are stable it implies that \(a_{k}^{\#}\delta^{d}\,|\operatorname{Int}\gamma\,|^{1/d}\leq\nicefrac{{\rho_{0 }}}{{8}}\) and thus \(\widehat{\omega}_{\gamma}^{\#}=w_{\gamma}^{\#}\). In contrast, we would call contours that doesn't satisfy this condition unstable. The stability of a contour depends on the parameter \(s\) as it affects the value of \(a_{n}^{\#}\). Thus we have two cases to consider. The first case is \(a_{n}^{\#}=0\). 
Consequently all contours are stable, therefore according to Theorem 11 we have \[Z_{\Lambda}^{\#} =\widehat{Z}_{\Lambda}^{\#}=e^{\theta\delta^{d}\widehat{\varphi} _{n}^{\#}|\Lambda|\delta^{d}+\Delta}\] \[\leq e^{\theta\delta^{d}\widehat{\varphi}_{n}^{\#}|\Lambda|+| \theta_{ext}\Lambda|}\] and (18) is proved. Now let us consider \(a_{n}^{\#}>0\), in this case some contours must be unstable. Therefore we can partition the configurations that generate among the external contours those that are unstable \[Z_{\Lambda}^{\#}=\sum_{\begin{subarray}{c}\Gamma\in\mathcal{C}_{ext}^{\#}( \Lambda)\\ \text{unstable}\end{subarray}}\int e^{-\beta H_{\Lambda}(\omega)}\mathbbm{1}_ {\{\forall i\in\partial_{\text{\tiny{\it{Im}}}}\Lambda,\sigma(\omega,j)=\#\}} \mathbbm{1}_{\{\Gamma\subset\Gamma_{ext}(\omega)\}}\Pi_{\widehat{\Lambda}}^{ \delta\#}(d\omega).\] Similar to what we did in (12), we can write each integral as a product of integrals with respect to Poisson point process distribution on different domains using the properties (9), (10) and (11). The only difference is that we do consider for the moment only unstable contours and so inside \(\Lambda_{ext}\) we have to account for stable contours. Furthermore, these stable contours that cannot encircle any external unstable contour due to the hereditary property of stable contours. \[Z_{\Lambda}^{\#}=\sum_{\begin{subarray}{c}\Gamma\in\mathcal{C}_{ext}^{\#}( \Lambda)\\ \text{unstable}\end{subarray}}Z_{\Lambda_{ext},stable}^{\#}\prod_{\gamma\in \Gamma}I_{\gamma}\,Z_{\operatorname{Int}_{0}\gamma}^{(0)}\,Z_{\operatorname{ Int}_{1}\gamma}^{(1)},\] where \(Z_{\Lambda_{ext},stable}^{\#}\) denotes the partition function restricted to configurations for which all contours are stable and by construction they are of class at most \(n\). Since all those contours are of class at most \(n\) they are also \(\tau\)-stable therefore they can be studied using a convergent cluster expansion according to Theorem 11 and thus \[Z_{\Lambda_{ext},stable}^{\#} =g_{\#}^{|\Lambda_{ext}|}\widehat{\Phi}_{n,stable}^{\#}(\Lambda _{ext})\] \[\leq g_{\#}^{|\Lambda_{ext}|}e^{\theta\delta^{d}f_{n,stable}^{ \#}|\Lambda_{ext}|+|\theta_{ext}\Lambda_{ext}|}\] \[\leq e^{\theta\delta^{d}(\widehat{\varphi}_{0}^{\#}+f_{n,stable} ^{\#})|\Lambda_{ext}|+|\theta_{ext}\Lambda_{ext}|}\] where \(f_{n,stable}^{\#}=\lim_{k\to\infty}\frac{1}{\beta|\Lambda_{k}|\delta^{d}}\ln \widehat{\Phi}_{n,stable}^{\#}(\Lambda_{ext})\). 
According to the induction hypothesis we have that \[Z_{\operatorname{Int}_{0}\gamma}^{(0)}\,Z_{\operatorname{Int}_{1} \gamma}^{(1)} \leq e^{\theta\delta^{d}\widehat{\varphi}_{n}|\operatorname{Int} \gamma|}e^{2(|\partial_{ext}\operatorname{Int}_{1}\gamma|+|\partial_{ext} \operatorname{Int}_{0}\gamma|)}\] \[\leq e^{\theta\delta^{d}\widehat{\varphi}_{n}|\operatorname{Int} \gamma|}e^{2|\overline{\gamma}|}.\] For any \(\Gamma\in C^{\#}_{ext}(\Lambda)\) we have \(|\partial_{ext}\Lambda_{ext}|\leq|\partial_{ext}\Lambda|+\sum_{j\in\Gamma}| \overline{\gamma}|\), thus we get that \[Z^{\#}_{\Lambda} \leq\sum_{\begin{subarray}{c}\Gamma\in C^{\#}_{ext}(\Lambda)\\ \text{unstable}\end{subarray}}e^{\beta\delta^{d}(\widehat{\varphi}^{\#}_{0}+f ^{\#}_{n,stable})^{|\Lambda_{ext}|}}e^{|\alpha_{ext}\Lambda|+\sum_{j\in\Gamma}| \overline{\gamma}|}\prod_{\Gamma\in\Gamma}I_{\gamma}e^{\beta\delta^{d}\widehat {\varphi}_{n}|\operatorname{Int}\gamma|}e^{2|\overline{\gamma}|}\] \[\leq e^{\beta\delta^{d}\widehat{\varphi}_{n}|\Lambda|+|\partial_{ ext}\Lambda|}\sum_{\begin{subarray}{c}\Gamma\in C^{\#}_{ext}(\Lambda)\\ \text{unstable}\end{subarray}}e^{-\beta\delta^{d}(\widehat{\varphi}_{n}- \widehat{\varphi}^{\#}_{n,stable})^{|\Lambda_{ext}|}}\prod_{\gamma\in\Gamma}I _{\gamma}e^{(3-\beta\delta^{d}\widehat{\varphi}_{n})|\overline{\gamma}|}\] where we define \(\widehat{\psi}^{\#}_{n,stable}:=\widehat{\psi}^{\#}_{0}+f^{\#}_{n,stable}\). Furthermore when we use the following inequalities \(\widehat{\psi}_{n}\geq\widehat{\psi}^{\#}_{n}\geq\widehat{\psi}^{\#}_{0}= \frac{\ln(g_{n})}{\beta\delta^{d}}\) we get that \(I_{\gamma}e^{-\beta\delta^{d}\widehat{\varphi}_{n}|\overline{\gamma}|}\leq e ^{-\tau|\overline{\gamma}|}\). Therefore we have \[Z^{\#}_{\Lambda}\leq e^{\beta\delta^{d}\widehat{\varphi}_{n}|\Lambda|+| \partial_{ext}\Lambda|}\sum_{\begin{subarray}{c}\Gamma\in C^{\#}_{ext}( \Lambda)\\ \text{unstable}\end{subarray}}e^{-\beta\delta^{d}(\widehat{\varphi}_{n}- \widehat{\varphi}^{\#}_{n,stable})^{|\Lambda_{ext}|}}\prod_{\gamma\in\Gamma}e ^{-(\tau-3)|\overline{\gamma}|}.\] It remains to prove that the sum is bounded by \(e^{|\partial_{ext}\Lambda|}\). First we note that \(\widehat{\psi}_{n}-\widehat{\psi}^{\#}_{n,stable}=a^{\#}_{n}+f^{\#}_{n}-f^{\#} _{n,stable}\). By construction, the clusters that appear in the cluster expansion of \(f^{\#}_{n}-f^{\#}_{n,stable}\) contain at least one unstable contour \(\gamma\). Therefore \[|\overline{\gamma}|\geq|\operatorname{Int}\gamma|^{d-1/d}\geq\left(\frac{\rho _{0}}{16a^{\#}_{n}\delta^{d}}\right)^{(d-1)}\] and by Lemma 10 and (36) \[|f^{\#}_{n}-f^{\#}_{n,stable}|\leq\beta^{-1}\delta^{-d}\exp\left(-\max\{( \nicefrac{{\rho_{0}}}{{16a^{\#}_{n}\delta^{d}}})^{d-1},I_{0}\}\frac{\tau}{2} \right)\leq\frac{a^{\#}_{n}}{2}. \tag{41}\] At the end we obtain \[\widehat{\psi}_{n}-\widehat{\psi}^{\#}_{n,\text{stable}}\geq\frac{a^{\#}_{n} }{2}. \tag{42}\] Now let us define new weights \(\omega^{*}_{\gamma}\) as follow \[\omega^{*}_{\gamma}=\begin{cases}e^{-(\tau-5)|\overline{\gamma}|}&\text{if $ \gamma$ is unstable}\\ 0&\text{otherwise}.\end{cases}\] We denote by \(\Phi^{*}\) the associated polymer development and have \[g^{*}=\lim_{k\to\infty}\frac{1}{\beta\delta^{d}|\Lambda_{k}|}\ln\Phi^{*}( \Lambda_{k}).\] For sufficiently large \(\beta\) we can assure by Theorem 11 that it is a convergent cluster expansion. 
Since all contours that contribute to \(g^{*}\) are all unstable, we obtain an inequality similar to (41) \[|g^{*}|\leq\beta^{-1}\delta^{-d}\exp\left(-\max\{(\nicefrac{{\rho_{0}}}{{16a^ {\#}_{n}\delta^{d}}})^{d-1},I_{0}\}\frac{\tau}{2}\right)\leq\frac{a^{\#}_{n}} {2}. \tag{43}\] Therefore with (42) and (43) we have \(\widehat{\psi}_{n}-\widehat{\psi}_{n}^{\#}\geq g^{*}\) and thus \[\sum_{\begin{subarray}{c}\Gamma\in C_{ext}^{\#}(\Lambda)\\ \text{unstable}\end{subarray}}e^{-\beta\delta^{d}(\widehat{\psi}_{n}-\widehat{ \psi}_{n,stable}^{\#})|\Lambda_{ext}|}\prod_{\gamma\in\Gamma}e^{-(\tau-3)| \overline{\gamma}|} \leq\sum_{\begin{subarray}{c}\Gamma\in C_{ext}^{\#}(\Lambda)\\ \text{unstable}\end{subarray}}e^{-\beta\delta^{d}g^{*}|\Lambda_{ext}|}\prod_{ \gamma\in\Gamma}e^{-(\tau-3)|\overline{\gamma}|}\] \[\leq e^{-\beta\delta^{d}g^{*}|\Lambda|}\sum_{\begin{subarray}{c} \Gamma\in C_{ext}^{\#}(\Lambda)\\ \text{unstable}\end{subarray}}\prod_{\gamma\in\Gamma}e^{-(\tau-3)|\overline{ \gamma}|}e^{\beta\delta^{d}g^{*}(\overline{|\gamma|}+|\operatorname{Int} \gamma|)}.\] By (43) we know that \(\beta\delta^{d}g^{*}\leq 1\), and again with Theorem 11 we know that \(\Phi_{\operatorname{Int}\gamma}^{*}\geq e^{\beta\delta^{d}g^{*}|\operatorname{ Int}\gamma|-|\overline{\gamma}|}\) and so \[\sum_{\begin{subarray}{c}\Gamma\in C_{ext}^{\#}(\Lambda)\\ \text{unstable}\end{subarray}}e^{-\beta\delta^{d}(\widehat{\psi}_{n}- \widehat{\psi}_{n,stable}^{\#})|\Lambda_{ext}|}\prod_{\gamma\in\Gamma}e^{-( \tau-3)|\overline{\gamma}|} \leq e^{-\beta\delta^{d}g^{*}|\Lambda|}\sum_{\begin{subarray}{c} \Gamma\in C_{ext}^{\#}(\Lambda)\\ \text{unstable}\end{subarray}}\prod_{\gamma\in\Gamma}e^{-(\tau-5)|\overline{ \gamma}|}\Phi_{\operatorname{Int}\gamma}^{*}\] \[=e^{-\beta\delta^{d}g^{*}|\Lambda|}\Phi_{\Lambda}^{*}\] \[\leq e^{|\partial_{ext}\Lambda|}.\] In summary, we have \[Z_{\Lambda}^{\#}\leq e^{\beta\delta^{d}\widehat{\psi}_{n}|\Lambda|+2|\partial_ {ext}\Lambda|}\] which is exactly (18) in the case where \(|\Lambda|=n+1\). If \(|\Lambda|\leq n\) it is sufficient to notice that \((\widehat{\psi}_{n}^{\#})_{n\in\operatorname{N}}\) is increasing. Let us now prove that (19) holds. We start with the case \(|\Lambda|=n+1\) and by a similar argument it is true for any smaller \(\Lambda\). By a direct computation we have \[\frac{\partial Z_{\Lambda}^{\#}}{\partial z} =-|\Lambda|\delta^{d}Z_{\Lambda}^{\#}+\frac{1}{z}\int N_{ \widehat{\Lambda}}(\omega)e^{-\beta H_{\Lambda}(\omega)}\mathbb{1}_{\{\forall i \in\partial_{\text{int}}\Lambda,\varpi(\omega,i)=\#\}}\Pi_{\widehat{\Lambda}} ^{z}(\omega)\] \[=-|\Lambda|\delta^{d}Z_{\Lambda}^{\#}+\frac{1}{z}E_{P_{\Lambda}^{ \#}}(N_{\widehat{\Lambda}})Z_{\Lambda}^{\#}.\] We can observe that for any configuration \(\omega\in\Omega_{f}\) and for any contour \(\gamma\in\Gamma(\omega)\) created by this configuration we have \(H_{\overline{\gamma}}(\omega)\geq|\overline{\gamma}_{1}|\delta^{d}+\rho_{0}| \overline{\gamma}|>0\), thus \(H_{\Lambda}(\omega)\geq 0\). By Donsker-Varadhan inequality for the Kullback-Liebler divergence, denoted by \(I(\cdot|\cdot)\), we have \[E_{P_{\Lambda}^{\#}}(N_{\widehat{\Lambda}}) \leq I(P_{\Lambda}^{\#}|\Pi_{\widehat{\Lambda}}^{z})+\ln E_{\Pi_{ \widehat{\Lambda}}^{z}}(e^{N_{\widehat{\Lambda}}})\] \[\leq\int-\beta H_{\Lambda}d\,P_{\Lambda}^{\#}-\ln Z_{\Lambda}^{ \#}+(e-1)z\delta^{d}\,|\Lambda|\] \[\leq-\ln Z_{\Lambda}^{\#}+(e-1)z\delta^{d}\,|\Lambda|. \tag{44}\] Furthermore we know that the contours which appear in \(|\Lambda|\) are at most of the class \(n\). 
Therefore we know that their truncated weights are \(\tau\)-stable and by Theorem 11 \[Z_{\Lambda}^{\#}\geq\widehat{Z}_{\Lambda}^{\#}\geq e^{\beta\delta^{d}\widehat {\psi}_{n}^{*}|\Lambda|-|\partial_{ext}\Lambda|}. \tag{45}\] From inequalities (45) and (44) and by using the fact that \((\widehat{\psi}_{n}^{\#})_{n\in\operatorname{N}}\) is increasing we obtain \[E_{P_{\Lambda}^{\#}}(N_{\widehat{\Lambda}})\leq\left((e-1)z-\beta\widehat{ \psi}_{0}^{\#}\right)\delta^{d}\,|\Lambda|+|\partial_{ext}\Lambda|.\] At the end we have \[\left|\frac{\partial Z_{\Lambda}^{\#}}{\partial s}\right|\leq\left[\left(e+ \left|\frac{\widehat{\psi}_{0}^{\#}}{s}\right|\right)\beta\delta^{d}\,|\Lambda |+\frac{1}{s}|\partial_{ext}\Lambda|\right]\,Z_{\Lambda}^{\#}.\] Now using (18) we obtain for \(s\in U_{\beta}\) \[\left|\frac{\partial Z^{\#}_{\Lambda}}{\partial s}\right|\leq\left(C_{1}\beta \delta^{d}\left|\Lambda\right|+C_{2}|\partial_{ext}\Lambda|\right)\,e^{\beta \delta^{d}\widehat{\varphi}_{n}\left|\Lambda\right|+2|\partial_{ext}\Lambda|}\] and (19) is proved. Now let us prove (15) which is the \(\tau\)-stability of truncated weights for contours of class \(n+1\). We consider a contour \(\gamma\) of class \(n+1\). First of all, we can observe that \(\widehat{\omega}^{\#}_{\gamma}=0\) whenever \((\widehat{\psi}^{\#^{*}}_{n}-\widehat{\psi}^{\#}_{n})\delta^{d}\left|\text{ Int}_{\#^{*}}\,\gamma\right|^{1/d}>\nicefrac{{\rho_{0}}}{{4}}\). So we can assume that \[(\widehat{\psi}^{\#^{*}}_{n}-\widehat{\psi}^{\#}_{n})\delta^{d}\left|\text{ Int}_{\#^{*}}\,\gamma\right|^{1/d}\leq\frac{\rho_{0}}{4}. \tag{46}\] Since \(\left|\text{ Int}\,\gamma\right|=n+1\) we can apply the induction hypothesis on the partition functions that appears in the truncated weights particularly we can use (18) and have \[Z^{\#^{*}}_{\text{Int}_{\#^{*}}\,\gamma}\leq e^{\beta\delta^{d}\widehat{ \varphi}_{n}\left|\text{ Int}_{\#^{*}}\,\gamma\right|+2|\partial_{ext}\text{ Int}_{\#^{*}}\,\gamma\right|}. \tag{47}\] By combining the previous inequalities (47) and (45) we have \[\frac{Z^{\#^{*}}_{\text{Int}_{\#^{*}}\,\gamma}}{Z^{\#}_{\text{ Int}_{\#^{*}}\,\gamma}} \leq e^{\beta\delta^{d}(\widehat{\varphi}_{n}-\widehat{\varphi}^{ \#}_{n})\,\text{Int}_{\#^{*}}\,\gamma\left|+3|\partial_{ext}\text{ Int}_{\#^{*}}\,\gamma\right|}\] \[\leq e^{\beta\delta^{d}(\widehat{\varphi}_{n}-\widehat{\varphi}^{ \#}_{n})\,\text{Int}_{\#^{*}}\,\gamma\left|^{1/d}\,\text{Int}_{\#^{*}}\,\gamma \right|^{(d-1)/d}+3|\partial_{ext}\text{ Int}_{\#^{*}}\,\gamma\right|}.\] Furthermore, applying hypothesis (46) and the isoperimetric inequality we have \[\frac{Z^{\#^{*}}_{\text{Int}_{\#^{*}}\,\gamma}}{Z^{\#}_{\text{ Int}_{\#^{*}}\,\gamma}} \leq e^{\beta\nicefrac{{\rho_{0}}}{{4}}\left|\text{ Int}_{\#^{*}}\,\gamma\right|(d-1)/d+3|\partial_{ext}\text{ Int}_{\#^{*}}\,\gamma\left|\right.\] \[\leq e^{(\frac{1}{d}\beta\rho_{0}+3)|\partial_{ext}\text{ Int}_{\#^{*}}\,\gamma\left|\right.}. \tag{48}\] Therefore with Proposition 3, the upper bound on the ratio of partition functions in inequality (48) and the fact that \(|\partial_{ext}\text{ Int}_{\#^{*}}\,\gamma|\leq|\overline{\gamma}|\) we have \[\widehat{\omega}^{\#}_{\gamma} \leq e^{-(\beta\rho_{0}-2)|\overline{\gamma}|}e^{(\frac{1}{d}\beta \rho_{0}+3)|\overline{\gamma}|}\] \[\leq e^{-(\frac{3}{d}\beta\rho_{0}-5)|\overline{\gamma}|}\] \[\leq e^{-\tau|\overline{\gamma}|}\] and the weight \(\widehat{\omega}^{\#}_{\gamma}\) is \(\tau\)-stable. Let us now show that (17) holds for a contour \(\gamma\) of class \(n+1\). 
Similar to the proof of (15) we consider only the case when \((\widehat{\psi}^{\#^{*}}_{n}-\widehat{\psi}^{\#}_{n})\delta^{d}\left|\text{ Int}_{\#^{*}}\,\gamma\right|^{1/d}\leq\nicefrac{{\rho_{0}}}{{4}}\). By a direct computation \[\frac{\partial\widehat{t}\widehat{\omega}^{\#}_{\gamma}}{\partial z}=\left(- \frac{|\overline{\gamma}|}{g_{\#}}\frac{\partial g_{\#}}{\partial z}\,I_{ \gamma}+\frac{\partial I_{\gamma}}{\partial z}\right)\left(g_{\#}^{-| \overline{\gamma}|}\kappa\frac{Z^{\#^{*}}_{\text{Int}_{\#^{*}}\,\gamma}}{Z^{ \#}_{\text{Int}_{\#^{*}}\,\gamma}}\right)+\frac{\partial}{\partial z}\left( \kappa\frac{Z^{\#^{*}}_{\text{Int}_{\#^{*}}\,\gamma}}{Z^{\#}_{\text{Int}_{\#^ {*}}\,\gamma}}\right)g_{\#}^{-|\overline{\gamma}|}I_{\gamma}.\] The first term of the derivative can be bounded in a similar fashion as in (37) using the facts that \(s\in U_{\beta}\) and by Proposition 3 \[\left|-\frac{|\overline{\gamma}|}{g_{\#}}\frac{\partial g_{\#}}{\partial z}I_{ \gamma}+\frac{\partial I_{\gamma}}{\partial z}\right|\left(g_{\#}^{-| \overline{\gamma}|}\kappa\frac{Z^{\#^{*}}_{\text{Int}_{\#^{*}}\,\gamma}}{Z^{ \#}_{\text{Int}_{\#^{*}}\,\gamma}}\right)\leq(2+2K)\delta^{d}\left|\overline{ \gamma}\,|e^{-\tau|\overline{\gamma}|}. \tag{49}\] For the other term, the derivative of the cut-off function satisfies \[\left|\frac{\partial\kappa}{\partial z}\right|\leq\left(\left|\frac{\partial \widehat{\varphi}_{n}^{\#}}{\partial z}\right|+\left|\frac{\partial\widehat{ \varphi}_{n}^{\#^{*}}}{\partial z}\right|\right)\delta^{d}|\operatorname{Int}_{ \#^{*}}\gamma|^{\frac{1}{d}}\|\kappa^{\prime}\|.\] For \(s\in U_{\beta}\) and by (38) we have \[\frac{\partial\psi_{n}^{\#}}{\partial z}=\frac{1}{\beta\delta^{d}}g_{\#} \frac{\partial g_{\#}}{\partial z}+\frac{1}{\beta}\frac{\partial f_{n}^{\#}}{ \partial s}\ \Rightarrow\ \left|\frac{\partial\psi_{n}^{\#}}{\partial z}\right|\leq \frac{2}{\beta}.\] Therefore with the isoperimetric inequality we have \[\left|\frac{\partial\kappa}{\partial z}\right|\leq\frac{4}{\beta}\delta^{d} \left|\overline{\gamma}\right|\|\kappa^{\prime}\|. \tag{50}\] For the derivative of the ratio of partition functions \[\left|\frac{\partial}{\partial z}\frac{Z_{\operatorname{Int}_{\#^{*}}}^{\#^{ *}}}{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}}\right|\leq\left|\frac{ \partial Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}/\partial z}{Z_{ \operatorname{Int}_{\#^{*}}\gamma}^{\#}}\right|+\left|\frac{\partial Z_{ \operatorname{Int}_{\#^{*}}\gamma}^{\#}/\partial z}{Z_{\operatorname{Int}_{ \#^{*}}\gamma}^{\#}}\right|\frac{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#^{* }}}{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}}.\] Similarly to how we bounded the ratio of partition function and using (19) we have for \(\#^{*}\in\{0,1\}\) \[\left|\frac{\partial Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#^{*}}/\partial z }{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}}\right|\leq\left(C_{1}\delta^{d} |\operatorname{Int}_{\#^{*}}\gamma|+\frac{C_{2}}{\beta}|\partial_{ext} \operatorname{Int}_{\#^{*}}\gamma|\right)e^{(1/4\beta\rho_{0}+3)\left| \overline{\gamma}\right|}.\] Therefore using the isoperimetric inequality and the fact that \(|\partial_{ext}\operatorname{Int}_{\#^{*}}\gamma|\leq\left|\overline{\gamma}\right|\) we have \[\left|\frac{\partial}{\partial z}\frac{Z_{\operatorname{Int}_{\#^{*}}\gamma}^ {\#^{*}}}{Z_{\operatorname{Int}_{\#^{*}}\gamma}^{\#}}\right|\leq\left(C_{1} \delta^{d}+\frac{C_{2}}{\beta}\right)\left|\overline{\gamma}\right|^{d/(d-1)} e^{(1/2\beta\rho_{0}+6)\left|\overline{\gamma}\right|}. 
\tag{51}\] When combining inequalities (49), (50), (51) and Proposition 3 we have \[\left|\frac{\partial\widehat{\upsilon}_{\gamma}^{\#}}{\partial s}\right|\leq \left((2+2K)\delta^{d}\beta+4\delta^{d}\|\kappa^{\prime}\|+C_{1}\beta\delta^{ d}+C_{2}\right)\left|\overline{\gamma}\right|^{d/(d-1)}\!\!e^{-\tau\left| \overline{\gamma}\right|}.\] To finish the proof let us show that (16) holds at the order \(n+1\). Since we have proven this far that the truncated weights of class at most \(n+1\) are \(\tau\)-stable we can apply use Lemma 12. Let \(\gamma\) of class \(n+1\) if we have \(a_{n+1}^{\#}\delta^{d}|\operatorname{Int}\gamma|^{1/d}\leq\frac{\rho_{0}}{16}\) then by definition of truncated weights we would have \(\widehat{\upsilon}_{\gamma}^{\#}=\upsilon_{\gamma}^{\#}\). Now let's consider a contour \(\gamma\) of class \(k\leq n\), according to Lemma 12 and (35) we have \[a_{k}^{\#}\delta^{d}|\operatorname{Int}\gamma|^{1/d} =a_{n+1}^{\#}\delta^{d}|\operatorname{Int}\gamma|^{1/d}+(a_{k}^{ \#}-a_{n+1}^{\#})\delta^{d}|\operatorname{Int}\gamma|^{1/d}\] \[\leq a_{n+1}^{\#}\delta^{d}|\operatorname{Int}\gamma|^{1/d}+2 \beta^{-1}k^{1/d}e^{-\kappa^{d-1/d}/2}\] \[\leq a_{n+1}^{\#}\delta^{d}|\operatorname{Int}\gamma|^{1/d}+\frac{ \rho_{0}}{16}.\] Therefore if \(a_{n+1}^{\#}\delta^{d}|\operatorname{Int}\gamma|^{1/d}\leq\nicefrac{{\rho_{0}}} {{16}}\) it implies that \(a_{k}^{\#}\delta^{d}|\operatorname{Int}\gamma|^{1/d}\leq\nicefrac{{\rho_{0}}} {{16}}\) which in turn would imply \(\widehat{\upsilon}_{\gamma}^{\#}=w_{\gamma}^{\#}\) by definition of the truncated weights.
2309.09026
Ehrhart Polynomials of Generic Orthotopes
A generic orthotope is an orthogonal polytope whose tangent cones are described by read-once Boolean functions. The purpose of this note is to develop a theory ofEhrhart polynomials for integral generic orthotopes. The most remarkable part of this theory is a relation between the number of lattice points in an integral generic orthotope $P$ and the number of unit cubes in $P$ of various floral types. This formula is facilitated through the introduction of a set of "local polynomials" defined for every read-once Boolean function.
David Richter
2023-09-16T15:38:50Z
http://arxiv.org/abs/2309.09026v1
# Ehrhart Polynomials of Generic Orthotopes ###### Abstract A generic orthotope is an orthogonal polytope whose tangent cones are described by read-once Boolean functions. The purpose of this note is to develop a theory of Ehrhart polynomials for integral generic orthotopes. The most remarkable part of this theory is a relation between the number of lattice points in an integral generic orthotope \(P\) and the number of unit cubes in \(P\) of various floral types. This formula is facilitated through the introduction of a set of "local polynomials" defined for every read-once Boolean function. 2020 AMS Mathematics Subject Classification: 51M20, 52B11, 52B70. ## 1 Introduction This author introduced a theory of generic orthotopes in [7]. The goal of this note is to establish a theory of Ehrhart polynomials specialized to integral generic orthotopes, extending the results of [7]. The most prominent result here is a formula, stated as Theorem 5.1, which demonstrates that counting lattice points in a generic orthotope \(P\) is equivalent to counting integral unit cubes in \(P\) while keeping track of their "floral types". We demonstrate this formula by introducing an algebra \(\mathcal{F}\) generated by a set of equivalence classes of read-once boolean functions (essentially series-parallel diagrams) and a couple of useful bases \(\{h(e_{\alpha})\}\) and \(\{h^{-1}(e_{\alpha})\}\) for this algebra. This analysis gives rise to a plethora of identities concerning integral generic orthotopes. It also showcases the calculus of series-parallel diagrams used for studying generic orthotopes as introduced in [7]. A major theme of this work (including [7]) is that while many workers have studied aspects of orthogonal polytopes in a fixed dimension, usually \(d=2\) or \(d=3\), relatively few have considered common structural questions among orthogonal polytopes irrespective of dimension. As one may notice, we may prove all of the results shown here and in [7] in an elementary manner. Thus, the results shown here appear among the lowest-hanging fruit in the deep, dark, and largely unexplored forest of general orthogonal polytopes. On another point, this author wants to persuade the reader that generic orthotopes in particular possess a distinctive intrinsic charm and hopes that they will notice this throughout this theory; for example, this author is impressed at the way that ideas and results tend to flow as soon as one demands that every vertex of an orthogonal polytope be floral. (On this point, it should be obvious that this author is biased!) We have attempted to be as pedantic as possible in this note. Thus, as we develop our adaptation of Ehrhart polynomials to generic orthotopes, we concurrently walk the reader through a running example in 2 dimensions. The main reason for this is that we wish to highlight as brightly as possible the distinctiveness of the Ehrhart theory as it pertains to generic orthotopes in particular. Indeed, the underlying algebra \(\mathcal{F}\) and the polynomials \(h(e_{\alpha})\) or \(h^{-1}(e_{\alpha})\) which are instrumental in expressing our results have been not found among any extant works about Ehrhart theory and lattice point enumeration. (This should not be surprising because the introduction of read-once Boolean functions in the study of orthogonal polytopes appears to be novel. For that matter, we have not found these notions among the extant literature on Boolean functions or Boolean algebras.) 
The example serves to display the motivation of the theory but also to better understand the notation and concepts used throughout. We also offer an example in 3 dimensions in a separate section, should the reader wish to see something more illustrative or more substantial. Also, this author feels that the examples also serve to better understand the foundational article [7]. ### Some notions and conventions This subsection summarizes some notation conventions and notions used throughout. The reader should refer to [7] for supplemental details. We use the symbol \(d\) to denote the dimension of the ambient space, and in most cases assume \(d\) is fixed; certainly \(d\) is fixed when we speak of a particular orthogonal polytope. Accordingly, we denote \([d]:=\{1,2,3,...,d\}\). In most cases, we use the symbol \(P\) for an arbitrary \(d\)-dimensional bounded integral generic orthotope. A "congruence class" may refer to a floral arrangement congruence class or a floral vertex congruence class. In all cases, we use "congruence" to mean Euclidean-geometric congruence. As we explained in [7], the Coxeter group \(BC_{d}\) acts on floral arrangements, so we say that \(\alpha\) is congruent to \(\alpha^{\prime}\) if there is a group element \(g\in BC_{d}\) such that \(\alpha^{\prime}=g\cdot\alpha\). In other words, \(\alpha\) is congruent to \(\alpha^{\prime}\) if both \(\alpha\) and \(\alpha^{\prime}\) lie in the same orbit under the standard action of \(BC_{d}\) on \(\mathbb{R}^{d}\). Notice that the phrase "floral arrangement congruence class" is meaningful only when \(d\) is fixed. We use lower case Greek letters \(\alpha,\beta\) to denote either a floral vertex, a floral arrangement, a floral vertex congruence class, or a floral arrangement congruence class. We justify the overuse of this notation by the fact that this theory is already laden with notation; we try to explain the usage where it may be unclear. The _standard \(d\)-dimensional unit cube in \(\mathbb{R}^{d}\)_ is \(I^{d}=[0,1]^{d}\). A _\(d\)-dimensional integral unit cube_ is any translate \(v+I^{d}\), where \(v\in\mathbb{Z}^{d}\). If \(k\in\{0,1,2,...,d\}\), then a _\(k\)-dimensional integral unit cube_ is any \(k\)-dimensional face of a \(d\)-dimensional integral unit cube. A _relatively open \(k\)-dimensional integral unit cube_ is the relative interior of a \(k\)-dimensional integral unit cube with respect to its affine hull. Notice that every \(x\in\mathbb{R}^{d}\) belongs to a unique relatively open \(k\)-dimensional integral unit cube for some \(k\in\{0,1,2,...,d\}\). The dimension of the cube containing \(x=(x_{1},x_{2},...,x_{d})\) is given by \[k(x):=\#\left(\{i:x_{i}\notin\mathbb{Z}\}\right).\] Thus, \(k(x)=0\) precisely when \(x\in\mathbb{Z}^{d}\) is a lattice point and \(k(x)=d\) precisely when \(x\) lies interior to a \(d\)-dimensional integral unit cube. On multiple occasions throughout, we refer to the "loral type" of a point or a relatively open cube. The following, which may be inferred from the results in [7], is intended to clarify the meaning of this. **Proposition 1.1**.: _Suppose \(P\subset\mathbb{R}^{d}\) is a \(d\)-dimensional integral generic orthotope and \(C\subset P\) is a relatively open integral unit cube. 
Then there is a floral arrangement \(\alpha\subset\mathbb{R}^{d}\) such that the tangent cone at every point of \(C\) is congruent to \(\alpha\)._ Thus, if \(C\subset P\) is a relatively open integral unit cube, then define the _floral type_ of \(C\) as the floral arrangement congruence class which contains the tangent cone of any given point of \(C\). Notice that this meaning extends to the case when \(C\) is a point (i.e. a cube of dimension zero) of \(P\). Figure 1.1 displays a \(3\)-dimensional generic orthotope whose faces are marked with various floral types. We recall from [7] that we associate a _dimension_ to every floral vertex \(\alpha\), defined as the number of segments in the series-parallel diagram which defines it. Thus, \(\dim(\bullet)=0\), \(\dim(\bullet\bullet)=1\), \(\dim(\bullet\bullet\bullet)=\dim(\bullet\bullet\bullet)=2\), and so on. On several occasions, we also refer to the _degree_ \(\deg\alpha\) of a floral arrangement, this being its degree of genericity as defined in [7]. Moreover, if \(\alpha\) is a floral arrangement, then there exists a positive integer \(k\) and a floral vertex \(\beta\) supported on \(k\) half-spaces such that \(\alpha\) is congruent to \(\beta\times\mathbb{R}^{d-k}\). In such a circumstance, we may rely on the relation \(\deg\alpha=d-\dim\beta=d-k\). Suppose \(s:[d]\rightarrow\{-1,0,1\}\) is a sign vector. Define the _generalized orthant of \(s\)_ as \[\Omega_{s}:=\{(x_{1},x_{2},...,x_{d}):\text{sign}(x_{i})=s_{i}\text{ for all }i\}.\] Notice \(\Omega_{s}\) is homeomorphic to an open cell \(\mathbb{R}^{k(s)}\), where \(k(s)=\#\left(\{i:s_{i}\neq 0\}\right)\), and \(\mathbb{R}^{d}\) contains precisely \(2^{k}\cdot\binom{d}{k}=2^{k}\cdot\frac{d!}{k!(d-k)!}\) generalized orthants of dimension \(k\). Also, each \(x=(x_{1},x_{2},...,x_{d})\in\mathbb{R}^{d}\) lies in the generalized orthant \(\Omega_{s}\), where \(s_{i}=\text{sign}(x_{i})\in\{-1,0,1\}\) for all \(i\). ## 2 Introductory theory and example If \(P\subset\mathbb{R}^{d}\) is a bounded subset, then define the _lattice point enumerator_ of \(P\) by \[L(P):=\#(P\cap\mathbb{Z}^{d}).\] If \(P\) is an integral convex polytope and \(tP\) is obtained by uniformly dilating \(P\) by the factor of \(t\in\mathbb{N}\), then \(L(tP)\) is well known to be a polynomial function of \(t\) called the Ehrhart polynomial of \(P\), cf. [2, 6]. This note adapts this theory to the case when \(P\) is an integral generic orthotope. Throughout this note we use an example in 2 dimensions to illustrate the theory. Thus, define an orthogonal polygon by \(P:=\bigcup_{v\in S}\left(v+[0,1]^{2}\right),\) where \[S=\left\{\begin{array}{l}(0,0),(0,1),(0,2),(0,3),(1,3),(2,1),(2,2),\\ (2,3),(3,1),(3,2),(3,3),(4,0),(4,1),(4,2),\\ (5,2),(6,1),(6,2),(6,3),(6,4)\end{array}\right\}.\] Figure 2.1 displays a sketch of \(P\). Apparently \(P\) is an integral generic orthotope with \(L(P)=37\). If \(\alpha\) is a floral arrangement congruence class, then let \(L_{\alpha}(P)\) denote the number of lattice points \(x\in P\cap\mathbb{Z}^{d}\) such that the tangent cone at \(x\) lies in \(\alpha\). For example, \(L_{\bullet}(P)\) counts the number of lattice points lying interior to \(P\) and \(L_{\alpha}(P)\) counts the number of vertices of \(P\) congruent to a member of \(\alpha\) when \(\alpha\) is a floral vertex congruence class. In the example, we distinguish the lattice points of various types by color.
Thus, \[\begin{array}{rcll}L_{\bullet}(P)&=&3&(\text{green}),\\ L_{\bullet}(P)&=&16&(\text{black}),\\ L_{\bullet}(P)&=&11&(\text{blue}),\\ L_{\bullet}(P)&=&7&(\text{red}).\end{array}\] If \(t\) is a positive integer, then let \(tP\) denote the image of \(P\) under the dilation map \(x\mapsto tx\). That \(L(tP)\) is a polynomial function of \(t\) follows from well-known results concerning Ehrhart polynomials. Thus, we shall refer to \(L(tP)\) as the _Ehrhart polynomial_ of \(P\).

Figure 1.1: Floral types for a 3-dimensional generic orthotope.

We are interested in decomposing \(L(tP)\) according to floral arrangement congruence classes. Thus, we write \[L(tP)=\sum_{\alpha}L_{\alpha}(tP),\] where we sum over floral arrangement congruence classes \(\alpha\). As \(L_{\alpha}(tP)\) is a polynomial function of \(t\) for each \(\alpha\), let \(L_{\alpha,k}(P)\) denote the coefficients such that \[L_{\alpha}(tP)=\sum_{k}L_{\alpha,k}(P)t^{k}.\] The coefficients \(L_{\alpha,k}(P)\) for the running example appear in Figure 2.2. Thus, \[L_{\bullet}(tP)=1-17t+19t^{2}\text{ and }L_{\bullet}(tP)=-18+34t,\] while \(L_{\bullet}(tP)=11\) and \(L_{\bullet}(tP)=7\) are constant. Notice that evaluating \(L_{\alpha}(tP)\) at \(t=1\) yields the values \(L_{\alpha}(P)\) given above. Also notice that the degree of \(L_{\alpha}(tP)\) coincides with the degree of genericity of a tangent cone lying in the equivalence class \(\alpha\). We shall demonstrate a couple of ways to determine \(L_{\alpha}(tP)\) generally. The first is based on counting unit cubes in \(P\) while keeping track of the floral types of their tangent cones. The second method is the formula appearing in Theorem 5.1, which uses a set of "local polynomials" \(h(e_{\alpha})\) and \(h^{-1}(e_{\alpha})\) lying in a certain associative algebra which we introduce below. The first method is trivial, as we are still counting lattice points in more or less the same way. However, this method is significant as it will facilitate the establishment of the second method.

Figure 2.1: Illustration of the running example.

## 3 Counting cubes Suppose \(P\subset\mathbb{R}^{d}\) is an integral generic orthotope. For each floral arrangement congruence class \(\alpha\) and for each integer \(k\geq 0\), let \(C_{\alpha,k}(P)\) denote the number of relatively open \(k\)-dimensional integral unit cubes \(C\subset P\) such that the tangent cone along each point of \(C\) lies in \(\alpha\). Figure 3.1 shows the values \(C_{\alpha,k}(P)\) for the running example. For each positive integer \(t\), define \[C_{\alpha}(P,t):=\sum_{k}C_{\alpha,k}(P)t^{k}.\] Although \(C_{\alpha}(P,t)\) does not count cubes of type \(\alpha\) in the dilate \(tP\), it is closely related to the function \(L_{\alpha}(tP)\): **Proposition 3.1**.: _Suppose \(P\) is an integral generic orthotope and \(t\) is a positive integer. Then_ \[L_{\alpha}(tP)=C_{\alpha}(P,t-1). \tag{1}\] Proof.: Suppose \(C\subset P\) is a relatively open integral unit cube of dimension \(k\) and \(t\) is a positive integer. Then there are precisely \((t-1)^{k}\) lattice points \(x\in tC\) such that the tangent cone at \(x\) coincides with the tangent cone at each point of \(C\). Using the partition of \(P\) into relatively open integral unit cubes (of varying dimensions), the relation (1) follows. Figure 3.2 displays the result of dilating the running example by the factor \(t=3\).
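To make these counts concrete, here is a short Python sketch; it is our own illustration rather than anything from [7]. It rebuilds \(P\) from the corner set \(S\), classifies each lattice point by the number of closed unit cells of \(P\) containing it (a planar shortcut of ours: in dimension 2 this incident-cell count determines the floral type of the tangent cone, with 4 cells at an interior point, 2 at an edge point, 1 at a convex vertex, and 3 at a reflex vertex), and tabulates \(L(tP)\) for small dilations.

```python
from collections import Counter
from itertools import product

# Corners of the unit squares generating the running example P of Section 2.
S = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 1), (2, 2), (2, 3),
     (3, 1), (3, 2), (3, 3), (4, 0), (4, 1), (4, 2), (5, 2),
     (6, 1), (6, 2), (6, 3), (6, 4)}

def lattice_points(t=1):
    """Lattice points of the dilate tP; each cell v + [0,1]^2 dilates to t*v + [0,t]^2."""
    pts = set()
    for (x, y) in S:
        pts.update(product(range(t * x, t * x + t + 1),
                           range(t * y, t * y + t + 1)))
    return pts

def incident_cells(p):
    """Number of closed unit cells of P containing the lattice point p (d = 2 only)."""
    i, j = p
    return sum((i - a, j - b) in S for a in (0, 1) for b in (0, 1))

hist = Counter(incident_cells(p) for p in lattice_points())
print(len(lattice_points()), sorted(hist.items()))
# 37 [(1, 11), (2, 16), (3, 7), (4, 3)]

print([len(lattice_points(t)) for t in (1, 2, 3)])
# [37, 111, 223], matching L(tP) = 19*t**2 + 17*t + 1
```

The printed histogram reproduces the counts \(3\), \(16\), \(11\), and \(7\) above, and summing the four polynomials \(L_{\alpha}(tP)\) from Figure 2.2 indeed gives \(L(tP)=19t^{2}+17t+1\).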
Notice that if \(C\subset P\) is a relatively open integral unit square, for example, then the interior of \(3C\) contains precisely \((3-1)^{2}=4\) lattice points. Using the binomial theorem, we obtain a relation between the coefficients \(L_{\alpha,k}(P)\) and the counts \(C_{\alpha,k}(P)\): **Corollary 3.2**.: _Suppose \(P\) is an integral generic orthotope, \(\alpha\) is a floral arrangement, and \(k\in\{0,1,2,...,\deg\alpha\}\). Then_ \[L_{\alpha,k}(P)=\sum_{j=k}^{\deg\alpha}(-1)^{j+k}\binom{j}{k}C_{\alpha,j}(P).\]

Figure 3.1: Cube counts for the example.

Figure 2.2: Coefficients \(L_{\alpha,k}\) of the lattice point enumerator.

For the running example, we compute \[L_{\bullet}(tP)=C_{\bullet}(P,t-1)=3+21(t-1)+19(t-1)^{2}=1-17t+19t^{2}\] and \[L_{\bullet}(tP)=C_{\bullet}(P,t-1)=16+34(t-1)=-18+34t,\] yielding the first two rows of the table in Figure 2.2. ## 4 Local polynomials This section introduces and studies a set of "local polynomials" \(h(e_{\alpha})\), defined for each floral vertex congruence class \(\alpha\). These are instrumental in the relation, stated as Theorem 5.1, between the number of lattice points and the numbers of relatively open integral unit cubes of various floral types in an integral generic orthotope. Each local polynomial has an expression \[h(e_{\alpha}):=\sum_{\beta}m_{\alpha,\beta}e_{\beta},\] summing over floral vertex congruence classes \(\beta\). Here, \(m_{\alpha,\beta}\) is an integer that measures the occurrence of floral vertices congruent to \(\beta\) in \(\alpha\). As such, their definition requires careful analysis of faces and cross-sections of floral vertices, thus explaining the length of this section. We call the expressions \(h(e_{\alpha})\) "polynomials" because they lie in an infinite-dimensional, commutative, unital associative algebra \(\mathcal{F}\) which behaves much like a ring of polynomials. (We note that the algebra \(\mathcal{F}\), while containing a ring of polynomials in one variable as a subring, has two different ways to multiply. Thus, although \(\mathcal{F}\) qualifies as having the structure of an associative algebra, this terminology neglects this additional structure.)

Figure 3.2: Dilating the example.

We call the expressions \(h(e_{\alpha})\) "local" because they contain information about incidences of floral arrangements in a given floral vertex. A basis for \(\mathcal{F}\) as a vector space consists of all expressions \(e_{\alpha}\), where \(\alpha\) is a floral vertex congruence class, together with \(e_{\star}\). The multiplication in \(\mathcal{F}\) is given by the rule \[e_{\alpha}\cdot e_{\beta}:=e_{\alpha\wedge\beta},\] and the element \(e_{\star}\) serves as the multiplicative identity element. Notice that there is a unique monomorphism \(\mathbb{Q}[t]\hookrightarrow\mathcal{F}\) which maps \(1\mapsto e_{\star}\) and \(t\mapsto e_{\bullet\bullet}\), where \(\bullet\bullet\) denotes the unique one-dimensional floral vertex congruence class. (We assume throughout that the field of scalars is \(\mathbb{Q}\).) The definition of the coefficients \(m_{\alpha,\beta}\) is facilitated by the following observation, which was established in [7]: **Proposition 4.1**.: _Suppose \(\alpha\) is a floral vertex and \(x\in\alpha\). Then there is a floral vertex \(\beta\) such that the tangent cone at \(x\) is congruent to \(\beta\times\mathbb{R}^{d-\dim\beta}\)._ With this, say that \(x\in\alpha\) _has floral type_ \(\beta\) if the tangent cone at \(x\) is congruent to \(\beta\times\mathbb{R}^{d-k}\), where \(\beta\) is a \(k\)-dimensional floral vertex.
If \(x\) lies in the interior of \(\alpha\), then we decree that \(x\) has floral type \(\bullet\). Recalling [7], we may assert: **Proposition 4.2**.: _Suppose \(\alpha\) is a floral vertex and \(x\in\alpha\). (a) If \(x\) lies in the relative interior of an edge \(e\) of \(\alpha\), then the floral type of \(x\) coincides with the cross-section of \(\alpha\) across \(e\). (b) If \(x\) lies in the relative interior of a facet of \(\alpha\), then \(x\) has floral type \(\bullet\bullet\). (c) The origin \(x=\mathbf{0}\) has floral type \(\alpha\)._ Suppose \(\alpha\) and \(\beta\) are floral vertices with \(\dim\alpha=d\) and \(\dim\beta=k\). From the results above, we may define \[m_{\alpha,\beta}:=\sum_{f}\mu_{d-k}(f),\] summing over all \((d-k)\)-dimensional faces \(f\) which have floral type \(\beta\), where \(\mu_{d-k}\) is the \((d-k)\)-dimensional orthant-counting function (defined in [7]). In other words, \(m_{\alpha,\beta}\) is the number of \((d-k)\)-dimensional generalized orthants \(\Omega\subset\alpha\) such that each point of \(\Omega\) has floral type \(\beta\). Apparently we have \(m_{\alpha^{\prime},\beta^{\prime}}=m_{\alpha,\beta}\) whenever \(\alpha^{\prime}\) is congruent to \(\alpha\) or \(\beta^{\prime}\) is congruent to \(\beta\). Thus, we use \(m_{\alpha,\beta}\) to denote this common value when \(\alpha\) and \(\beta\) are floral vertex congruence classes. The table in Figure 4.2 displays the polynomials \(h(e_{\alpha})\) for \(\dim\alpha\leq 3\). Inspecting Figure 4.1, for example, we may read off \(m_{\alpha,\beta}=5\) for the indicated pair there: that floral vertex \(\alpha\) has precisely 5 two-dimensional generalized orthants with floral type \(\beta=\bullet\bullet\). The polynomials \(h(e_{\alpha})\) when \(\dim\alpha=4\) appear in a table in the appendix.

Figure 4.1: Three-dimensional floral vertices.

The polynomials \(h(e_{\alpha})\) and the coefficients \(m_{\alpha,\beta}\) enjoy several properties which we need in order to state and prove Theorem 5.1. Like the results in [7], all of these properties are established using elementary means. **Proposition 4.3**.: _Suppose \(\alpha\) and \(\beta\) are floral vertices. If \(\dim\beta>\dim\alpha\), then no point of \(\alpha\) has floral type \(\beta\)._ A corollary to this is that \(m_{\alpha,\beta}=0\) when \(\dim\beta>\dim\alpha\), and \(m_{\alpha,\beta}=0\) for all but a finite number of \(\beta\) for each \(\alpha\). By a similar token, if \(\dim\alpha=\dim\beta\), then the values of \(m_{\alpha,\beta}\) are quickly determined: **Proposition 4.4**.: _Suppose \(\alpha\) and \(\beta\) are floral vertex congruence classes with \(\dim\alpha=\dim\beta\). Then_ \[m_{\alpha,\beta}=\left\{\begin{array}{ll}1&\mbox{if}\quad\beta=\alpha,\\ 0&\mbox{if}\quad\beta\neq\alpha.\end{array}\right.\] Recall from [7] that \(\overline{\alpha}\) denotes the floral vertex complementary to \(\alpha\). Since \(m_{\alpha,\bullet}=\mu_{d}(\alpha)\) coincides with the number of \((\dim\alpha)\)-dimensional orthants occupied by \(\alpha\), this yields: **Proposition 4.5**.: _If \(\alpha\) is a floral vertex congruence class with \(\alpha\neq\bullet\), then \(m_{\alpha,\bullet}+m_{\overline{\alpha},\bullet}=2^{\dim\alpha}\)._ Similarly, that \(\partial\overline{\alpha}=\partial\alpha=\alpha\cap\overline{\alpha}\) for all \(\alpha\) gives us: **Proposition 4.6**.: _Suppose \(\alpha\subset\mathbb{R}^{d}\) is a floral vertex and \(x\in\partial\alpha\).
Then \(x\) has floral type \(\beta\) in \(\alpha\) if and only if \(x\) has floral type \(\overline{\beta}\) in \(\overline{\alpha}\)._ As a corollary, we have \(m_{\overline{\alpha},\overline{\beta}}=m_{\alpha,\beta}\) whenever \(\beta\neq\bullet\). Recall from [7] that if \(\alpha\) and \(\beta\) are floral vertices, then \(\alpha\times\beta\) is a floral vertex given by the conjunction of the series-parallel diagrams underlying \(\alpha\) and \(\beta\). This in turn implies that \[h(e_{\alpha}\cdot e_{\beta})=h(e_{\alpha\wedge\beta})=h(e_{\alpha})\cdot h(e_{\beta})\] for all floral vertex congruence classes \(\alpha\) and \(\beta\). Since \(h(e_{\bullet})=e_{\bullet}\) we may say: **Proposition 4.7**.: _The map \(h:\mathcal{F}\to\mathcal{F}\) is an algebra automorphism._ We illustrate how the properties outlined above serve to compute small examples of the expressions \(h(e_{\alpha})\); the computations, carried out in the series-parallel diagram notation of [7], proceed by multiplicativity, as \(h\) is an algebra automorphism, together with the facts \(m_{\overline{\alpha},\overline{\beta}}=m_{\alpha,\beta}\) and \(m_{\alpha,\bullet}+m_{\overline{\alpha},\bullet}=2^{\dim\alpha}\). ## 5 The Main Result We use the expressions \(h^{-1}(e_{\alpha})\) to state our main result. For notational convenience, define \(D\in\operatorname{End}(\mathcal{F})\) by \[D(e_{\alpha}):=2^{\dim\alpha}e_{\alpha}\] for each \(\alpha\). With this, we assert: **Theorem 5.1**.: _Suppose \(P\) is an integral generic orthotope and \(k\in\{0,1,2,...,d\}\). Then_ \[\sum_{\alpha}L_{\alpha,k}(P)e_{\alpha}=2^{k-d}\sum_{\deg\alpha=k}C_{\alpha,\deg\alpha}(P)D(h^{-1}(e_{\alpha})). \tag{2}\] Before proving it, we demonstrate this formula using our running example.
Using the formula from Theorem 5.1 with the values tabulated in Figure 3.1 and \(k=0\), we have \[2^{k-d}\sum_{\deg\alpha=k}C_{\alpha,\deg\alpha}(P)D(h^{-1}(e_{\alpha}))=2^{0-2}\sum_{\deg\alpha=0}C_{\alpha,0}(P)D(h^{-1}(e_{\alpha}))=\frac{1}{4}\left[11\,D(h^{-1}(e_{\alpha_{1}}))+7\,D(h^{-1}(e_{\alpha_{2}}))\right],\] where \(\alpha_{1}\) and \(\alpha_{2}\) denote the two floral vertex congruence classes occurring among the vertices of the running example. Expanding \(h^{-1}(e_{\alpha_{1}})\) and \(h^{-1}(e_{\alpha_{2}})\) and collecting terms yields the first column of the table in Figure 2.2. Likewise, with \(k=1\), the sum reduces to \(2^{-1}C_{\bullet\bullet,1}(P)D(h^{-1}(e_{\bullet\bullet}))\), yielding the second column. The proof of Theorem 5.1 rests on the following lemma. **Lemma 5.2**.: _Suppose \(P\) is an integral generic orthotope, \(\beta\) is a floral vertex congruence class, and \(k\geq 0\). Then_ \[\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\beta}C_{\alpha,k}(P)=\binom{\deg\beta}{k}2^{-\dim\beta}C_{\beta,\deg\beta}(P).\] To illustrate this in the running example, take \(\beta=\bullet\bullet\) and \(k=0\); since each vertex class has precisely two one-dimensional orthants of floral type \(\bullet\bullet\), the left-hand side evaluates to \(2^{-1}(16)+2^{-2}(2)(11)+2^{-2}(2)(7)=17\). On the other hand, \[\binom{\deg\beta}{k}2^{-\dim\beta}C_{\beta,\deg\beta}(P)=\binom{\deg(\bullet\bullet)}{0}2^{-\dim(\bullet\bullet)}C_{\bullet\bullet,\deg(\bullet\bullet)}(P)=\binom{1}{0}2^{-1}C_{\bullet\bullet,1}(P)=1\cdot\frac{1}{2}\cdot 34=17.\] The formula in Lemma 5.2 is a generalization of a formula from [7] for the volume expressed with the orthant-counting function \(\mu_{d}\).
Thus, with \(\beta=\bullet\) and \(k=0\), we have \[\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\beta}C_{\alpha,k}(P)=\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\bullet}C_{\alpha,0}(P)=2^{-d}\sum_{\alpha}\mu_{d}(\alpha)n_{\alpha}(P),\] where \(2^{-\dim\alpha}m_{\alpha,\bullet}=2^{-d}\mu_{d}(\alpha)\) is the fraction of all \(d\)-dimensional orthants occupied by \(\alpha\subset\mathbb{R}^{d}\) and \(C_{\alpha,0}=n_{\alpha}(P)\) is the number of lattice points in \(P\) having floral type \(\alpha\). On the other hand, we have \[2^{-\dim\beta}\binom{\deg\beta}{k}C_{\beta,\deg\beta}(P)=2^{-0}\binom{d}{0}C_{\bullet,d}(P)=\operatorname{Vol}_{d}(P).\] **Proposition 5.3**.: _If \(P\) is an integral generic orthotope, then_ \[\sum_{k=0}^{d}(-1)^{k}\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)e_{\alpha}=2^{-d}\sum_{\deg\beta=0}C_{\beta,0}(P)h^{-1}(e_{\beta}). \tag{4}\] We remark that the stipulation \(\deg\beta=0\) in this proposition is equivalent to requiring that \(\beta\) be a floral vertex. In other words, the domain of summation in the right-hand side of this formula coincides with \(d\)-dimensional floral vertex congruence classes. Proof.: This is a straightforward computation using the results above. In detail, notice \[\sum_{k=0}^{d}(-1)^{k}\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)h(e_{\alpha})\] \[=\sum_{k=0}^{d}(-1)^{k}\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)\sum_{\beta}m_{\alpha,\beta}e_{\beta}\text{ (definition of $h$)}\] \[=\sum_{k=0}^{d}(-1)^{k}\sum_{\beta}\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\beta}C_{\alpha,k}(P)e_{\beta}\] \[=\sum_{k=0}^{d}(-1)^{k}\sum_{\beta}\binom{\deg\beta}{k}2^{-\dim\beta}C_{\beta,\deg\beta}(P)e_{\beta}\text{ (from Lemma 5.2)}\] \[=\sum_{\beta}2^{-\dim\beta}C_{\beta,\deg\beta}(P)\left[\sum_{k=0}^{\deg\beta}(-1)^{k}\binom{\deg\beta}{k}\right]e_{\beta}.\] The expression in square brackets vanishes when \(\deg\beta>0\) and equals \(1\) when \(\deg\beta=0\), so we obtain \[\sum_{k=0}^{d}(-1)^{k}\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)h(e_{\alpha})=\sum_{\deg\beta=0}2^{-\dim\beta}C_{\beta,\deg\beta}(P)e_{\beta}=2^{-d}\sum_{\deg\beta=0}C_{\beta,0}(P)e_{\beta}.\] Applying \(h^{-1}\), the result follows. **Corollary 5.4**.: _Suppose \(P\) is an integral generic orthotope.
Then_ \[\phi(P)=2^{-d}\sum_{\deg\alpha=0}C_{\alpha,0}(P)D(h^{-1}(e_{\alpha})).\] Proof.: Notice \[\phi(P)=\sum_{\alpha}L_{\alpha,0}(P)e_{\alpha}\text{ (definition of $\phi$)}\] \[=\sum_{\alpha}\sum_{k=0}^{d}(-1)^{k}C_{\alpha,k}(P)e_{\alpha}\text{ (from Corollary 3.2)}\] \[=D\left(\sum_{k=0}^{d}(-1)^{k}\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)e_{\alpha}\right)\text{ (definition of $D$)}\] \[=2^{-d}\sum_{\deg\alpha=0}C_{\alpha,0}(P)D(h^{-1}(e_{\alpha}))\text{ (from Proposition 5.3 and the linearity of $D$)},\] as desired.

## 6 The Euler characteristic

**Proposition 6.1**.: _If \(P\) is a \(d\)-dimensional integral generic orthotope, then \(\chi(P)=(-1)^{d}L_{\bullet,0}(P)\), where \(\chi\) denotes the Euler characteristic._

Proof.: For a \(d\)-dimensional integral generic orthotope, define \[f_{d}(P):=\sum_{k=0}^{d}(-1)^{k}C_{\bullet,k}(P).\] We will see that \((-1)^{d}f_{d}\) agrees with the Euler characteristic valuation. The proof follows by a routine induction argument on \(d\). Suppose first that \(d=1\). Then \(P\) is a disjoint union of closed intervals and \(-f_{d}(P)\) is the number of such intervals of \(P\), i.e. the Euler characteristic of \(P\). Next, suppose \(P\) is an integral generic orthotope of dimension \(d\) and \(\Pi\) is a \((d-1)\)-dimensional integral hyperplane such that the half-spaces on either side of \(\Pi\) intersect \(P\) in \(d\)-dimensional orthogonal polytopes, say \(P^{+}\) and \(P^{-}\). Thus, if \(H^{\pm}\) are the open half-spaces on either side of \(\Pi\), then \(P^{+}\) (resp. \(P^{-}\)) is the closure of the intersection \(P\cap H^{+}\) (resp. \(P\cap H^{-}\)). We observe that \[C_{\bullet,k}\left(P^{+}\cup P^{-}\right)=C_{\bullet,k}\left(P^{+}\right)+C_{\bullet,k}\left(P^{-}\right)+C_{\bullet,k}\left(P^{+}\cap P^{-}\right)\] for all \(k\), although we must interpret this equation properly. Thus, while \(C_{\bullet,k}(P)\) generally denotes the number of relatively open \(k\)-dimensional integral unit cubes of floral type \(\bullet\) in \(P\), we notice that each such cube necessarily lies in the relative interior of \(P\). In particular, we notice that the orthogonal polytopes \(P=P^{+}\cup P^{-}\), \(P^{+}\), and \(P^{-}\) are all \(d\)-dimensional generic orthotopes, whereas \(P^{+}\cap P^{-}\) is a generic orthotope of dimension \(d-1\). Thus, for example, if \(x\) lies in the relative interior of \(P^{+}\cap P^{-}\), then \(x\in\partial P^{+}\) and \(x\in\partial P^{-}\) and \(x\) cannot have floral type \(\bullet\) in \(P^{+}\) or in \(P^{-}\).
From this observation, we see that \[f_{d}(P^{+}\cup P^{-})=f_{d}(P^{+})+f_{d}(P^{-})+f_{d-1}(P^{+}\cap P^{-}).\] Thus, \[(-1)^{d}f_{d}(P^{+}\cup P^{-})=(-1)^{d}f_{d}(P^{+})+(-1)^{d}f_{d}(P^{-})-(-1)^{d-1}f_{d-1}(P^{+}\cap P^{-}),\] which is precisely the inclusion-exclusion relation satisfied by the Euler characteristic. Notice that \(C_{\bullet,k}(I^{d})\) is either \(0\) or \(1\) depending respectively on whether \(k<d\) or \(k=d\), where \(I^{d}\) is the standard unit cube. In particular, we have \(f_{d}(I^{d})=(-1)^{d}\) for all \(d\). Therefore \((-1)^{d}f_{d}=\chi\) for all \(d\). This in turn yields: **Corollary 6.2**.: _If \(P\) is a \(d\)-dimensional integral generic orthotope and \(\alpha\) is a floral arrangement congruence class, then \((-1)^{\deg\alpha}L_{\alpha,0}(P)\) is the sum of the Euler characteristics of the faces of \(P\) whose tangent cones lie in \(\alpha\)._ Thus, the Euler vector \(\phi(P)=\sum_{\alpha}L_{\alpha,0}(P)e_{\alpha}\) is a tabulation of these values ranging over all floral arrangement congruence classes. This is apparent in the running example. Thus, notice that each vertex has Euler characteristic \(1\), while the two vertex types of \(P\) account for precisely \(11\) and \(7\) vertices respectively. Similarly, \(P\) has \(-L_{\bullet\bullet,0}(P)=18\) edges, with each having Euler characteristic \(1\). Finally, \(L_{\bullet,0}(P)=1\), as \(P\) is homeomorphic to the disc \(I^{2}\). ### Ehrhart-Macdonald reciprocity The preceding result quickly leads to a derivation of Ehrhart-Macdonald reciprocity for integral generic orthotopes. Recall that \(tP\) contains precisely \(L_{\bullet}(tP)\) interior lattice points. If \(t>0\), then let \(L(-tP)\) denote the evaluation of the polynomial \(L(tP)\) at \(-t\). **Proposition 6.3**.: _If \(P\) is a \(d\)-dimensional integral generic orthotope, then_ \[L_{\bullet}(tP)=(-1)^{d}L(-tP).\] Proof.: The Euler characteristic of \(P\) is given by \[\chi(P)=\sum_{\alpha}\sum_{k}(-1)^{k}C_{\alpha,k}(P)=\sum_{\alpha}L_{\alpha,0}(P)=L_{\bullet,0}(P)+\sum_{\alpha\neq\bullet}L_{\alpha,0}(P),\] where \(\alpha\) denotes a floral arrangement congruence class. Using \(\chi(P)=(-1)^{d}L_{\bullet,0}(P)\) from above, this yields \[\sum_{\alpha\neq\bullet}L_{\alpha,0}(P)=\left[(-1)^{d}-1\right]L_{\bullet,0}(P).\] As in the proof of Theorem 5.1, we consider cross-sections of dimension \(d-k\) to obtain \[\sum_{\alpha\neq\bullet}L_{\alpha,k}(P)=\left[(-1)^{d-k}-1\right]L_{\bullet,k}(P)\] for all \(k\). The result then follows immediately. In the running example, we notice \(L(tP)=19t^{2}+17t+1\) while \(L_{\bullet}(tP)=19t^{2}-17t+1\), so that \(L_{\bullet}(tP)=(-1)^{2}L(-tP)\), as the proposition asserts. ### Properties of \(h^{-1}(e_{\alpha})\) For any \(\alpha\), let \(s_{\alpha,\beta}\in\mathbb{Z}\) denote the coefficients such that \[h^{-1}(e_{\alpha})=\sum_{\beta}s_{\alpha,\beta}e_{\beta},\] summing over floral vertex congruence classes \(\beta\). Recall from [7] that the bouquet sign function \(\sigma\) satisfies \(\sigma(\alpha)=(-1)^{\rho(\alpha)}\), where \(\rho(\alpha)\) is the number of loops in the series-parallel diagram defining \(\alpha\) (which coincides with the number of disjunctions used in its read-once expression). Using Theorem 5.1 and the fact that both of \((-1)^{d}L_{\bullet,0}(P)\) and the sum of the bouquet signs of the vertices of \(P\) are equal to the Euler characteristic of \(P\) yields: **Proposition 6.4**.: \(s_{\alpha,\bullet}=(-1)^{\dim\alpha}\sigma(\alpha)\) _for every floral vertex congruence class._ We summarize several other properties of the polynomials \(h^{-1}(e_{\alpha})\) and the coefficients \(s_{\alpha,\beta}\): * \(s_{\alpha,\alpha}=1\) for all \(\alpha\).
* \(s_{\alpha,\beta}=0\) if \(\dim\alpha=\dim\beta\) and \(\alpha\neq\beta\). * \(s_{\alpha,\beta}=0\) whenever \(\dim\beta>\dim\alpha\). * \(s_{\alpha,\beta}=s_{\overline{\alpha},\overline{\beta}}\) whenever \(\beta\neq\bullet\). * \(s_{\alpha,\beta}\) is the number of cross-sections of type \(\beta\) in \(\alpha\) when \(\dim\beta=\dim\alpha-1\). * \(s_{\alpha,\bullet\bullet}=-(-1)^{\dim\alpha}\sum_{\beta}\sigma(\beta)\), summing over all facets \(\beta\) of \(\alpha\). * \(\sum_{\beta}2^{\dim\beta}s_{\alpha,\beta}=\sigma(\alpha)\), summing over all floral vertex congruence classes \(\beta\). One may establish all of these in a manner similar to our study of the polynomials \(h(e_{\alpha})\). Notice that the last of these is a reflection of Ehrhart-Macdonald reciprocity. All of these properties are apparent in the table in Figure 4.4 and in the table in the appendix. ### Special Cases Let \(P\subset\mathbb{R}^{d}\) be a \(d\)-dimensional integral generic orthotope. We have already seen that \((-1)^{d}L_{\bullet,0}(P)\) coincides with the Euler characteristic of \(P\). The purpose of this section is to detail some other special formulas involving some familiar valuations of \(P\), if only to better understand the notation. In particular, we notice: * \(\sum_{\deg\alpha=0}L_{\alpha,0}(P)=\#(\text{vertices of }P)\). * \(\sum_{\deg\alpha=1}L_{\alpha,0}(P)=-\#(\text{edges of }P)\). * \(L_{\bullet\bullet,d-1}(P)=-2L_{\bullet,d-1}(P)=\text{Volume}_{d-1}(\partial P)\). * \(L_{\bullet,d}(P)=C_{\bullet,d}(P)=\text{Volume}_{d}(P)\). These are all trivial given an understanding of the notation \(L_{\alpha,k}(P)\). Notice the third of these reflects the well-known fact that the degree-\((d-1)\) coefficient of the Ehrhart polynomial of a \(d\)-dimensional polytope \(P\) equals half of the \((d-1)\)-dimensional measure of \(\partial P\); correspondingly, the degree-\((d-1)\) coefficient of the interior enumerator \(L_{\bullet}(tP)\) is the negative of this quantity. ## 7 A multivariable generalization The enumeration of lattice points described above can be adapted to the case where coordinates are dilated independently. Thus, if \(\mathbf{t}=(t_{1},t_{2},...,t_{d})\) is a tuple of positive integers, then define the _generalized Ehrhart function_ of \(P\) by \[L(\mathbf{t}P):=\#(\mathbf{t}P\cap\mathbb{Z}^{d}),\] where \(\mathbf{t}P\) denotes the image of \(P\) under the dilation map \[(x_{1},x_{2},...,x_{d})\mapsto(t_{1}x_{1},t_{2}x_{2},...,t_{d}x_{d}).\] All of the results in this section follow through an analysis identical to that used above. As above, we write \[L(\mathbf{t}P):=\sum_{\alpha}L_{\alpha}(\mathbf{t}P),\] where \(L_{\alpha}(\mathbf{t}P)\) denotes the number of lattice points in \(\mathbf{t}P\) at which the tangent cone lies in \(\alpha\). **Proposition 7.1**.: _Suppose \(P\) is an integral generic orthotope and \(\alpha\) is a floral arrangement. Then \(\mathbf{t}\mapsto L_{\alpha}(\mathbf{t}P)\) is a polynomial function of \(\mathbf{t}=(t_{1},t_{2},...,t_{d})\) of degree \(\deg\alpha\) in which every term is a square-free monomial in \(\{t_{1},t_{2},...,t_{d}\}\)._ For \(I\subset[d]\), let \(\mathbf{t}_{I}=\prod_{i\in I}t_{i}\) be the square-free monomial corresponding to \(I\). Then we write \[L_{\alpha}(\mathbf{t}P):=\sum_{|I|\leq\deg\alpha}L_{\alpha,I}(P)\,\mathbf{t}_{I}.\] We obtain explicit expressions for \(L_{\alpha,I}(P)\) as a corollary of Theorem 5.1. Thus, given a floral arrangement \(\alpha\) and \(I\subset[d]\), let \(C_{\alpha,I}(P)\) denote the number of relatively open integral unit cubes of dimension \(|I|\) in \(P\) whose tangent cones are congruent to \(\alpha\) and which are parallel to the coordinate subspace \(\Pi_{I}=\operatorname{span}\{e_{i}:i\in I\}\). Then the formula of Theorem 5.1 gives us: \[\sum_{\alpha}L_{\alpha,I}(P)e_{\alpha}=2^{|I|-d}\sum_{\deg\alpha=|I|}C_{\alpha,I}(P)D(h^{-1}(e_{\alpha})). \tag{5}\] We illustrate this for the running example; see also the numerical sketch below. The cube counts \(C_{\alpha,I}(P)\) appear in the table in Figure 7.1 and the coefficients \(L_{\alpha,I}(P)\) appear in the table in Figure 7.2.

Figure 7.1: Multivariable cube counts for the example.

Figure 7.2: Multivariable coefficients \(L_{\alpha,I}(P)\) for the example.
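As a quick numerical sanity check (our own addition, in the same spirit as the earlier sketch), one can verify the multivariable count for the running example directly. The multivariable analogue of Proposition 3.1 says that every relatively open unit cube of dimension type \(I\) in \(P\) contributes \(\prod_{i\in I}(t_{i}-1)\) lattice points to \(\mathbf{t}P\), so \(L(\mathbf{t}P)\) is determined by four numbers: the counts of open squares, horizontal edges, vertical edges, and lattice points of \(P\).

```python
from itertools import product

# Corners of the unit squares generating the running example P of Section 2.
S = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 1), (2, 2), (2, 3),
     (3, 1), (3, 2), (3, 3), (4, 0), (4, 1), (4, 2), (5, 2),
     (6, 1), (6, 2), (6, 3), (6, 4)}

def L(t1, t2):
    """Direct lattice-point count of tP; the cell at (x, y) dilates to
    [t1*x, t1*x + t1] x [t2*y, t2*y + t2]."""
    pts = set()
    for (x, y) in S:
        pts.update(product(range(t1 * x, t1 * x + t1 + 1),
                           range(t2 * y, t2 * y + t2 + 1)))
    return len(pts)

# Relatively open unit cubes of P, grouped by dimension type.
squares = len(S)
points = {(x + a, y + b) for (x, y) in S for a in (0, 1) for b in (0, 1)}
h_edges = {(x, y + b) for (x, y) in S for b in (0, 1)}  # edge (i,j)-(i+1,j)
v_edges = {(x + a, y) for (x, y) in S for a in (0, 1)}  # edge (i,j)-(i,j+1)

def L_from_cubes(t1, t2):
    """Multivariable analogue of Proposition 3.1."""
    return (squares * (t1 - 1) * (t2 - 1) + len(h_edges) * (t1 - 1)
            + len(v_edges) * (t2 - 1) + len(points))

for t in [(1, 1), (2, 3), (4, 2), (5, 5)]:
    assert L(*t) == L_from_cubes(*t)
print(len(points), len(h_edges), len(v_edges))  # 37 26 29
```

These counts refine the single-variable data: \(26+29=55\) unit edges in total, consistent with \(C_{\bullet,1}(P)+C_{\bullet\bullet,1}(P)=21+34\) read off from Figure 3.1.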
## 8 Example in 3 dimensions Define a 3-dimensional integral orthogonal polytope as \(P:=\bigcup_{v\in S}\big{(}v+[0,1]^{3}\big{)}\), where \(S\subset\mathbb{Z}^{3}\) appears in Figure 8.1. Evidently \(P\) lies in the primary octant in \(\mathbb{R}^{3}\). One should imagine \(P\) as an assembly of several stacks of unit cubes resting "skyscraper style" on a flat surface representing the \((x,y)\)-plane in \(\mathbb{R}^{3}\). A sketch of \(P\) appears in Figure 8.2. The shading in the figure is intended to illustrate the annular facet where \(P\) makes contact with the \((x,y)\)-plane. The numbers appearing in the sketch tell the height of each stack. Notice that \(P\) is a solid 3-dimensional torus.

Figure 8.1: Generating corners of the example.

Figure 8.2: A 3-dimensional integral generic orthotope.

Figure 8.3 displays the values of \(C_{\alpha,k}(P)\) and \(L_{\alpha,k}(P)\). Using the formula from Theorem 5.1 with the tabulated values, for example, one may compute the Euler vector: \[\phi(P)=2^{-3}\left[15\,D(h^{-1}(e_{\alpha_{1}}))+11\,D(h^{-1}(e_{\alpha_{2}}))+5\,D(h^{-1}(e_{\alpha_{3}}))+D(h^{-1}(e_{\alpha_{4}}))\right]=0\,e_{\bullet}+16\,e_{\beta_{1}}-36\,e_{\beta_{2}}-12\,e_{\beta_{3}}+15\,e_{\alpha_{1}}+11\,e_{\alpha_{2}}+5\,e_{\alpha_{3}}+e_{\alpha_{4}},\] where \(\alpha_{1},\ldots,\alpha_{4}\) denote the four floral vertex congruence classes occurring in \(P\) (with vertex counts \(15\), \(11\), \(5\), and \(1\) respectively) and \(\beta_{1},\beta_{2},\beta_{3}\) denote the lower-dimensional floral arrangement classes listed in Figure 8.3. In particular, notice that \(L_{\bullet,0}(P)=(-1)^{3}\chi(P)=0\), as \(P\) is a solid 3-dimensional torus. We may also use the formula (5) to compute the multivariable Ehrhart polynomials. These appear in Figure 8.4.
Reading the table, for example, one sees that \[L_{\beta_{1}}(\mathbf{t}P)=16-(32t_{1}+24t_{2}-28t_{3})+(28t_{1}t_{2}+32t_{1}t_{3}+20t_{2}t_{3})\] yields the number of lattice points in \(\mathbf{t}P\) whose tangent cones have the corresponding floral type. ## 9 Conclusion We have described a theory of Ehrhart polynomials adapted to integral generic orthotopes. We anticipate further development concerning combinatorics of generic orthotopes. This note was concerned solely with generic orthotopes which are integral, and one can imagine extending these results to generic orthotopes which are rational. Even further, we anticipate a version of the Euler-Maclaurin summation formulae for generic orthotopes in a vein similar to that of [3]. This author is particularly curious about the space of generic orthotopes with a given count data set \(\{C_{\alpha,\deg\alpha}\}\). As we have seen, the number of lattice points in \(P\) depends only on these values. However, even in dimension \(d=2\), it is easy to find non-congruent generic orthotopes which have the same data \(\{C_{\alpha,\deg\alpha}\}\) (and therefore the same number of lattice points). Can we say anything definitive about the space of integral generic orthotopes having the same combinatorial data? We introduced the algebra \(\mathcal{F}\) and the "local polynomials" \(h(e_{\alpha}),h^{-1}(e_{\alpha})\in\mathcal{F}\) in order to express our formula in Theorem 5.1. One notices that these objects can be defined entirely within the theory of Boolean functions. That is, they are intrinsic to the study of read-once Boolean functions. The question of how these polynomials and especially the coefficients \(m_{\alpha,\beta},s_{\alpha,\beta}\in\mathbb{Z}\) may arise in studies of complexity measures of read-once Boolean functions therefore arises naturally, cf. [5].
2309.11682
Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework
While training fair machine learning models has been studied extensively in recent years, most developed methods rely on the assumption that the training and test data have similar distributions. In the presence of distribution shifts, fair models may behave unfairly on test data. There have been some developments for fair learning robust to distribution shifts to address this shortcoming. However, most proposed solutions are based on the assumption of having access to the causal graph describing the interaction of different features. Moreover, existing algorithms require full access to data and cannot be used when small batches are used (stochastic/batch implementation). This paper proposes the first stochastic distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph. More specifically, we formulate the fair inference in the presence of the distribution shift as a distributionally robust optimization problem under $L_p$ norm uncertainty sets with respect to the Exponential Renyi Mutual Information (ERMI) as the measure of fairness violation. We then discuss how the proposed method can be implemented in a stochastic fashion. We have evaluated the presented framework's performance and efficiency through extensive experiments on real datasets consisting of distribution shifts.
Sina Baharlouei, Meisam Razaviyayn
2023-09-20T23:25:28Z
http://arxiv.org/abs/2309.11682v1
# Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework ###### Abstract While training fair machine learning models has been studied extensively in recent years, most developed methods rely on the assumption that the training and test data have similar distributions. In the presence of distribution shifts, fair models may behave unfairly on test data. There have been some developments for fair learning robust to distribution shifts to address this shortcoming. However, most proposed solutions are based on the assumption of having access to the causal graph describing the interaction of different features. Moreover, existing algorithms require full access to data and cannot be used when small batches are used (stochastic/batch implementation). This paper proposes the first stochastic distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph. More specifically, we formulate the fair inference in the presence of the distribution shift as a distributionally robust optimization problem under \(L_{p}\) norm uncertainty sets with respect to the Exponential Renyi Mutual Information (ERMI) as the measure of fairness violation. We then discuss how the proposed method can be implemented in a stochastic fashion. We have evaluated the presented framework's performance and efficiency through extensive experiments on real datasets consisting of distribution shifts. ## 1 Introduction Machine Learning models demonstrate remarkable results in automated decision-making tasks such as image processing (Krizhevsky et al., 2017), object detection (Tan et al., 2020), natural language understanding (Devlin et al., 2018; Saravani et al., 2021), speech recognition (Abdel-Hamid et al., 2014), and automated code generation (Vaithalingam et al., 2022; Nazari et al., 2023). However, naively optimizing the accuracy/performance may lead to biased models against protected groups such as racial minorities and women (Angwin et al., 2016; Buolamwini and Gebru, 2018). A wide range of algorithms has been proposed to enhance the _fairness_ of machine learning models. These algorithms can be divided into three main categories: pre-processing, post-processing, and in-processing. Pre-processing algorithms remove the dependence/correlation of the training data with the sensitive attributes by transforming the data to a new "fair representation" in a pre-processing stage (Zemel et al., 2013; Creager et al., 2019). Post-processing algorithms adjust the decision boundary of the learned model after the training stage to satisfy the fairness constraints of the problem (Hardt et al., 2016; Lohia et al., 2019; Kim et al., 2019). Finally, in-processing approaches optimize the parameters of the model to maximize accuracy while satisfying the fairness constraints during the training procedure; see, e.g., (Zafar et al., 2017; Donini et al., 2018; Mary et al., 2019; Baharlouei et al., 2019). The underlying assumption of most fair learning algorithms is that the train and test domains have an identical distribution. Thus, establishing fairness in the training phase can guarantee fairness in the test phase. However, this assumption does not hold in many real applications. As an example, consider the new collection of datasets (ACS PUMS) released by (Ding et al., 2021) with the underlying task of predicting the income level of individuals (similar to the adult dataset (Dua and Graff, 2017)). 
The data points are divided based on different years and states. When several fair algorithms are applied to the ACS PUMS dataset to learn fair logistic regression and XGBoost classifiers on the people of one US state, the performance of the learned models drops severely in terms of both accuracy and fairness violation when they are evaluated on other states (Ding et al., 2021). We also made a similar observation in our numerical experiments (see Figure 2). As another example, Schrouff et al. (2022) reviews several examples from the fields of healthcare and medicine where a "fair" model trained in one hospital does not behave fairly in other hospitals. Such examples demonstrate the importance of developing efficient algorithms for training fair models against distribution shifts. Another crucial requirement in modern machine learning algorithms is the amenability to stochastic optimization. In large-scale learning tasks, only a mini-batch of data is used at each step of the algorithm. Thus, an algorithm implemented on mini-batches of data must still improve the loss function over its iterates and remain convergent. While this requirement is met when training with vanilla ERM loss functions (Lee et al., 2019; Kale et al., 2022), stochastic algorithms (such as stochastic gradient descent) may not be convergent in the presence of fairness regularizers (Lowy et al., 2022). In this work, we propose a distributionally robust optimization framework to maintain fairness across different domains in the presence of distribution shifts. Our framework **does not rely on having access to the knowledge of the causal graph of features**. Moreover, our framework is **amenable to stochastic optimization** and comes with convergence guarantees. ### Related Work Machine learning models can suffer from a severe performance drop when the distribution of the test data differs from that of the training data (Ben-David et al., 2006; Moreno-Torres et al., 2012; Recht et al., 2018, 2019). Learning based on spurious features (Buolamwini and Gebru, 2018; Zhou et al., 2021), class imbalance (Li et al., 2019; Jing et al., 2021), non-ignorable missing values (Xu et al., 2009), and overfitting (Tzeng et al., 2014) are several factors contributing to such poor performance on test data. To mitigate distribution shift, Ben-David et al. (2010) formalizes the problem as a domain adaptation task where the unlabeled test data is available during the training procedure. In this case, the key idea is to regularize the empirical risk minimization over the given samples via a divergence function measuring the distributional distance between the source and target domains. When data from the target domain is not available, the most common strategy for handling distribution shift is distributionally robust optimization (Rahimian and Mehrotra, 2019; Kuhn et al., 2019; Lin et al., 2022). This approach relies on minimizing the risk for the worst-case distribution within an uncertainty set around the training data distribution. Such uncertainty sets can be defined as \(\ell_{1}\) (Ben-David et al., 2010), \(\ell_{2}\) (Gao and Kleywegt, 2017), \(\ell_{\infty}\) (Baharlouei et al., 2023), optimal transport (Blanchet et al., 2019), Wasserstein distance (Kuhn et al., 2019), Sinkhorn divergences (Oneto et al., 2020), \(\chi^{2}\)-divergence (Levy et al., 2020), or Conditional Value at Risk (CVaR) (Zymler et al., 2013; Levy et al., 2020) balls around the empirical distribution of the source domain.
A diverse set of methods and analyses has been introduced in the context of learning fair models in the presence of distribution shift. Lechner et al. (2021) shows how learning a _fair representation_ using a fair pre-processing algorithm is almost _impossible_ even under the demographic parity notion when the distribution of the test data shifts. Dai and Brown (2020) empirically demonstrates how bias in labels and changes in the distribution of labels in the target domain can lead to catastrophic performance and severely unfair behavior in the test phase. Ding et al. (2021) demonstrates through extensive experiments that applying post-processing fairness techniques to learn fair predictors of income with respect to race, gender, and age fails to transfer fairness from one US state (training domain) to another. Several in-processing methods are proposed in the literature to mitigate specific types of distribution shifts. Singh et al. (2019) finds a subset of features with the minimum risk on the target domain, with fairness guarantees in the test phase, relying on the causal graph and conditional separability of context variables and labels. Du and Wu (2021) proposes a reweighting mechanism (Fang et al., 2020) for learning robust fair predictors when the task involves sample selection bias with respect to different protected groups. Rezaei et al. (2021) generalizes the fair log loss classifier of Rezaei et al. (2020) to a distributionally robust log-likelihood estimator under the covariate shift. An et al. (2022) finds sufficient conditions for transferring fairness from the source to the target domain in the presence of sub-population and domain shifts and handles these two specific types of shifts using a proposed consistency regularizer. The above methods rely on the availability of unlabeled test samples in the training phase (domain adaptation setting), explicit assumptions on having access to the causal graph describing the causal interaction of features, and/or knowing the specific type of the distribution shift a priori. As an alternative approach, Taskesen et al. (2020) learns a distributionally robust fair classifier over a Wasserstein ball around the empirical distribution of the training data as the uncertainty set. Unfortunately, their formulation is challenging to solve efficiently via scalable first-order methods, even for linear and convex classifiers such as logistic regression, and there are no efficient algorithms converging to the optimal solutions of the distributionally robust problem. The proposed algorithm in that paper is a greedy approach whose time complexity grows with the number of samples. In another effort, Wang et al. (2022) jointly optimizes the accuracy and the maximum mean discrepancy (MMD) of the curvature distributions of the two sub-populations (minorities and majorities) to impose robust fairness. This formulation is based on the idea that flatter local optima with less sharpness have nicer properties in terms of robustness to distribution shift. However, no convergence guarantee for the proposed optimization problem is provided. Compared to Wang et al. (2022), our work is amenable to stochastic optimization. In addition, we define the distributionally robust problem **directly** over the fairness violation measure, while Wang et al. (2022) relies on matching the curvature distributions of different sub-populations as a heuristic proxy for measuring the robustness of fairness.
### Our Contribution We propose stochastic and deterministic distributionally robust optimization algorithms under \(L_{p}\)-norm balls as the uncertainty sets for maintaining various _group fairness criteria_ across different domains. We establish the convergence of the algorithms and show that the group fairness criteria can be maintained in the target domain with the proper choice of the uncertainty set size and fairness regularizer coefficient, without relying on _the knowledge of the causal graph of the feature interactions_ or explicit knowledge of the distribution shift type (demographic, covariate, or label shift). The proposed stochastic algorithm is the _first provably convergent algorithm_ with any arbitrary batch size for distributionally robust fair empirical risk minimization. Stochastic (mini-batch) updates are crucial to large-scale learning tasks consisting of a huge number of data points. ## 2 Preliminaries and Problem Description Consider a supervised learning task of predicting a label/target random variable \(y\in\mathcal{Y}\triangleq\{1,2,\ldots,m\}\) based on the input feature \(\mathbf{x}\). Assume our model (e.g. a neural network or logistic regression model) outputs the variable \(\hat{y}_{\mathbf{\theta}}(\mathbf{x})\) where \(\mathbf{\theta}\) is the parameter of the model (e.g. the weights of the neural network). For simplicity, sometimes we use the notation \(\hat{y}\) instead of \(\hat{y}_{\mathbf{\theta}}(\mathbf{x})\). Let \(s\in\mathcal{S}=\{1,2,\ldots,k\}\) denote the random variable modeling the sensitive attribute (e.g. gender or ethnicity) of data points. In the fair supervised learning task, we aim to reach two (potentially) competing goals: Goal 1) Maximize prediction accuracy \(\mathbb{P}^{*}(\hat{y}_{\mathbf{\theta}}(\mathbf{x})=y)\); Goal 2) Maximize fairness. Here, \(\mathbb{P}^{*}(\cdot)\) is the ground-truth distribution during the deployment/test phase, which is typically unknown during training. The first goal is usually achieved by minimizing a certain loss function. To achieve the second goal, one needs to mathematically define "fairness", which is described next. ### Notions of Group Fairness Different notions of group fairness have been introduced (Zafar et al., 2017; Narayanan, 2018). Among them, demographic parity (Act, 1964; Dwork et al., 2012), equalized odds (Hardt et al., 2016), and equality of opportunity (Hardt et al., 2016) have gained popularity. A classifier satisfies _demographic parity_ (Dwork et al., 2012) if the output of the classifier is independent of the sensitive attribute, i.e., \[\mathbb{P}^{*}(\hat{y},s)=\mathbb{P}^{*}(\hat{y})\,\mathbb{P}^{*}(s). \tag{1}\] On the other hand, a classifier is fair with respect to the _equalized odds_ notion (Hardt et al., 2016) if \[\mathbb{P}^{*}(\hat{y},s|y=a)=\mathbb{P}^{*}(\hat{y}|y=a)\,\mathbb{P}^{*}(s|y=a)\quad\forall a\in\mathcal{Y}. \tag{2}\] Further, in the case of binary classification, _equality of opportunity_ is defined as (Hardt et al., 2016): \[\mathbb{P}^{*}(\hat{y},s|y=1)=\mathbb{P}^{*}(\hat{y}|y=1)\,\mathbb{P}^{*}(s|y=1), \tag{3}\] where \(y=1\) is the advantaged group (i.e., the desired outcome from each individual's viewpoint).
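To make the demographic parity notion (1) concrete, here is a small illustration of ours (not taken from the papers cited above): for a binary classifier, demographic parity can be assessed empirically by comparing the rate of positive predictions across sensitive groups, with the largest groupwise gap serving as a scalar violation summary.

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Largest gap between groupwise rates of positive predictions.
    A value of 0 corresponds to exact demographic parity on the sample."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    rates = [y_pred[s == g].mean() for g in np.unique(s)]
    return max(rates) - min(rates)

# Toy data with two sensitive groups and a predictor biased toward group 1.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=10000)
y_pred = rng.binomial(1, np.where(s == 1, 0.7, 0.5))
print(demographic_parity_gap(y_pred, s))  # roughly 0.2
```

Analogous empirical checks for equalized odds (2) and equality of opportunity (3) repeat this computation after conditioning on the true label \(y\).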
### Fairness Through Regularized ERM In the fairness notions defined above, \(\hat{y}=\hat{y}_{\mathbf{\theta}}(\mathbf{x})\) depends on the parameter of the model \(\mathbf{\theta}\). Therefore, the above fairness notions depend on the parameters of the (classification) model \(\mathbf{\theta}\). Moreover, they are all in the form of (conditional) statistical independence between the random variables \(\hat{y},y,\) and \(s\). Thus, a popular framework to reach the two (potentially competing) goals of maximizing accuracy and fairness is through the regularized empirical risk minimization framework (Zafar et al., 2017; Baharlouei et al., 2019; Mary et al., 2019; Grari et al., 2019; Lowy et al., 2022). In this framework, we add a fairness-imposing regularizer to the regular empirical risk minimization loss. More specifically, the model is trained by solving the optimization problem \[\min_{\mathbf{\theta}}\quad\mathbb{E}_{\mathbb{P}}\left[\ell(\hat{y}_{\mathbf{\theta}}(\mathbf{x}),y)\right]+\lambda\rho\Big{(}\hat{y},y,s;\mathbb{P}\Big{)}. \tag{4}\] Here, the first term in the objective function aims to improve the model's accuracy with \(\ell(\cdot,\cdot)\) being the loss function, such as the cross-entropy loss. On the other hand, \(\rho(\hat{y},s,y,\mathbb{P})\) is a group _fairness violation measure_, which quantifies the model's fairness violation. We will discuss examples of such measures in section 2.3. \(\lambda\) is a positive regularization coefficient to control the tradeoff between fairness and accuracy. In the above formulation, \(\mathbb{P}\) denotes the data distribution. Ideally, we would like to define the expectation term \(\mathbb{E}_{\mathbb{P}}[\ell(\hat{y}_{\mathbf{\theta}}(\mathbf{x}),y)]\) and the fairness violation term \(\rho\Big{(}\hat{y},y,s;\mathbb{P}\Big{)}\) in (4) with respect to the test distribution \(\mathbb{P}^{*}\), i.e., \(\mathbb{P}=\mathbb{P}^{*}\). However, since this distribution is unknown, existing works typically use the training data instead. Next, we describe popular approaches in the literature for defining the fairness violation measure \(\rho\Big{(}\hat{y},y,s;\mathbb{P}\Big{)}\). ### Measuring Fairness Violation The fairness criteria defined in (1), (2), and (3) can all be viewed as a statistical (conditional) independence condition between the random variables \(\hat{y},y,\) and \(s\). For example, the equalized odds notion (2) means that \(\hat{y}\) is independent of \(s\) conditioned on the random variable \(y\). Therefore, one can quantify the "violation" of these conditions through well-known statistical dependence measures. Such a quantification can be used as a regularizer in (4). In what follows, we briefly describe some of these measures. **We only describe these measures and our methodology for the demographic parity notion.** The equalized odds and the equality of opportunity notions can be tackled in a similar fashion. To quantify/measure fairness violation based on the demographic parity notion, one needs to measure the statistical dependence between the sensitive attribute \(s\) and the output feature \(\hat{y}\). To this end, Zafar et al. (2017) utilizes the empirical covariance of the two variables, i.e., \(\rho_{c}(\hat{y},s,y;\mathbb{P})\triangleq\mathbb{E}[\hat{y}s]\). However, zero covariance between two random variables does not necessarily imply their statistical independence. To address this issue, several in-processing approaches utilize nonlinear statistical dependence measures such as mutual information (Roh et al., 2020), maximum mean discrepancy (MMD) (Oneto et al., 2020; Prost et al., 2019), Renyi correlation (Baharlouei et al., 2019; Grari et al., 2019), and Exponential Renyi Mutual Information (ERMI) (Mary et al., 2019; Lowy et al., 2022).
Let us briefly review some of these notions. Given the joint distribution of sensitive attributes and predictions, the mutual information between the sensitive features and the predictions is defined as: \(\rho_{\text{MI}}(\hat{y},s,y,\mathbb{P})=\sum_{s,\hat{y}}\mathbb{P}(s,\hat{y})\log\Big{(}\frac{\mathbb{P}(s,\hat{y})}{\mathbb{P}(s)\mathbb{P}(\hat{y})}\Big{)}\). Some other notions are based on the Hirschfeld-Gebelein-Renyi (HGR) correlation and the Exponential Renyi Mutual Information (ERMI), which are respectively defined as (Hirschfeld, 1935; Gebelein, 1941; Renyi, 1959; Lowy et al., 2022): \[\rho_{\text{HGR}}(\hat{y},s,y,\mathbb{P})=\sigma_{2}(Q)\quad\text{and}\quad\rho_{\text{ERMI}}(\hat{y},s,y,\mathbb{P})=\mathrm{Tr}(Q^{T}Q), \tag{5}\] where \(\sigma_{2}(Q)\) denotes the second largest singular value of the matrix \(Q\) defined as \[Q_{ij}=\Big{[}\frac{\mathbb{P}_{\hat{y},s}(\hat{y}=i,s=j)}{\sqrt{\mathbb{P}_{\hat{y}}(\hat{y}=i)}\sqrt{\mathbb{P}_{s}(s=j)}}\Big{]}. \tag{6}\] #### 2.3.1 ERMI as Fairness Violation Measure In this paper, we utilize \(\rho_{\text{ERMI}}(\hat{y},s,y,\mathbb{P})\) as the measure of independence between the target variable and sensitive attribute(s) due to the following reasons. First, as shown in (Lowy et al., 2022), ERMI upper-bounds many well-known measures of independence such as HGR, \(\ell_{1}\), \(\ell_{\infty}\) and Shannon Divergence. Thus, by minimizing ERMI, we also guarantee that other well-known fairness violation measures are minimized. Second, unlike the mutual information, Renyi correlation (Baharlouei et al., 2019; Mary et al., 2019), and MMD (Prost et al., 2019) measures, there exists a **convergent** stochastic algorithm finding the stationary solutions of (4) when ERMI is the regularizer, which makes this measure suitable for large-scale datasets containing a large number of samples (Lowy et al., 2022).
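To make these definitions concrete, the following sketch (our own illustration, not code from (Lowy et al., 2022)) forms the plug-in estimate of the matrix \(Q\) in (6) from samples of \((\hat{y},s)\) and evaluates the two measures in (5).

```python
import numpy as np

def q_matrix(y_hat, s):
    """Plug-in estimate of Q in (6): Q_ij = P(y_hat=i, s=j) / sqrt(P(y_hat=i) P(s=j))."""
    y_vals, s_vals = np.unique(y_hat), np.unique(s)
    joint = np.array([[np.mean((y_hat == i) & (s == j)) for j in s_vals]
                      for i in y_vals])
    p_y, p_s = joint.sum(axis=1), joint.sum(axis=0)
    return joint / np.sqrt(np.outer(p_y, p_s))

def ermi(y_hat, s):
    """Exponential Renyi Mutual Information, Tr(Q^T Q), as defined in (5)."""
    Q = q_matrix(y_hat, s)
    return np.trace(Q.T @ Q)

def hgr(y_hat, s):
    """HGR measure: the second-largest singular value of Q, as defined in (5)."""
    return np.linalg.svd(q_matrix(y_hat, s), compute_uv=False)[1]

# Under exact independence, Q has rank one, so sigma_2(Q) = 0 and Tr(Q^T Q) = 1.
rng = np.random.default_rng(1)
y_hat, s = rng.integers(0, 2, 100000), rng.integers(0, 2, 100000)
print(ermi(y_hat, s), hgr(y_hat, s))  # approximately 1 and 0, respectively
```

Note that with this paper's convention \(\rho_{\text{ERMI}}=\mathrm{Tr}(Q^{T}Q)\), the value under exact independence is \(1\) rather than \(0\), since \(Q\) always has \(\sigma_{1}(Q)=1\).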
The min-max problem in (7) is non-convex non-concave in general. Therefore, even finding a locally optimal solution to this problem is not guaranteed using efficient first-order methods (Razaviyayn et al., 2020; Daskalakis et al., 2021; Jing et al., 2021; Khalafi and Boob, 2023). To simplify this problem, we upper-bound (7) by the following problem, where the uncertainty sets for the accuracy and fairness terms are decoupled: \[\min_{\mathbf{\theta}}\ \max_{\mathbb{P}\in\mathcal{U}_{1}(\mathbb{P}_{\text{tr}})} \mathbb{E}_{(\mathbf{x},y)\sim\mathbb{P}}[\ell(\hat{y}_{\mathbf{\theta}}(\mathbf{ x}),y)]+\max_{\mathbb{P}\in\mathcal{U}_{2}(\mathbb{P}_{\text{tr}})}\lambda\rho\Big{(} \hat{y}_{\mathbf{\theta}}(\mathbf{x}),y,s,\mathbb{P}\Big{)}. \tag{8}\] The first maximum term of the objective function is the distributionally robust optimization for empirical risk minimization, which has been extensively studied in the literature, and the ideas of many existing algorithms can be utilized (Kuhn et al., 2019; Sagawa et al., 2019; Levy et al., 2020; Zymler et al., 2013). However, the second term has not been studied before in the literature. Thus, from now on, and to simplify the presentation of the results, we focus on how to deal with the second term. In other words, for now, we take \(\mathcal{U}_{1}(\mathbb{P}_{\mathsf{tr}})=\{\mathbb{P}_{\mathsf{tr}}\}\) to be a singleton and we only robustify fairness. Later, we will utilize existing algorithms for CVaR and Group DRO optimization to modify our framework for solving the general form of (8) and robustify both accuracy and fairness. ### The Uncertainty Set Different ways of defining/designing the uncertainty set in the DRO framework are discussed in section 1.1. The uncertainty set can be defined based on the knowledge of the potential type of distribution shifts (which might not be available during training). Moreover, the uncertainty set should be defined in a way that the resulting DRO problem is efficiently solvable. As mentioned in section 2.3.1, we utilize the correlation measure \(\rho_{\text{ERMI}}(\cdot)\), which depends on the probability distribution \(\mathbb{P}\) through the matrix \(Q\); see equations (5) and (6). Consequently, we define the uncertainty set directly on the matrix \(Q\). Thus, our robust fair learning problem can be expressed as \[\min_{\mathbf{\theta}}\ \ \mathbb{E}_{\mathbb{P}_{\mathsf{tr}}}[\ell(\hat{y}_{ \mathbf{\theta}}(\mathbf{x}),y)]+\lambda\max_{Q_{\mathbf{\theta}}\in\mathcal{B}(Q^{ \mathsf{tr}}_{\mathbf{\theta}},\epsilon)}\operatorname{Tr}(Q^{T}_{\mathbf{\theta}}Q _{\mathbf{\theta}}),\] (Dr. FERMI) where \(Q^{\mathsf{tr}}_{\mathbf{\theta}}\) is obtained by plugging \(\mathbb{P}_{\mathsf{tr}}\) into (6), and \(\mathcal{B}(Q^{\mathsf{tr}}_{\mathbf{\theta}},\epsilon)\) is the uncertainty set/ball around \(Q^{\mathsf{tr}}_{\mathbf{\theta}}\). ### Simplification of Dr. FERMI The formulation (Dr. FERMI) is of the min-max form and could be challenging to solve at first glance (Razaviyayn et al., 2020; Daskalakis et al., 2021). The following theorem shows how (Dr. FERMI) can be simplified for various choices of the uncertainty set \(\mathcal{B}(Q^{\mathsf{tr}}_{\mathbf{\theta}},\epsilon)\), leading to the development of efficient optimization algorithms. 
The proof of this theorem is deferred to Appendix A. **Theorem 1**: _Define \(\mathcal{B}(Q^{\mathsf{tr}}_{\mathbf{\theta}},\epsilon,p)=\{Q:\|\sigma(Q)-\sigma (Q^{\mathsf{tr}}_{\mathbf{\theta}})\|_{p}\leq\epsilon\}\), where \(\sigma(Q)\) is the vector of singular values of \(Q\). Then:_ _a) for \(p=1\), we have:_ \[\max_{Q_{\mathbf{\theta}}\in\mathcal{B}(Q^{\mathsf{tr}}_{\mathbf{\theta}},\epsilon,p),\sigma_{1}(Q_{\mathbf{\theta}})=1}\operatorname{Tr}(Q^{T}_{\mathbf{\theta}}Q_{\mathbf{ \theta}})=\operatorname{Tr}\Bigl{(}(Q^{\mathsf{tr}}_{\mathbf{\theta}})^{T}Q^{ \mathsf{tr}}_{\mathbf{\theta}}\Bigr{)}+2\epsilon\sigma_{2}\Bigl{(}Q^{\mathsf{tr} }_{\mathbf{\theta}}\Bigr{)}+\epsilon^{2}, \tag{9}\] _which means that, up to the additive constant \(\epsilon^{2}\), the robust fair regularizer can be described as \(\rho_{\text{ERMI}}(\hat{y},s)+2\epsilon\rho_{\text{HGR}}(\hat{y},s)\) when \(p=1\)._ _b) For \(p=2\),_ \[\max_{Q_{\mathbf{\theta}}\in\mathcal{B}(Q^{\mathsf{tr}}_{\mathbf{\theta}},\epsilon,p) }\operatorname{Tr}(Q^{T}_{\mathbf{\theta}}Q_{\mathbf{\theta}})=\operatorname{Tr} \Bigl{(}(Q^{\mathsf{tr}}_{\mathbf{\theta}})^{T}Q^{\mathsf{tr}}_{\mathbf{\theta}}\Bigr{)} +2\epsilon\sqrt{\operatorname{Tr}\Bigl{(}(Q^{\mathsf{tr}}_{\mathbf{\theta}})^{T}Q ^{\mathsf{tr}}_{\mathbf{\theta}}\Bigr{)}}+\epsilon^{2}, \tag{10}\] _which means that when \(p=2\), the regularizer is equivalent, up to the constant \(\epsilon^{2}\), to \(\rho_{\text{ERMI}}(\hat{y},s)+2\epsilon\sqrt{\rho_{\text{ERMI}}(\hat{y},s)}\)._ _c) For \(p=\infty\), assume that \(Q^{\mathsf{tr}}_{\mathbf{\theta}}=U_{\mathbf{\theta}}\Sigma_{\mathbf{\theta}}V^{T}_{\mathbf{ \theta}}\) is the singular value decomposition of \(Q^{\mathsf{tr}}_{\mathbf{\theta}}\). Therefore:_ \[\max_{Q_{\mathbf{\theta}}\in\mathcal{B}(Q^{\mathsf{tr}}_{\mathbf{\theta}},\epsilon,p) }\operatorname{Tr}(Q^{T}_{\mathbf{\theta}}Q_{\mathbf{\theta}})=\operatorname{Tr}\Bigl{(} (Q^{\mathsf{tr}}_{\mathbf{\theta}})^{T}Q^{\mathsf{tr}}_{\mathbf{\theta}}\Bigr{)}+2 \epsilon\operatorname{Tr}(|\Sigma_{\mathbf{\theta}}|)+\epsilon^{2}. \tag{11}\] **Remark 2**: _In the case of \(p=1\), we enforced \(\sigma_{1}(Q)=1\). Without this condition, as we will see in the proof, the maximum is attained for a \(Q\) which does not correspond to a probability vector._ ### Generalization of Dr. FERMI A natural question on the effectiveness of the proposed DRO formulation for fair empirical risk minimization is whether we can guarantee the boundedness of the fairness violation by optimizing (Dr. FERMI). The following theorem shows that if \(\lambda\) and \(\epsilon\) are appropriately chosen, then the fairness violation of the solution of (Dr. FERMI) can be reduced to any small positive value on the unseen test data under any distribution shift. This result is proven for the cross-entropy loss (covering logistic regression and neural networks with cross-entropy loss). Following the proof steps, the theorem can be extended to other loss functions that are bounded below (such as the hinge loss and the mean squared loss). **Theorem 3**: _Let \(\ell(\cdot,\cdot)\) in (Dr. FERMI) be the cross-entropy loss. For any shift in the distribution of data from the source to the target domain, and for any given \(\gamma>0\), there exists a pair \((\lambda,\epsilon)\) such that the solution to (Dr. FERMI) has a demographic parity violation less than or equal to \(\gamma\) on the target domain (unseen test data)._ The proof is deferred to Appendix B. A discussion on the significance and limitations of the theorem can be found in Appendix D. 
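To make Theorem 1 concrete, the following small numerical sketch (ours, with a hypothetical joint distribution) evaluates the three closed-form robust regularizers (9)-(11) from the singular values of \(Q^{\mathsf{tr}}_{\mathbf{\theta}}\).

```python
import numpy as np

joint = np.array([[0.3, 0.2],      # hypothetical P(y_hat = i, s = j); entries sum to 1
                  [0.1, 0.4]])
Q_tr = joint / np.sqrt(np.outer(joint.sum(axis=1), joint.sum(axis=0)))  # matrix Q of (6)

sigma = np.linalg.svd(Q_tr, compute_uv=False)   # singular values; sigma[0] == 1 here
eps = 0.1
ermi = np.sum(sigma ** 2)                       # Tr(Q_tr^T Q_tr) = rho_ERMI
robust_p1 = ermi + 2 * eps * sigma[1] + eps**2         # (9):  ERMI + 2*eps*HGR + eps^2
robust_p2 = ermi + 2 * eps * np.sqrt(ermi) + eps**2    # (10): ERMI + 2*eps*sqrt(ERMI) + eps^2
robust_pinf = ermi + 2 * eps * np.sum(sigma) + eps**2  # (11): Tr(|Sigma|) = sum of singular values
```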
## 4 Algorithms for Solving (Dr. FERMI) In this section, we utilize Theorem 1 to first develop efficient deterministic algorithms for solving (Dr. FERMI). Building on that, we will then develop stochastic (mini-batch) algorithms. ### Deterministic (Full-Batch) Algorithm Let us first start by developing an algorithm for the case of \(p=1\), as in (9). Notice that \[\sigma_{2}(Q_{\mathbf{\theta}})=\max_{\mathbf{v}\perp\mathbf{v} _{1},\,\|\mathbf{v}\|^{2}\leq 1}\sqrt{\mathbf{v}^{T}Q_{\mathbf{\theta}}^{T}Q_{\mathbf{ \theta}}\mathbf{v}}, \tag{12}\] where \(\mathbf{v}_{1}=\left[\sqrt{\mathbb{P}(S=s_{1})},\ldots,\sqrt{\mathbb{P}(S=s_{k })}\right]\); see also (Baharlouei et al., 2019, Equation (6)). Further, as described in (Lowy et al., 2022, Proposition 1), we have \[\widehat{\rho}_{\text{ERMI}}(\hat{y},s,y;\mathbb{P})=\max_{W}\{-\text{Tr}(W \widehat{P}_{\hat{y}}W^{T})+2\text{Tr}(W\widehat{P}_{\hat{y},s}\widehat{P}_{s }^{-1/2})-1\}, \tag{13}\] where \(\widehat{\rho}_{\text{ERMI}}\) is the ERMI correlation measure (5) defined on the training distribution \(\mathbb{P}_{\text{tr}}\). Here, \(\widehat{P}_{\hat{y},s}\) is a probability matrix whose \((i,j)\)-th entry equals \(\mathbb{P}_{\text{tr}}(\hat{y}=i,s=j)\). Similarly, we define \(\widehat{P}_{\hat{y}}\). Combining (9) with (12) and (13) leads to a min-max reformulation of (Dr. FERMI) in the variables \((W,\mathbf{v},\mathbf{\theta})\) (see Appendix E for details). This reformulation gives rise to Algorithm 1. At each iteration of this algorithm, we maximize w.r.t. the \(\mathbf{v}\) and \(W\) variables, followed by a step of gradient descent w.r.t. the \(\mathbf{\theta}\) variable. One can establish the convergence of this algorithm following standard approaches in the literature; see Theorem 7 in Appendix G. Note that \(\widehat{P}_{\hat{y}}\) and \(\widehat{P}_{\hat{y},s}\) are functions of \(\mathbf{\theta}\) (through \(\hat{y}\)). Thus, it is crucial to recompute them after each update of \(\mathbf{\theta}\). However, as \(\widehat{P}_{s}\) does not depend on \(\mathbf{\theta}\), it remains the same throughout the training procedure. Details on the gradient computations in Algorithm 1 are deferred to Appendix E. Following similar steps, we develop algorithms for \(L_{2}\) and \(L_{\infty}\) uncertainty balls using (10) and (11); see Appendix H. ### Stochastic (Mini-Batch) Algorithm In large-scale learning problems, we can only use small batches of data points to update the parameters at each iteration. Thus, it is crucial to have stochastic/minibatch algorithms. The convergence of such algorithms relies on the unbiasedness of the (minibatch) gradient at each iteration. To develop a stochastic version of (Dr. FERMI), one may naively follow the steps of Algorithm 1 (or a gradient descent-ascent version of it) using mini-batches of data. However, this heuristic does not converge (and also leads to statistically biased measures of fairness). It is known that for the stochastic gradient descent algorithm to converge, the update direction must be a statistically unbiased estimator of the actual gradient of the objective function (Polyak, 1990; Nemirovski et al., 2009); thus, the naive heuristic fails to converge and may even lead to unfair trained models. The following lemma rewrites the objective in (10) so that it becomes a summation/expectation over the training data and hence provides a statistically unbiased estimator of the gradient for stochastic first-order algorithms. 
**Lemma 4**: _Let \(\mathcal{B}(Q_{\mathbf{\theta}}^{\text{tr}},\epsilon)=\{Q:\|Q-Q_{\mathbf{\theta}}^{ \text{tr}}\|_{2}\leq\epsilon\}\). Then (Dr. FERMI) is equivalent to:_ \[\min_{\alpha>0,\,\mathbf{\theta}}\ \max_{W\in\mathbb{R}^{h\times d}}\frac{1}{n} \big{[}\sum_{i=1}^{n}\ell(\mathbf{z}_{i};\mathbf{\theta})+\lambda(1+\epsilon\alpha) \psi(\mathbf{z}_{i};\mathbf{\theta},W)\big{]}+\frac{\lambda\epsilon}{\alpha}, \tag{14}\] _where \(\psi\) is a quadratic concave function in \(W\) defined as:_ \[\psi(\mathbf{z}_{i};\mathbf{\theta},W):=-\text{Tr}(W\mathbb{E}[\hat{\mathbf{y}}_{ \mathbf{\theta}}(\mathbf{x}_{i})\hat{\mathbf{y}}_{\mathbf{\theta}}(\mathbf{x}_{i})^{T} |\mathbf{x}_{i}]W^{T})+2\text{Tr}(W\mathbb{E}[\hat{\mathbf{y}}_{\mathbf{\theta}} (\mathbf{x}_{i})\mathbf{s}_{i}^{T}|\mathbf{x}_{i},s_{i}]\widehat{P}_{s}^{-1/ 2}) \tag{15}\] Lemma 4 is key to the development of our convergent stochastic algorithms (proof in Appendix C). It rewrites (Dr. FERMI) as an average over the \(n\) training data points by introducing the new optimization variables \(\alpha\) and \(W\). Consequently, the gradient of the new objective function w.r.t. a randomly drawn batch of data points is an unbiased estimator of the full-batch gradient. Having an unbiased estimator of the gradient, we can apply the Stochastic Gradient Descent Ascent (SGDA) algorithm to solve (14). The details of the algorithm and its convergence proof can be found in Appendix F. ## 5 Robustification of the Model Accuracy In Section 4, we analyzed the DRO formulation where \(\mathcal{U}_{1}(\mathbb{P}_{\text{tr}})\) is a singleton in (8). However, to make the accuracy of the model robust, we can consider various choices for \(\mathcal{U}_{1}(\mathbb{P}_{\text{tr}})\). For example, we can consider the set of distributions with a bounded likelihood ratio to \(\mathbb{P}_{\text{tr}}\), which leads to Conditional Value at Risk (CVaR) (Rockafellar et al., 2000). Using the dual reformulation of CVaR, (8) can be simplified to (see (Levy et al., 2020, Appendix A) and (Shapiro et al., 2021, Chapter 6)): \[\min_{\mathbf{\theta},\eta>0}\ \frac{1}{\alpha}\mathbb{E}_{(\mathbf{x},y)\sim \mathbb{P}_{\text{tr}}}\Big{[}\ell\big{(}\hat{y}_{\mathbf{\theta}}(\mathbf{x}),y \big{)}-\eta\Big{]}_{+}+\eta+\max_{\mathbb{P}\in\mathcal{U}_{2}(\mathbb{P}_{ \text{tr}})}\lambda\rho\Big{(}\hat{y}_{\mathbf{\theta}}(\mathbf{x}),y,s,\mathbb{P }\Big{)},\] (CVaR-Dr. FERMI) where \([x]_{+}=\max(x,0)\) denotes the projection onto the set of non-negative numbers and \(\mathcal{U}_{2}(\mathbb{P}_{\text{tr}})\) is defined in the same way as in subsection 3.2. All methods and algorithms in Section 4 can be applied to (CVaR-Dr. FERMI). Compared to ERM, the CVaR formulation has one more minimization parameter, \(\eta\), which can be updated alongside \(\mathbf{\theta}\) by the (stochastic) gradient descent algorithm. Similarly, one can use the group DRO formulation for robustifying the accuracy (Sagawa et al., 2019). Assume that \(\mathcal{G}\) contains the groups of data (in the experiments, we consider each sensitive attribute level as one group). Then, (8) with group DRO as the measure of accuracy can be written as: \[\min_{\mathbf{\theta}}\ \max_{g\in\mathcal{G}}\ \mathbb{E}_{(\mathbf{x},y) \sim\mathbb{P}_{g}}[\ell(\hat{y}_{\mathbf{\theta}}(\mathbf{x}),y)]+\max_{\mathbb{P }\in\mathcal{U}_{2}(\mathbb{P}_{\text{tr}})}\lambda\rho\Big{(}\hat{y}_{\mathbf{ \theta}}(\mathbf{x}),y,s,\mathbb{P}\Big{)},\] (Group-Dr. FERMI) Different variations of Algorithm 1 for optimizing (CVaR-Dr. FERMI) and (Group-Dr. FERMI) are presented in Appendix I. These developed algorithms are evaluated in the next section. 
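As a minimal PyTorch sketch (ours, with placeholder losses) of the CVaR accuracy term in (CVaR-Dr. FERMI), where \(\eta\) is treated as an extra decision variable:

```python
import torch

def cvar_term(per_sample_losses, eta, alpha=0.1):
    """(1/alpha) * E[(loss - eta)_+] + eta, the dual form of CVaR at level alpha."""
    return torch.relu(per_sample_losses - eta).mean() / alpha + eta

losses = torch.rand(64)                       # placeholder per-sample losses
eta = torch.tensor(0.5, requires_grad=True)   # optimized jointly with the model parameters
cvar_term(losses, eta).backward()             # eta receives a gradient like any other parameter
```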
## 6 Numerical Experiments We evaluate the effectiveness of our proposed methods in several experiments. We use two well-known group fairness measures, the Demographic Parity Violation (DPV) and the Equality of Opportunity Violation (EOV) (Hardt et al., 2016), defined as \[\text{DPV}=|P(\hat{y}=1|s=1)-P(\hat{y}=1|s=0)|,\ \text{and EOV}=|P(\hat{y}=1|s=1,y=1)-P(\hat{y}=1|s=0,y=1)|.\] The details of our hyperparameter tuning are provided in Appendix L. All implementations for the experiments are available at [https://github.com/optimization-for-data-driven-science/DR-FERMI](https://github.com/optimization-for-data-driven-science/DR-FERMI). More plots and experiments are available in Appendix K and Appendix J. ### Modifying Benchmark Datasets to Contain Hybrid Distribution Shifts Standard benchmark datasets in algorithmic fairness, such as Adult, German Credit, and COMPAS (Dua and Graff, 2017), include test and training data that follow the same distribution. To impose distribution shifts, we choose a subset of the test data to generate datasets containing demographic and label shifts. The demographic shift is the change in the population of different sensitive sub-populations between the source and the target distributions, i.e., \(\widehat{\mathbb{P}}_{s}(s)\neq\mathbb{P}_{s}^{*}(s)\). Similarly, the label shift means \(\widehat{\mathbb{P}}_{y}(y)\neq\mathbb{P}_{y}^{*}(y)\). The Adult training (and test) data has the following characteristics: \[\widehat{\mathbb{P}}_{s}(s=\text{`Woman'})=33.07\%\quad\text{and}\quad \widehat{\mathbb{P}}_{s}(s=\text{`Woman'}|\ y=\text{`High Income'})=15.03\%\] We generate two different test datasets containing distribution shifts, where the probability \(\widehat{\mathbb{P}}_{s}(s=\text{`Woman'}|y=\text{`High Income'})\) is \(10\%\) or \(20\%\) (by undersampling or oversampling high-income women, respectively). We train the model on the original dataset and evaluate the performance and fairness (in terms of demographic parity violation) on the two newly generated datasets in Figure 1. ### Robustness Against Distribution Shift in Real Datasets While the original Adult dataset (Dua and Graff, 2017) has no considerable distribution shift, the relatively new ACS-Income dataset (Ding et al., 2021), constructed from US census records, contains natural distribution shifts, since different US states can be chosen as source and target datasets. The underlying task defined on the dataset is to predict whether a given person is high-income or low-income. The sensitive attributes are gender and race. Using this data, we perform experiments on binary and non-binary sensitive attribute cases. In the binary setting, we only consider gender as the sensitive attribute (in the dataset there are only two genders). In the non-binary case, we have four different sensitive groups: White-Male, White-Female, Black-Male, and Black-Female (a combination of race and gender). One can observe that \(P_{s}(s)\) and \(P_{y,s}(y,s)\) have large fluctuations over different states. Thus, these datasets, as mentioned in the paper (Ding et al., 2021), naturally contain distribution shifts with respect to different states. 
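The two fairness metrics defined at the beginning of this section translate directly into code; a small NumPy sketch (ours) for binary predictions and attributes:

```python
import numpy as np

def dpv(y_pred, s):
    """|P(y_hat=1 | s=1) - P(y_hat=1 | s=0)| for binary arrays y_pred and s."""
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def eov(y_pred, y_true, s):
    """The same gap, restricted to the positive class y=1 (equality of opportunity)."""
    pos = y_true == 1
    return abs(y_pred[(s == 1) & pos].mean() - y_pred[(s == 0) & pos].mean())
```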
In Figure 2, we apply the Hardt Post-Processing approach (Hardt et al., 2016), the Zemel Pre-processing method (Zemel et al., 2013), FERMI (Lowy et al., 2022), Robust Log-Loss under covariate shift (Rezaei et al., 2021), Curvature matching with MMD (Wang et al., 2022), and our distributionally robust method under the \(L_{2}\) norm with three accuracy variations (Dr. FERMI, CVaR-Dr. FERMI, and Group-Dr. FERMI), on the new Adult (ACS-Income) dataset (Ding et al., 2021). The methods are trained on a single state (California, Texas, Utah, or Virginia) and evaluated/tested on all \(50\) states in terms of prediction accuracy and fairness violation under the equality of opportunity notion. We chose California and Texas, in accordance with other papers in the literature, as two datasets with a large number of samples. Further, we chose Utah and Virginia as the two states with the highest and lowest initial equality of opportunity violations. Each method's horizontal and vertical range in Figure 2 denotes the \(25\)th and \(75\)th percentiles of accuracy and fairness violations across the \(50\) states, respectively. Thus, if a line is wider, it is _less robust_ to the distribution shift. Ideally, we prefer models whose corresponding curves are on the upper-left side of the plot. Figure 2 shows that Dr. FERMI with the \(L_{2}\) uncertainty ball achieves better robust fair performance than the other approaches across the \(50\) states when the training (in-distribution) dataset is any of the four aforementioned states. When maintaining more accuracy is a priority, one can use CVaR or Group DRO as the objective function for the accuracy part instead of ERM. As a note, we can see that learning the model on a dataset with a larger initial fairness gap (Utah) leads to better generalization in terms of fairness violation, while learning on a dataset with a smaller fairness gap (Virginia) leads to poorer fairness in the test phase. ### Handling Distribution Shift in Non-Binary Case We run different versions of Dr. FERMI alongside Mary et al. (2019), Baharlouei et al. (2019), Cho et al. (2020), and Lowy et al. (2022) as the baselines supporting multiple sensitive attributes. The algorithms are executed on the ACS PUMS (Ding et al., 2021) dataset with gender and race as the sensitive attributes. The accuracy and equality of opportunity violations are calculated on \(50\) different datasets, and the average is reported in Table 1. Training is done on California data. Dr. FERMI under the \(L_{2}\) ball has the best performance in terms of average fairness violation across the \(50\) states. For robust accuracy, we suggest using CVaR to robustify the accuracy. Note that, in all cases, the training equality of opportunity violation is set to **0.02** for all methods in the table. Figure 1: Performance of the trained fair models on test datasets that have different distributions than the training data. In the left figure, we undersampled the high-income minority group (women), while we oversampled in the right figure. The proposed methods (Dr. FERMI and CVaR-Dr. FERMI) generally outperform other benchmarks. In both figures, either Dr. FERMI or CVaR-Dr. FERMI can reach lower demographic parity violations, given any accuracy level. The red dashed line represents the Naïve baseline where the model outputs zero with probability \(p\). By increasing \(p\), the model becomes fairer at the cost of lower accuracy. **Evaluation of Stochastic DR ERMI across batch sizes:** 
By reducing the batch size from full-batch to small batches, we observed that our algorithm's performance remains nearly the same, while other benchmarks' performance varies significantly. Further, we report the time and memory consumption of Dr. FERMI and other benchmarks on the experiment presented in Table 3. The results of these experiments are available in Appendix J. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Method** & **Tr Accuracy** & **Test Accuracy** & **Test EO Violation** \\ \hline Mary et al., 2019 & 71.41\% & 68.35\% & 0.1132 \\ \hline Cho et al., 2020 & 71.84\% & 68.91\% & 0.1347 \\ \hline Baharlouei et al., 2020 & 72.77\% & 69.44\% & 0.0652 \\ \hline Lowy et al., 2022 & 73.81\% & 70.22\% & 0.0512 \\ \hline **Dr. FERMI-\(L_{1}\)** & 73.45\% & 70.09\% & 0.0392 \\ \hline **Dr. FERMI-\(L_{2}\)** & 73.12\% & 69.71\% & **0.0346** \\ \hline **Dr. FERMI-\(L_{\infty}\)** & 73.57\% & 69.88\% & 0.0359 \\ \hline **CVaR-Dr. FERMI-\(L_{1}\)** & **74.21**\% & **70.94\%** & 0.0471 \\ \hline **CVaR-Dr. FERMI-\(L_{2}\)** & 73.84\% & 70.26\% & 0.0429 \\ \hline **CVaR-Dr. FERMI-\(L_{\infty}\)** & 73.92\% & 70.45\% & 0.0466 \\ \hline \end{tabular} \end{table} Table 1: Train and Test Accuracy and Fairness Violation of Different Methods on the ACS PUMS dataset. Dr. FERMI under the \(L_{2}\) ball achieves the best fairness violation on average among different states. The distributionally robust algorithm under the \(L_{1}\) ball has better accuracy but is less fair than \(L_{2}\). Figure 2: Learning fair logistic regression models on four different states as the source (in-distribution) domain and evaluating them on all \(50\) states of the US.
2309.13095
Multiple Independent DE Optimizations to Tackle Uncertainty and Variability in Demand in Inventory Management
To determine the effectiveness of the metaheuristic Differential Evolution (DE) optimization strategy for inventory management (IM) in the context of stochastic demand, this empirical study undertakes a thorough investigation. The primary objective is to discern the most effective strategy for minimizing inventory costs within the context of uncertain demand patterns. Inventory costs refer to the expenses associated with holding and managing inventory within a business. The approach combines a continuous review of IM policies with a Monte Carlo Simulation (MCS). To find the optimal solution, the study focuses on meta-heuristic approaches and compares multiple algorithms. The outcomes reveal that the DE algorithm outperforms its counterparts in optimizing IM. To fine-tune the parameters, the study employs the Latin Hypercube Sampling (LHS) statistical method. To determine the final solution, this study employs a method that combines the outcomes of multiple independent DE optimizations, each initialized with different random initial conditions. This approach introduces a novel and promising dimension to the field of inventory management, offering potential enhancements in performance and cost efficiency, especially in the presence of stochastic demand patterns.
Sarit Maitra, Sukanya Kundu, Vivek Mishra
2023-09-22T13:15:02Z
http://arxiv.org/abs/2309.13095v2
Multiple Independent DE Optimizations to Tackle Uncertainty and Variability in Demand in Inventory Management ###### Abstract To determine the effectiveness of the metaheuristic Differential Evolution (DE) optimization strategy for inventory management (IM) in the context of stochastic demand, this empirical study undertakes a thorough investigation. The primary objective is to discern the most effective strategy for minimizing inventory costs within the context of uncertain demand patterns. Inventory costs refer to the expenses associated with holding and managing inventory within a business. The approach combines a continuous review of IM policies with a Monte Carlo Simulation (MCS). To find the optimal solution, the study focuses on meta-heuristic approaches and compares multiple algorithms. The outcomes reveal that the DE algorithm outperforms its counterparts in optimizing IM. To fine-tune the parameters, the study employs the Latin Hypercube Sampling (LHS) statistical method. To determine the final solution, this study employs a method that combines the outcomes of multiple independent DE optimizations, each initialized with different random initial conditions. This approach introduces a novel and promising dimension to the field of inventory management, offering potential enhancements in performance and cost efficiency, especially in the presence of stochastic demand patterns. differential evolution; genetic algorithm; inventory management; non-linear optimization; stochastic demand. ## I Introduction Inventory management (IM) is a critical function linked to business profitability in modern organizations. With increasing uncertainties and complexities, businesses need data-driven computational techniques to manage inventory. Real-world issues like stockouts, excess inventory, and revenue losses can be addressed using mathematical optimization approaches. In the past, several studies have made significant contributions to IM and highlighted the importance of sophisticated computational techniques for optimizing inventory decisions under demand variations (e.g., [19]; [33]; [6] etc.). Building on their arguments, recent studies highlighted the growing complexities in IM driven by demand uncertainties, which have led to the development of computation-intensive simulation and optimization methods ([44] and [10]). Several recent studies have emphasized the relevance of optimization throughout the value chain (e.g., [24], [25], [11], etc.). Despite several available works, the advancement of technology, globalization, and evolving customer expectations have made IM a complex task and an active research area. Researchers are constantly exploring innovative approaches and methodologies to handle this complexity effectively. Through this work, we address the questions of how to effectively manage inventory with stochastic demand, focusing on a continuous review policy approach, and how to optimize the total cost. Meta-heuristic optimization techniques such as the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), Metaheuristics (MH) with Simulated Annealing (SA), Monte Carlo Simulation (MCS) with a Bayesian Algorithm (BA), and Differential Evolution (DE) have been explored in this work. The findings reveal that DE is the most effective and simplest heuristic optimization for dealing with stochastic demand. 
In the end, this work implements Adaptive DE by combining several DE variants and dynamically allotting computing resources based on their individual historical performance. The method helps to mitigate the risk of local optima and enhance the optimization process. The goal is to explore different regions of the parameter space to find a robust and reliable solution. The efficacy of the optimized policy may be sensitive to the demand distribution. Therefore, this work performed a sensitivity analysis to assess the robustness of the policy under various scenarios. The major contribution of this study is to experiment with a simulation-optimization model that can be applied with DE to select a nearly ideal IM policy under stochastic demand. The findings show that the proposed simulation-optimization approach efficiently solves for inventory policies by exploiting the structure of the objective function rather than an exhaustive search. ## II Previous Work The importance of optimization is reflected in several studies (e.g., [25]; [24]; [11]; etc.). Optimization approaches offer a systematic method to further enhance inventory management ([31]; [28]). Studies (e.g., [33]; [34]; [26]), and subsequently industry reports, suggest that IM costs account for a sizeable portion, approximately 20-40%, of total supply chain costs. It has been proposed that an effective simulation-optimization approach can bring a 16% reduction in costs by implementing the optimal policy [12]. Several studies have discussed different optimization techniques ([14]; [4]; [42]). However, none of the works guaranteed an optimal solution if the original assumptions and considerations were violated [15]. On the same note, when using optimization techniques to solve IM problems, it is important to carefully consider the assumptions and constraints underlying the model [29]. These studies highlighted the importance of incorporating uncertainty and variability into the model because real-world inventory systems are often subject to such factors. Moreover, IM under uncertainty is challenging to solve due to the non-linearity of the model and several local optimum solutions ([8]; [17]). In recent times, metaheuristic algorithms have frequently been employed as powerful solutions for IM ([13]; [7]). Owing to their ability to effectively search the solution space of complicated problems, meta-heuristic algorithms have received considerable attention in recent years (e.g., [1]; [9]; [41]; [7]; [34]; etc.). A recent study employed meta-heuristic algorithms for inventory optimization by presenting GWO and WOA as two novel solution approaches [30]. Though GWO was introduced to solve optimization problems (e.g., [23]), some researchers criticized GWO for suffering from a lack of population diversity ([29]; [39]). To overcome this limitation, an improved version of GWO was presented [27]. Some authors emphasized the application of SA in the context of IM and the efficiency of SA in resolving the difficulties brought on by the unpredictability of demand and the requirement to optimize inventory policies under uncertainty (e.g., [40]; [21] etc.). To overcome the limitations and improve algorithm efficiency, DE was introduced as an effective optimization method [36]. It was compared with different optimization approaches, and the DE method outperformed all other approaches in terms of the number of function evaluations required to locate a global minimum of the test functions. 
Meta-heuristic approaches for inventory forecasting were also studied, which revealed the superior performance of DE even compared to CNN-LSTM [43]. The superiority of DE was supported by an exhaustive literature review, which revealed that 158 out of 192 surveyed papers were published between 2016 and 2021, showing that academics have continually improved DE to increase its effectiveness and efficiency in handling a variety of optimization challenges [2]. Moreover, the MCS method is commonly used to propagate the uncertainties of random inputs in the case of stochastic demand (e.g., [16]; [14]; [31]). This establishes that simulation is an integral part of IM under stochastic demand. MCS allows the incorporation of stochastic variability in demand patterns. The growing body of work in metaheuristic optimization indicates ongoing research efforts to improve the effectiveness of these techniques and their application in solving various optimization challenges in inventory management. ## III Methodology A comprehensive three-stage methodology was adopted in this study to analyze and optimize the IM policy. Fig. 1 displays the methodological framework applied in this study, with shaded areas for the various stages. First, the demand for products was collected over 365 days. The data were simulated to estimate the probability of experiencing various demand levels. These simulations allowed us to create multiple scenarios and observe the potential outcomes. The policies considered herein include continuous review and cross-docking. The performance of these policies was compared based on their ability to minimize total costs while ensuring an acceptable level of service. \[Total_{cost} =\;Purchase_{cost}\;+\;Order_{cost}\;+\;Holding_{cost}\;+\;Stockout_{cost} \tag{1}\] By considering all these costs, we aim to develop an inventory policy that minimizes costs and maximizes profits. The results of the simulations are analyzed and interpreted to provide insights into the effectiveness of each policy. Once the optimal policy is identified, the next goal is to determine the optimal inventory levels that balance the \(Total_{cost}\). With both goals in place, in stage three (blue shaded area), the various optimization techniques (e.g., GWO, MH + SA, MCS, WOA, MCS + BO, and DE) are employed. By employing the optimization technique and running simulations for a full year (365 days), we aim to fine-tune the inventory policy to minimize the \(Total_{cost}\) while considering the uncertainties in demand. In the final stage, sensitivity analysis is performed to assess convergence rates, solution quality, and computational efficiency. ## IV Data Analysis The business case selected here examines the sale of four distinct products and considers the adoption of a suitable IM policy. The goal is to minimize the total cost associated with purchasing, ordering, and holding inventory by optimizing inventory levels. We use historical demand data to calculate the central tendency of the data. Table I displays the statistics related to the four products. * Pr A, Pr B, Pr C, and Pr D are four distinct products. * \(Purchase_{costs}=\) cost of purchasing one unit of the item from the supplier. * \(Lead_{time}=\) time it takes for the supplier to deliver the item after placing an order. * Size = size or quantity of each item. * \(Selling_{price}=\) price at which each item is sold to the customers. * \(Starting_{stock}=\) initial stock level of each item in the inventory. 
* Mean = average demand for each item over a given period. * \(Std_{dev}=\) standard deviation of demand for each item over a given period. * \(Order_{cost}=\) cost of placing an order with the supplier. * \(Holding_{cost}=\) cost of holding one unit of inventory for a given period. * Probability = probability of a stock-out event occurring, i.e., the probability of demand exceeding the available inventory level. * \(Demand_{lead}=\) lead time demand for each item, i.e., the demand that is expected to occur during the lead time. Fig. 1: Methodological framework (Source: Authors) Fig. 2 displays the KDE plots of the demand distribution of the products over 365 days. The shapes of the curves provide insight into the underlying stochastic distribution of the data. The isolated peaks in the curves show potential outliers in the demand data. ### _ABC analysis_ ABC analysis is performed to categorize items based on their initial IM values. The analysis follows the Pareto Principle for the annual consumption value of each product. Considering CV = consumption value, \[CV_{annual}=\ Demand\ *\ Selling_{price} \tag{2}\] \[CV_{cumulative}\ =\ \sum(CV_{annual}) \tag{3}\] \[\textit{Cumulative\%}=\sum\big{(}CV_{annual}/CV_{cumulative}\big{)} \tag{4}\] For the LHS-based parameter tuning, the number of experiments was set at 27 because, based on orthogonal arrays, it represents the total number of unique combinations for the specified parameter space. ## V Simulation & Inventory Systems CR (continuous review) is more suitable for managing inventory with stochastic demand ([22]; [3]; [36]). An extensive literature review of over seven decades suggests that continuous policy is the most employed policy in the stochastic inventory literature [28]. Taking a cue from their work, we examined both CR and CD (cross-docking) in our empirical analysis. MCS was used to simulate and observe the cost and inventory levels over multiple simulation runs (10,000) for a period of 365 days. Table III depicts the output; the reorder points and safety stocks for the analysis have been taken from Table II. The average cost and average inventory level are calculated as: \[average_{cost}=total_{cost}/(num_{simulations}*num_{periods})\] \[average_{inventory\_level}=total_{inventory\_level}/(num_{simulations}*num_{periods})\] The CD strategy involves minimizing the need for inventory storage by transferring products directly from the supplier to the customer. We implemented CD as additional logic within the IM simulation. ### _Optimization_ Multiple optimization algorithms (GWO, SA, MCS, MCS with BO, WOA, and DE) were tested to obtain the optimal cost. #### V-A1 Cost breakdown The total cost is computed based on the following parameters: * Purchase Cost is computed as \(unit\ purchase_{cost}*max(order_{quantities},0)\). * Order Cost is only applied when the order quantity is greater than zero and is computed as \(unit\ order_{cost}*(order_{quantities}>0)\). * Holding Cost is computed as \(unit\ holding_{cost}*max(inventory-demand,0)\). * Stockout Cost is computed as \(unit\ stockout_{cost}*max(demand-inventory,0)\). \[f(x)=\sum_{i=1}^{n}(PurchaseCost_{i}+OrderCost_{i}+HoldingCost_{i}+StockoutCost_{i}) \tag{6}\] These costs can be subtracted from the revenue to give the corresponding profit for that one realization of the year. 
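To make the cost computation and the DE search concrete, the following Python sketch (ours; all parameter values and bounds are hypothetical placeholders, not the values of Table I) simulates one year of daily costs along the lines of Eq. (6) for a single product and minimizes it with SciPy's best1bin DE, repeating the run from several random seeds in the spirit of the multiple independent optimizations described below.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical single-product parameters.
mu, sd = 20.0, 5.0                                           # daily demand mean / std
c_purchase, c_order, c_hold, c_stockout = 10.0, 50.0, 0.5, 2.0
lead_time = 3
reorder_level = mu * lead_time + np.sqrt(lead_time) * sd

rng = np.random.default_rng(0)
demand_path = np.clip(rng.normal(mu, sd, size=365), 0.0, None)  # one fixed demand realization

def yearly_cost(x):
    """Total cost over 365 days, starting from inventory level x[0]."""
    inventory, total = float(x[0]), 0.0
    for demand in demand_path:
        if inventory < reorder_level:                  # place a replenishment order
            qty = reorder_level - inventory
            total += c_order + c_purchase * qty        # order and purchase costs
            inventory += qty
        total += c_hold * max(inventory - demand, 0.0)       # holding cost
        total += c_stockout * max(demand - inventory, 0.0)   # stockout cost
        inventory = max(inventory - demand, 0.0)
    return total

# Several independent DE runs from different seeds; keep the best result.
runs = [differential_evolution(yearly_cost, bounds=[(0.0, 500.0)],
                               strategy="best1bin", seed=k) for k in range(5)]
best = min(runs, key=lambda r: r.fun)
```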
Eq. (7) formulates the annual profit, which is a future direction of this work: \[SP_{l}\sum_{t=1}^{365}S_{l,t}-\left(\left(\frac{20V_{l}}{365}\right)\sum_{t=1}^{365}I_{l,t}+N_{l}C_{o,l}+\sum_{t=1}^{365}c_{l}P_{l,t}\right) \tag{7}\] Our goal is to minimize costs. Table IV presents a summary of all the algorithms tested on the given parameters, with MCS used to simulate the data for 365 days. We chose meta-heuristic techniques because they are designed to tackle complex and non-linear optimization issues where typical optimization techniques struggle to find the global optimum. Based on the \(Total_{cost}\), the optimization method with the lowest total cost is DE (best1bin), with a corresponding cost of 250,774. A future direction of our work is to check whether DE can be further optimized. ### _Multiple Independent DE Optimizations_ In this approach, we performed optimization using multiple optimizers (five DE optimizers with different random initial conditions) in parallel to determine the best parameters and cost. By using this approach, we aim to mitigate the impact of random variations in the MCS and increase the likelihood of finding a robust and optimal solution for IM. Consider the following parameters: * \(purchase_{cost}=Pc_{1},Pc_{2},\ldots,Pc_{n}\), where \(Pc_{i}\) is the purchase cost of product \(i\) * \(lead_{time}=L_{1},L_{2},\ldots,L_{n}\), where \(L_{i}\) is the lead time of product \(i\) * sizes = \(s_{1},s_{2},\ldots,s_{n}\), where \(s_{i}\) is the size of product \(i\) * \(selling_{price}=SP_{1},SP_{2},\ldots,SP_{n}\), where \(SP_{i}\) is the selling price of product \(i\) * \(starting_{stock}=ss_{1},ss_{2},\ldots,ss_{n}\), where \(ss_{i}\) is the initial inventory level of product \(i\) * means = \(\mu_{1},\mu_{2},\ldots,\mu_{n}\), where \(\mu_{i}\) is the mean demand of product \(i\) * standard deviations = \(\sigma_{1},\sigma_{2},\ldots,\sigma_{n}\), where \(\sigma_{i}\) is the standard deviation of demand of product \(i\) * order costs = \(C_{1},C_{2},\ldots,C_{n}\), where \(C_{i}\) is the order cost of product \(i\) * \(holding_{cost}=V_{1},V_{2},\ldots,V_{n}\), where \(V_{i}\) is the holding cost of product \(i\) * probabilities = \(p_{1},p_{2},\ldots,p_{n}\), where \(p_{i}\) is the probability of demand for product \(i\) * demand lead = \(D_{1},D_{2},\ldots,D_{n}\), where \(D_{i}\) is the lead time demand of product \(i\) * Parameter space of the optimization: \(bounds=[(0,ss_{1}),(0,ss_{2}),\ldots,(0,ss_{n})]\), where each tuple represents the lower and upper bounds of the inventory level of the respective product. MCS and the objective function \(f(x)\), where \(x=x_{1},x_{2},x_{3},\ldots,x_{n}\) represents the inventory levels: \[reorder\ levels=[\mu_{1}*L_{1}+\sqrt{L_{1}}*\sigma_{1},\ \ldots,\ \mu_{n}*L_{n}+\sqrt{L_{n}}*\sigma_{n}]\] \[order_{quantity}=[\max(reorder_{level_{1}}-x_{1},0),\ \ldots,\ \max(reorder_{level_{n}}-x_{n},0)]\] \(total_{cost}\): for each day and product, if \(x_{i}<reorder_{level_{i}}\), then \(order_{quantity}=order_{quantity_{i}}\) (this checks whether the current inventory level is below the reorder level; if it is, the order quantity is set to the predetermined value for that product, \(order_{quantity_{i}}\), indicating that an order should be placed to replenish the inventory). 
* \(increaseInventory(x_{i})=x_{i}+order_{quantity_{i}}\) * \(increaseTotalCost:totalCost=totalCost+order_{cost_{i}}+purchase_{cost_{i}}*order_{quantity_{i}}\) (the ordering and purchasing costs incurred by the replenishment) * \(decreaseInventory(x_{i})=x_{i}-daily_{demand_{i}}\); if \(x_{i}<0\), set \(x_{i}=0\) and \(totalCost=totalCost+holding_{cost_{i}}/2\) * if \((d+1)\ \%\ lead_{times_{i}}=0\), decrease inventory again: \(x_{i}=x_{i}-daily_{demand_{i}}\) and \(totalCost=totalCost+holding_{cost_{i}}*x_{i}\) \[mean_{cost}=\frac{1}{num_{samples}}\sum_{s=1}^{num_{samples}}total_{cost_{s}} \tag{8}\] \[MultipleDE_{optimization}=[result_{1},result_{2},\ldots,result_{num_{ensemble}}], \tag{9}\] where each result represents the optimization result of one DE member. Table V reports the output. The total cost has been marginally reduced from 250,774 (Table IV) to 249,128 (Table V). The stocks are optimized from 19,100 (1,220, 13,204, 3,359, 1,317) to 17,229 (2,567, 9,063, 4,277, 1,322). The mutation rate and crossover rate are adaptively adjusted during the process based on iteration success or failure, ensuring a balance between exploration and exploitation during optimization. ### _Sensitivity analysis_ Sensitivity analysis is employed to ensure the robustness of the multiple independent DE optimizations model under different scenarios. The analysis was performed on the population size parameter of the DE algorithm. Different population sizes (for example, 10, 20, 50, and 100) were tested to assess their impact on convergence behavior and the quality of the obtained solutions. In this case, the observed differences in total cost are small, indicating that the model's performance is stable and not heavily influenced by changes in population size. ### _Critical findings_ This study emphasizes the importance of optimization in IM, specifically in the context of stochastic demand and supply disruptions. DE is a successful method for establishing near-optimal inventory policies when combined with the best/1/bin mutation strategy, LHS, and multiple independent DE optimizations. Sensitivity analysis with varying population sizes confirmed the stability of the optimization model. ## VI Conclusion This study highlighted the significance of optimization techniques, particularly DE and multiple independent DE optimizations, in achieving cost-effective and robust inventory management strategies in the face of uncertain demand and supply disruptions. Empirical analysis was conducted using 365-day demand data, and the optimal policy was reported along with cost comparisons. The study also discussed the use of LHS for efficient parameter sampling. ABC analysis was applied to categorize items and assign Pareto classes to products. The optimal policy and inventory levels were determined through simulations and optimization techniques. A sensitivity analysis assessed convergence rate, solution quality, and computational efficiency. This comprehensive approach contributes to IM by improving efficiency and cost-effectiveness while addressing demand uncertainties.
2309.14922
Segment-Level Vectorized Beam Search Based on Partially Autoregressive Inference
Attention-based encoder-decoder models with autoregressive (AR) decoding have proven to be the dominant approach for automatic speech recognition (ASR) due to their superior accuracy. However, they often suffer from slow inference. This is primarily attributed to the incremental calculation of the decoder. This work proposes a partially AR framework, which employs segment-level vectorized beam search for improving the inference speed of an ASR model based on the hybrid connectionist temporal classification (CTC) attention-based architecture. It first generates an initial hypothesis using greedy CTC decoding, identifying low-confidence tokens based on their output probabilities. We then utilize the decoder to perform segment-level vectorized beam search on these tokens, re-predicting in parallel with minimal decoder calculations. Experimental results show that our method is 12 to 13 times faster in inference on the LibriSpeech corpus over AR decoding whilst preserving high accuracy.
Masao Someki, Nicholas Eng, Yosuke Higuchi, Shinji Watanabe
2023-09-26T13:30:58Z
http://arxiv.org/abs/2309.14922v2
# Segment-Level Vectorized Beam Search Based on Partially Autoregressive Inference ###### Abstract Attention-based encoder-decoder models with autoregressive (AR) decoding have proven to be the dominant approach for automatic speech recognition (ASR) due to their superior accuracy. However, they often suffer from slow inference. This is primarily attributed to the incremental calculation of the decoder. This work proposes a partially AR framework, which employs segment-level vectorized beam search for improving the inference speed of an ASR model based on the hybrid connectionist temporal classification (CTC) attention-based architecture. It first generates an initial hypothesis using greedy CTC decoding, identifying low-confidence tokens based on their output probabilities. We then utilize the decoder to perform segment-level vectorized beam search on these tokens, re-predicting in parallel with minimal decoder calculations. Experimental results show that our method is \(12\) to \(13\) times faster in inference on the LibriSpeech corpus over AR decoding whilst preserving high accuracy. Masao Someki\({}^{1}\), Nicholas Eng\({}^{2}\), Yosuke Higuchi\({}^{3}\), Shinji Watanabe\({}^{4}\)\({}^{1}\)IBM Japan Ltd., Japan \({}^{2}\)The University of Auckland, New Zealand \({}^{3}\)Waseda University, Japan \({}^{4}\)Carnegie Mellon University, USA Decoding algorithm, autoregressive, semi-autoregressive, hybrid CTC/attention, beam search ## 1 Introduction Due to recent advances in deep learning, automatic speech recognition (ASR) has witnessed remarkable achievements [1, 2, 3]. ASR plays an essential role in facilitating human-computer interaction by providing an interface for converting audio to text and demonstrating substantial applicability in real-world scenarios. In particular, the RNN-Transducer model [4], which operates relatively fast and can be extended to streaming speech recognition, is widely used in real-world applications. Recent research in ASR has also made significant progress in achieving higher accuracy through Attention-based Encoder-Decoder (AED) models [5, 6]. AED models have also been utilized in models such as Whisper [7] and speech translation [8], and their usefulness has been reevaluated. However, there are various trade-offs that can potentially limit its application in certain scenarios. For example, one can construct an ASR system with high accuracy through large and complex models, but this comes at the expense of increased computational cost and inference time. There has been extensive research devoted to alleviating the trade-off between recognition accuracy and inference speed. Inspired by the great success in neural machine translation [9, 10], non-autoregressive (NAR) models have been actively studied in the context of ASR, with the aim of achieving fast inference [11, 12, 13, 14]. Compared to the conventional autoregressive (AR) models [15, 16], which generates output at each step conditioned on the previously generated outputs, NAR models can produce multiple outputs simultaneously. This parallel computation accelerates the inference process of ASR, resulting in a significant reduction in inference time compared to AR models [17]. However, it remains a challenge to achieve the same level of recognition accuracy as AR models. In addition, NAR models require a complex model structure or a unique training strategy for the successful implementation of parallel generation during inference [17]. 
Regarding AR decoding, the decoder is trained to learn linguistic information. This allows us to utilize the relationship between the current token and the previous tokens. Furthermore, it is common to enhance accuracy by implementing the beam search algorithm, which is a heuristic search for the best hypothesis. However, because AR requires the previous tokens to estimate the next token, it is not possible to parallelize the inference within a single audio. As a consequence, achieving NAR-level speed through inference parallelization is challenging. In this paper, we focus on the difference in the trade-off balance between AR and NAR, and propose the partially autoregressive (PAR) method as a new, fast, and accurate decoding method. By utilizing the NAR approach and segment-level vectorized beam search, we can compensate for the weaknesses of both AR and NAR. This results in a better trade-off between accuracy and latency. In particular, we show that by optimizing the inference operations of a pre-trained hybrid connectionist temporal classification (CTC)/attention model, we are able to achieve NAR-level inference speed while maintaining its high accuracy, without any additional training of the model. In this paper, we make the following contributions: * We propose a new decoding framework that combines AR and NAR. * We demonstrate a better balance in the accuracy-latency trade-off without additional training. ## 2 Related Works Numerous studies have been conducted to balance the accuracy-latency trade-off. In general, there are two main approaches: reducing latency while preserving good accuracy, or improving accuracy while preserving fast inference speed. To reduce latency, there are several methods to lower the computational cost during inference. One such method is the pruning technique [18, 19, 20], which reduces the number of parameters in a trained model by identifying subnetworks with better performance and fewer parameters. Another method is knowledge distillation [21, 22], which trains a smaller model to reduce the number of parameters required during inference. To further reduce the number of search iterations, hypotheses can be predicted in a batch [23], or the search process can be stopped prematurely [24]. In addition, other approaches focus on utilizing machine resources more efficiently [25, 26, 27], rather than relying solely on the model architecture, to achieve faster processing. In this study, we parallelized the AR decoding of a single audio to achieve high-speed processing. The process is similar to batch processing of hypotheses, but in this case, the audio is segmented and parallelized in addition to the hypotheses. On the other hand, we can also improve the accuracy of a low-latency model, such as a non-autoregressive (NAR) model, which is typically less accurate than AR models. For example, by replacing parts of the model output with mask tokens and re-predicting them [11, 12, 13, 28], NAR can predict the target sequence more accurately than a standard NAR model. We can also improve the accuracy by utilizing an external language model [29, 30]. Additionally, several investigations have been undertaken to improve both the inference speed and accuracy, such as parallelizing the decoding process in streaming scenarios [31]. In our work, it is not possible to perform parallel processing of all tokens as investigated above. 
However, high accuracy can still be achieved by utilizing AR decoding in areas where parallel processing leads to decreased accuracy. Research has investigated the combination of AR and NAR methods [32]. The researchers trained a dual-mode Transformer decoder that can be used for both NAR- and AR-style processes. They first applied NAR-style decoding to generate several hypotheses, then used AR-style rescoring to select the best hypothesis. Our approach indeed uses AR decoding after NAR decoding. However, we utilize NAR decoding in our work to parallelize AR decoding within a single audio, so the purpose of NAR decoding is different. ## 3 Background In this section, to provide a clear understanding of how the proposed approach combines the benefits of autoregressive (AR) and non-autoregressive (NAR) decoding, we begin with a brief overview of the conventional encoder-decoder-based models, including hybrid CTC-attention [33] and Mask-CTC [13]. Additionally, we investigate the role of each encoder and decoder in influencing the overall inference time for each model. ### Autoregressive ASR with Hybrid CTC/Attention AR decoding is a recursive method for estimating target sequences. In the example shown in Fig. 1(a), the token \(s\) is estimated first and then used to estimate the next token, e.

Figure 1: Overview and comparison of AR, NAR, and PAR decoding. <sos> denotes the "start-of-sequence" symbol, and the mask token is denoted by # or red characters. PAR is a hybrid of AR and NAR methods, in which the masking process is applied first, followed by segment-level vectorized beam search.

Figure 2: Average inference time and proportion of time spent on the encoder, decoder, and CTC computation for (a) AR, as well as the encoder and decoder computation for (b) NAR architectures.

This simple left-to-right beam search, similar to [15], is widely used in ASR to search for the most likely transcriptions [16]. However, due to the iterative nature of beam search, this can greatly slow the inference time of AR decoding. For example, using the hybrid CTC/attention architecture, the beam search requires a decoder computation and a CTC rescoring process for each search iteration; hence, inference time increases if the search process is iterated a large number of times. This effect is shown in Fig. 2(a), which shows the proportion of time spent on the encoder and the decoding process, including decoder and CTC prefix score computation, as well as the average time spent during inference for various lengths of audio input. This shows that as the audio length and inference time increase, the proportion of time spent in the decoding process also increases. ### Non-autoregressive ASR with Mask-CTC NAR decoding is a method that avoids recursively estimating the target sequences to address the problem of slow inference. There are various architectures for this method [17], but this work focuses on the Mask-CTC [13] model. In the Mask-CTC model, we first estimate the target sequence from the greedy CTC decoding (gCTC) output, and then a mask is applied based on the CTC probabilities for each token. As illustrated in Fig. 1(b), the tokens e, a, and n are masked due to their lower probabilities. Then, the masked tokens # are estimated using the masked language model decoder. Since the decoder is only required for the masked tokens, and the number of decoder calculations is fixed, the number of decoder computations is significantly reduced compared to the AR method. Fig. 
2(b) illustrates the proportion of time spent on the encoder and decoder, and the inference time, for the Mask-CTC model. We set the number of decoder iterations to \(10\) for the measurement. Compared to Fig. 2(a), we can see that the encoder's share for NAR is larger than for AR. Considering that the number of encoder computations is always \(1\), a large encoder share means a short computation time. As audio length increases, the inference time also increases, but the impact on inference time is small. Therefore, the difference in inference time between AR and NAR can be seen as the difference in the proportions of the encoder and the decoding process. However, there is an accuracy issue with Mask-CTC-based NAR decoding. If the number of masked tokens is different from the actual number of tokens, the accuracy degrades significantly. For example, if the correct sequence is s, e, a and the masked sequence is s, f, f, a, the result of Mask-CTC becomes incorrect, since it tends to assign the same number of tokens as masks, e.g., s, e, a or s, e, a, a. Similarly, if the masked sequence is s, e, f, a, it will have an insertion error. In our preliminary experiments, we observed that almost 40% of the masked sequences do not match the proper length. Additionally, we have observed that insertion errors, caused by the absence of a target token, occur with a probability of 2%. ## 4 Partially Autoregressive Framework ### Partially Autoregressive Inference To address the issues inherent in AR and NAR decoding, we propose a _partially autoregressive_ (PAR) decoding method. Both AR and NAR models use the CTC and decoder components, but there is a significant difference in the trade-off balance based on their usage. AR uses an iterative process to predict the target sequence, so it does not have an accuracy issue with the target sequence length. NAR first uses gCTC results to reduce the number of tokens that need to be predicted with the decoder, resulting in fast inference. In PAR, we combine NAR-style CTC usage and AR-style decoder usage to fully utilize these two advantages. The architecture of PAR is illustrated in Fig. 1(c). PAR first generates a sequence of tokens by the gCTC approach, then applies the mask process using the Mask-CTC method [13]. Finally, it predicts the tokens corresponding to a mask token using beam search, similar to the AR approach. To reduce the computational complexity, we propose the segment-level vectorized beam search, which significantly reduces the number of search iterations. By applying this beam search, we can solve the NAR accuracy issue related to the incorrect target length. As an example, for simplicity, let us focus on a case where only one mask is in the sequence, and let \(P_{\mathrm{thres}}\) represent a threshold value ranging from 0 to 1. Initially, we use the gCTC result, obtained without the AR process. However, the resulting text can contain errors due to the conditional independence assumption. We expect these errors can be corrected by the AR process, where we utilize the pre-trained decoder and beam search. To determine which tokens to update using the AR process, we use \(P_{\mathrm{thres}}\) as a filter on the posterior probability. Tokens with probability less than \(P_{\mathrm{thres}}\) are considered less confident and replaced with the mask token. Consecutive mask tokens are merged into a single mask because we will estimate the sequences in the AR process. Finally, we use beam search to estimate the tokens. 
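A minimal Python sketch (ours) of this masking step; the token strings, probabilities, and threshold value are illustrative placeholders.

```python
def apply_masks(tokens, probs, p_thres=0.9, mask="#"):
    """Mask tokens whose gCTC posterior is below p_thres; merge consecutive masks."""
    masked = [t if p >= p_thres else mask for t, p in zip(tokens, probs)]
    merged = []
    for t in masked:
        if t == mask and merged and merged[-1] == mask:
            continue                      # consecutive masks collapse into a single mask
        merged.append(t)
    return merged

apply_masks(["s", "e", "a"], [0.95, 0.4, 0.3])   # -> ["s", "#"]
```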
We iterate the beam search _max_iteration_ times for any audio length, where _max_iteration_ denotes the maximum number of tokens for one mask. The value of _max_iteration_ is highly dependent on the threshold \(P_{\mathrm{thres}}\): a higher value is required when \(P_{\mathrm{thres}}\) is closer to 1 and more tokens are replaced with mask tokens. ### Segment-level Vectorized Beam Search The beam search process explained in the previous section focused on the case where there is a single mask token. However, in practice, multiple mask tokens may exist in a given sequence. To handle multiple masks, we parallelize the decoding of masks by extending the vectorized beam search [23]. The vectorized beam search is an extension of traditional beam search that allows for the calculation of \(B\) beams simultaneously. In an offline scenario, it can also be extended to the parallel computation of \(S\) utterances. In other words, \(S\times B\) hypotheses can be calculated as a single batch at each step. In our work, we treat the value \(S\) as the number of masks to enable parallel computation of all mask tokens. The overview of the proposed segment-level vectorized beam search is described in Algorithm 1. First, we initialize \(S\times B\) hypotheses \(Y_{S,B}\) from the gCTC result and the masked sequence. Initially, there is only one hypothesis per mask, so \(Y_{s,1}\) contains the gCTC result up to the corresponding mask as the hypothesis. The rest of the hypotheses, from \(Y_{s,2}\) to \(Y_{s,B}\), are initialized with dummy hypotheses. Additionally, we store the next token after each mask in an end-of-sequence list, \(E_{S}\). Next, we calculate the probabilities of the next token for each hypothesis and update \(Y_{S,B}\). We then check, for each mask \(s\), whether \(E_{s}\) is predicted as the next token for each of the hypotheses from \(Y_{s,1}\) to \(Y_{s,B}\). We push each ended hypothesis to the list of ended hypotheses, \(F_{s}\). Finally, after iterating _max_iteration_ times, we replace each mask token with the best hypothesis in \(F_{s}\). To simplify the implementation, we apply padding to the missing hypotheses so that the number of hypotheses remains constant, similar to [34]. \begin{table} \begin{tabular}{l l} \hline \hline Masked sequence & after early night\# the yellow lamps would light up here and there the squalid quarter of the \#el\# \\ iteration=1 & after early night\textbf{fall} the yellow lamps would light up here and there the squalid quarter of the \textbf{bro}\#el\textbf{s} \\ iteration=2 & after early nightfall the yellow lamps would light up here and there the squalid quarter of the bro\textbf{th}els \\ Ground truth & after early nightfall the yellow lamps would light up here and there the squalid quarter of the brothels \\ \hline \hline \end{tabular} \end{table} Table 1: Decoding example from the LibriSpeech test-clean set (1089-134686-0002). The target sequence is initially predicted by gCTC and replaced with masks (“#”). Consecutive masks are merged into one mask. Bold tokens (red in the original figure) indicate the tokens newly predicted by the segment-level vectorized beam search at each iteration. We set _max_iteration_ to \(5\); however, we only show iterations 1 and 2 since there is no difference from the third iteration onward. 
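Before the full pseudocode in Algorithm 1, the following is a minimal sketch of the hypothesis initialization and padding described above. It reuses the segments returned by the masking sketch earlier; per-hypothesis scores and the decoder cache are omitted for brevity:

```python
import torch

def init_hypotheses(tokens, segments, bos_id, beam_size=10, pad_id=0):
    """Build the S x B hypothesis batch for segment-level beam search.

    For each mask s, slot b=0 holds <sos> plus the gCTC tokens preceding
    that mask (the raw gCTC result serves as left context, even where it
    may be wrong); the remaining B-1 slots are dummy (padded) hypotheses
    so the batch keeps a fixed (S*B, L) shape.
    """
    prefixes = [[bos_id] + list(tokens[:seg[0]]) for seg in segments]
    max_len = max(len(p) for p in prefixes)
    batch = torch.full((len(segments) * beam_size, max_len), pad_id, dtype=torch.long)
    for s, p in enumerate(prefixes):
        batch[s * beam_size, :len(p)] = torch.tensor(p)  # real hypothesis in slot 0
    return batch
```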
```
Beam size: B; number of masks: S
Encoder output: X
Initialize decoder cache: C
Initialize an S-length list for ended hypotheses: F_S
Initialize hypotheses Y_{S,B} and end-of-sequence tokens E_S
for i = 1 to max_iteration do
    Calculate probability prob = Decoder(Y_{S,B}, X, C)
    Update hypotheses Y_{S,B} by the top-k method
    for s = 1 to S do
        for b = 1 to B do
            if the last token of Y_{s,b} is E_s then
                F_s.push(Y_{s,b})
            end if
        end for
    end for
end for
for s = 1 to S do
    Replace the s-th mask in the masked sequence with the best hypothesis in F_s
end for
```
**Algorithm 1** Segment-level Vectorized Beam Search Table 1 shows an example of the decoding process for a sample in the LibriSpeech _test-clean_ set. After calculating the gCTC result and merging the consecutive masks, there are three masks in this example. Since the beam size \(B\) is set to \(10\), there are \(30\) hypotheses. In the first iteration, we correctly estimated the first and third mask tokens. Since we used BPE tokens for this model, the tokens fall, bro, and s (bold in Table 1) were estimated in a single iteration. In the second iteration, we successfully predicted the second mask, which has two corresponding tokens: bro and th. This example demonstrates that it is possible to predict multiple tokens from a single mask token, enabling us to handle the accuracy issue in NAR due to an incorrect target length. Moreover, the overall number of beam search iterations is significantly lower than in AR thanks to the segment-level vectorized beam search, indicating that PAR is effective in avoiding the AR issues. We determine whether to end the beam search by observing the token following each mask token. For example, in the example in Fig. 1(c), when the first mask detects the space token _ and the second mask detects b, the hypothesis is terminated and pruned from the overall search. ## 5 Experiment ### Experimental setup #### 5.1.1 Models To test the effectiveness of PAR, we compare both its accuracy and inference speed to those of AR and NAR decoding. As PAR decoding can be seen as an optimization of AR decoding, PAR can be tested using pre-trained AR models. As a result, for both AR and PAR inference, we used E-Branchformer models [6] that were pre-trained on the following datasets and are publicly available on the HuggingFace hub: AISHELL-1\({}^{1}\), JSUT\({}^{2}\), LibriSpeech-100h\({}^{3}\), LibriSpeech-960h\({}^{4}\), and TED-LIUM2\({}^{5}\). Note that we used two types of models trained with LibriSpeech: a model trained with the entire dataset of \(960\) hours of audio (LS-960), and another model trained with a subset of \(100\) hours of audio (LS-100). The model trained with the LS-100 dataset is considered less accurate than the one trained with LS-960, so we used LS-100 to investigate the effect of gCTC accuracy on PAR decoding. For comparison against NAR decoding, we trained models using the Conformer [5] architecture on LS-100. We prepared AR and NAR models with approximately 30 million parameters for a fair evaluation. We used the Conformer model to test whether there would be any differences caused by the encoder architecture. All the models were trained and evaluated using the ESPnet toolkit [35]. 
Footnote 1: [https://huggingface.co/pyf98/aishell_e_branchformer](https://huggingface.co/pyf98/aishell_e_branchformer) Footnote 2: [https://huggingface.co/pyf98/jsut_e_branchformer](https://huggingface.co/pyf98/jsut_e_branchformer) Footnote 3: [https://huggingface.co/pyf98/librispeech_100_e_branchformer](https://huggingface.co/pyf98/librispeech_100_e_branchformer) Footnote 4: [https://huggingface.co/asapp/e_branchformer_librispeech](https://huggingface.co/asapp/e_branchformer_librispeech) Footnote 5: [https://huggingface.co/pyf98/tedlium2_e_branchformer](https://huggingface.co/pyf98/tedlium2_e_branchformer) #### 5.1.2 Decoding setup For AR decoding, we set the beam size to \(10\) and the CTC weight to \(0.3\), and employed the vectorized beam search [23]. For PAR decoding, the beam size is also set to \(10\); however, we do not compute the CTC prefix score for PAR inference, so the CTC weight is set to \(0\). We also set \(P_{\mathrm{thres}}\) to \(0.95\) and _max_iteration_ to \(5\) for PAR decoding. The decoding speed was measured using a single RTX 2080 Ti. #### 5.1.3 Evaluation Datasets We used several datasets of different languages for evaluation, as described in Table 2. For this study, we used LibriSpeech and TED-LIUM2 as English datasets, JSUT as a Japanese dataset, and AISHELL-1 as a Mandarin dataset. For LibriSpeech, we evaluated the dev and test sets, each containing _clean_ and _other_ subsets. Each of the five evaluation datasets was evaluated using the AR/PAR model trained on the corresponding dataset. For example, the E-Branchformer model pre-trained on the AISHELL-1 dataset was evaluated using the AISHELL-1 evaluation set. #### 5.1.4 Metrics We evaluated accuracy with the word error rate (WER) or character error rate (CER), according to the dataset. We used the real-time factor (RTF) to measure the inference speed. The maximum GPU memory usage during inference was also measured to observe the effect of mask-parallel decoding. For the RTF and memory usage, we took the mean over all evaluation sets. The speedup column compares the RTF of PAR with that of AR. ### Results #### 5.2.1 AR and PAR The evaluation results comparing AR and PAR are shown in Table 3. Focusing on accuracy, it is evident that PAR shows a WER or CER similar to AR for all models and evaluation datasets. The beam search process can properly update the tokens that gCTC cannot accurately estimate. While the accuracy is slightly degraded, this is due to the accuracy of the gCTC result; we explain this in detail in Section 5.3.1. In terms of the RTF, we achieved approximately \(10\) times faster inference compared to AR. In particular, since the inference speed does not depend strongly on the audio length, the standard deviation is approximately \(17.5\%\) of that of AR decoding. This advantage is more pronounced for longer audio; on the _test-clean_ set, we observed a maximum speedup of \(89.7\times\) for a \(29\)-second audio clip. Note that this speedup depends on several factors, such as the number of masks and the input audio length. Fig. 3 shows the proportion of time spent on the encoder and decoder process and the inference time for each audio length, evaluated on the LS-960 dataset. Compared to Fig. 2(a), the decoder-to-encoder ratio is significantly reduced. Furthermore, we can observe that the increase in inference time with respect to audio length is small, similar to that of NAR. 
This is because, in this work, we set _max_iteration_ to \(5\), which means that the number of beam search iterations is limited to a maximum of 5. Moreover, if there are no masks, the inference result is simply the gCTC result, and we do not run any decoder computation. Therefore, the AR process does not incur a high computational cost for long audio inputs. We can confirm that the inference speed of PAR depends on the computation speed of the model itself, such as the encoder or decoder, considering that the inference time increases only slightly with the audio length while the number of search iterations is fixed. The computation speed of the encoder or decoder depends on the audio length, but its impact on the overall inference speed is small. We investigated the correlation between the WER and RTF, as shown in Fig. 4, by changing the beam size from \(1\) to \(20\), and measured the WER and RTF using the _test-clean_ set from LibriSpeech on the E-Branchformer-based pre-trained models for the LS-100 and LS-960 datasets. Comparing the AR models, the RTF differs greatly because the model sizes are different; however, the RTF of the PAR method remains almost the same. This is because the inference time of PAR depends on the computation time of the model itself. Therefore, the PAR curves have a significantly lower RTF compared to the AR curves, and the change in RTF is negligible as the beam size changes, hence the narrower width. Although the RTF remains constant, we still see differences in WER for the PAR models, especially for the pre-trained model of the LS-100 dataset compared to the model for the LS-960 dataset. This is due to the accuracy degradation problem we discuss in Section 5.3.1. #### 5.2.2 NAR and PAR Comparing NAR and PAR, we can see that PAR is not as fast as NAR, as shown in Table 4. The number of decoder iterations for NAR is \(10\), which is larger than _max_iteration_ for PAR, yet NAR still performs faster than PAR. One reason for this is that the size of the decoder input is different. In this work, we added dummy hypotheses, as mentioned in Section 4.2. As a result, the batch size of the decoder input for PAR decoding is \(S\times B\), whereas the batch size for NAR decoding is \(S\). Therefore, the computation time per decoder call appears to be shorter for Mask-CTC NAR, and the resulting decoder processing time becomes faster. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & Language & Hours & Token & Metric & Evaluation Sets \\ \hline AISHELL-1 [36] & zh & 170 & char & CER & dev / test \\ JSUT [37] & ja & 10 & char & CER & dev / test (\(\dagger\)) \\ LS-100 [38] & en & 100 & BPE & WER & dev-\{clean,other\} / test-\{clean,other\} \\ LS-960 [38] & en & 960 & BPE & WER & dev-\{clean,other\} / test-\{clean,other\} \\ TED-LIUM2 [39] & en & 210 & BPE & WER & dev / test \\ \hline \hline \end{tabular} \end{table} Table 2: Dataset descriptions. The order of evaluation sets corresponds to the error results in Table 3. \({}^{\dagger}\)Data split based on ESPnet [35]. 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{AR} & \multicolumn{4}{c}{PAR} \\ \cline{2-4} \cline{5-8} Dataset & RTF (\(\downarrow\)) & Error [\%] (\(\downarrow\)) & Memory Usage [MB] & RTF (\(\downarrow\)) & Error [\%] (\(\downarrow\)) & Memory Usage [MB] & Speedup (\(\uparrow\)) \\ \hline AISHELL-1 & 0.027 (0.006) & 4.6 / 5.0 & 176.3 (4.2) & 0.010 (0.006) & 4.6 / 4.9 & 176.3 (5.9) & 2.70\(\times\) \\ JSUT & 0.129 (0.021) & 11.8 / 13.2 & 208.4 (6.2) & 0.018 (0.008) & 12.0 / 13.3 & 199.4 (6.4) & 7.17\(\times\) \\ LS-100 & 0.111 (0.038) & 6.2 / 16.8 / 6.4 / 17.1 & 197.5 (25.3) & 0.009 (0.006) & 6.5 / 17.2 / 6.7 / 17.7 & 210.6 (81.6) & 12.33\(\times\) \\ LS-960 & 0.110 (0.039) & 2.2 / 2.5 / 2.5 / 5.2 & 528.4 (44.8) & 0.008 (0.006) & 2.2 / 2.5 / 2.5 / 5.6 & 550.5 (132.6) & 13.75\(\times\) \\ TED-LIUM2 & 0.182 (0.060) & 7.3 / 7.1 & 281.1 (47.7) & 0.012 (0.008) & 7.6 / 7.3 & 474.8 (298.9) & 15.17\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of WER, CER, RTF, and memory usage for each model. For RTF and memory usage, we report the mean with the standard deviation in brackets. The symbols (\(\downarrow\)) and (\(\uparrow\)) indicate that a lower or higher number is preferable, respectively. Figure 3: Average inference time and proportion of time spent on the encoder and decoder computation during PAR decoding. The decoder’s share is greatly reduced compared to AR. In terms of accuracy, we can see that PAR outperforms NAR. The improvement in performance is similar to that of the E-Branchformer models in both RTF and WER, and it was confirmed that there was no difference in the improvement due to architecture differences. From these results, it is evident that applying the PAR method can solve the built-in accuracy issue of the NAR method mentioned in Section 3.2. In particular, we can confirm that PAR attains a speed between AR and NAR while achieving accuracy similar to AR. Therefore, a new trade-off balance that is not present in AR and NAR has been realized. ### Limitations #### 5.3.1 Accuracy with PAR The accuracy of PAR can be degraded if the gCTC result is not accurate. If the result of gCTC is incorrect with high confidence, we cannot use the AR process to refine the gCTC result. From the comparison of the LS-100 and LS-960 models in Table 3, we can see that more accuracy degradation occurs with the LS-100 model. The accuracy may also degrade at a higher \(P_{\mathrm{thres}}\) because the number of target tokens per mask may exceed _max_iteration_. Since we stop the beam search after _max_iteration_ iterations, if the number of target tokens exceeds _max_iteration_, we cannot predict the entire sequence for one mask. Therefore, the accuracy may be degraded if \(P_{\mathrm{thres}}\) is close to \(1.0\). Fig. 5 describes the relationship between the WER and \(P_{\mathrm{thres}}\) for different values of _max_iteration_. Using a _max_iteration_ of \(5\), we observe that the accuracy degrades at higher \(P_{\mathrm{thres}}\) levels. This issue is caused by the lack of beam search iterations, so to solve this problem, we need to increase _max_iteration_. In Fig. 5, we increased _max_iteration_ to \(8\) and observed a more accurate result. #### 5.3.2 Memory usage It is important to note that the standard deviation of the memory usage in Table 3 increases greatly compared to AR. 
Since the decoder process in PAR is computed simultaneously for all masks, we need more GPU memory if there are many masks. Therefore, if the number of masks increases due to long audio inputs, a high \(P_{\mathrm{thres}}\), or a low accuracy of the gCTC result, we may get an out-of-memory error as the GPU memory is exceeded during inference. Considering that the inference of the masks does not depend on each other, it is possible to alleviate this issue by using multiple GPUs to perform the inference. #### 5.3.3 Segment-level Vectorized Beam Search If a masked sequence contains multiple masks, the predicted tokens for the second or later masks may not be accurate due to the inaccuracy of the tokens predicted for the first mask. For example, in Fig. 1(c), the masked sequence is se#_cuc#ber, where we have two masks at the positions of the tokens a and um. Since we use the gCTC result to predict the masked tokens, the decoder input for estimating the mask for the um part becomes see_cuc, which is incorrect, instead of the correct sea_cuc. This incorrect decoder input might impact the accuracy of the prediction. However, considering that even AR decoding may also have an incorrect decoder input, we believe PAR can find the same hypothesis as AR by utilizing the beam search. ## 6 Conclusion In this work, we propose a partially autoregressive framework to obtain a new trade-off balance between accuracy and latency. With our novel architecture design compensating for the weaknesses inherent in NAR and AR, PAR is a decoding method that takes advantage of the strengths of these two methods. In our experiments, we observed that the AR model can run inference at NAR-level speeds without sacrificing accuracy. Notably, the LS-960 pre-trained model achieved a \(13.75\times\) speedup with the same WER on the test-clean set. We believe that using PAR can significantly improve the usability of the traditional hybrid CTC/attention model. Our framework has limitations, such as the memory usage issue, that restrict the scenarios where PAR can be applied. In future work, we plan to extend our framework to edge devices with limited computational resources. \begin{table} \begin{tabular}{l l r r r} \hline \hline & Model & RTF (\(\downarrow\)) & WER [\%] (\(\downarrow\)) & Speedup (\(\uparrow\)) \\ \hline AR & CTC/Attention & 0.198 (0.080) & 6.7 / 18.3 / 7.0 / 18.6 & 1.00\(\times\) \\ NAR & CTC & 0.005 (0.004) & 7.7 / 21.0 / 7.9 / 21.4 & 39.60\(\times\) \\ & Mask-CTC & 0.008 (0.005) & 7.1 / 20.8 / 7.5 / 21.0 & 24.75\(\times\) \\ PAR & CTC/Attention & 0.014 (0.009) & 6.2 / 18.5 / 6.6 / 18.7 & 14.14\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with NAR. The speedup column shows the speedup relative to AR decoding. Figure 4: The comparison of WER and RTF measured using the AR and PAR methods. We used the models trained with the LS-100 and LS-960 datasets and measured by changing the beam size between \(1\) and \(20\). Figure 5: The relationship between the WER and \(P_{\mathrm{thres}}\). We evaluated by changing \(P_{\mathrm{thres}}\) from \(0.95\) to \(0.999\). We used the E-Branchformer-based pre-trained model for LS-960.
2301.00813
A Survey on Protein Representation Learning: Retrospect and Prospect
Proteins are fundamental biological entities that play a key role in life activities. The amino acid sequences of proteins can be folded into stable 3D structures in the real physicochemical world, forming a special kind of sequence-structure data. With the development of Artificial Intelligence (AI) techniques, Protein Representation Learning (PRL) has recently emerged as a promising research topic for extracting informative knowledge from massive protein sequences or structures. To pave the way for AI researchers with little bioinformatics background, we present a timely and comprehensive review of PRL formulations and existing PRL methods from the perspective of model architectures, pretext tasks, and downstream applications. We first briefly introduce the motivations for protein representation learning and formulate it in a general and unified framework. Next, we divide existing PRL methods into three main categories: sequence-based, structure-based, and sequence-structure co-modeling. Finally, we discuss some technical challenges and potential directions for improving protein representation learning. The latest advances in PRL methods are summarized in a GitHub repository https://github.com/LirongWu/awesome-protein-representation-learning.
Lirong Wu, Yufei Huang, Haitao Lin, Stan Z. Li
2022-12-31T04:01:16Z
http://arxiv.org/abs/2301.00813v1
# A Survey on Protein Representation Learning: Retrospect and Prospect ###### Abstract Proteins are fundamental biological entities that play a key role in life activities. The amino acid sequences of proteins can be folded into stable 3D structures in the real physicochemical world, forming a special kind of _sequence-structure data_. With the development of Artificial Intelligence (AI) techniques, _Protein Representation Learning_ (PRL) has recently emerged as a promising research topic for extracting informative knowledge from massive protein sequences or structures. To pave the way for AI researchers with little bioinformatics background, we present a timely and comprehensive review of PRL formulations and existing PRL methods from the perspective of model architectures, pretext tasks, and downstream applications. We first briefly introduce the motivations for protein representation learning and formulate it in a general and unified framework. Next, we divide existing PRL methods into three main categories: sequence-based, structure-based, and sequence-structure co-modeling. Finally, we discuss some technical challenges and potential directions for improving protein representation learning. The latest advances in PRL methods are summarized in a GitHub repository [https://github.com/LirongWu/awesome-protein-representation-learning](https://github.com/LirongWu/awesome-protein-representation-learning). ## 1 Introduction Proteins perform specific biological functions that are essential for all living organisms and therefore play a key role when investigating the most fundamental questions in the life sciences. Proteins are composed of one or several chains of amino acids that fold into a stable 3D structure to enable various biological functionalities. Therefore, understanding, predicting, and designing proteins for biological processes is critical for medical, pharmaceutical, and genetic research. Previous approaches to protein modeling are mostly driven by biological or physical priors, and they explore complex sequence-structure-function relationships through energy minimization [11, 17], dynamics simulations [14, 18], etc. With the development of artificial intelligence and low-cost sequencing technologies, data-driven _Protein Representation Learning_ (PRL) [15, 16, 17, 18, 19] has made remarkable progress due to its superior performance in modeling complex nonlinear relationships. The primary goal of protein representation learning is to extract transferable knowledge from protein data with well-designed model architectures and pretext tasks, and then generalize the learned knowledge to various protein-related downstream applications, ranging from structure prediction to sequence design. Despite this great progress, it is still tricky for AI researchers without a bioinformatics background to get started with protein representation learning, and one obstacle is the vast amount of physicochemical knowledge behind proteins. Therefore, a survey on PRL methods that is friendly to the AI community is urgently needed. Existing surveys related to PRL [12, 13, 14, 15] are mainly developed from the perspective of biological applications, but do not go deeper into other important aspects, such as model architectures and pretext tasks. Overall, our contributions can be summarized as follows: **(1)**_Comprehensive review_. Our survey provides a comprehensive and up-to-date review of existing PRL methods from the perspective of model architectures and pretext tasks. 
**(2)**_New taxonomy._ We divide existing PRL methods into three categories: sequence-based, structure-based, and sequence-structure co-modeling. **(3)**_Detailed Implementations._ We summarize the paper lists and open-source codes in a public GitHub repository, setting the stage for the development of future works. **(4)**_Future directions_. We point out the technical limitations of current research and discuss several promising directions. ## 2 Notation and Problem Statement The sequence of amino acids can be folded into a stable 3D structure, forming a special kind of _sequence-structure data_, which determines its properties and functions. Therefore, we can model each protein as a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{X},\mathcal{F})\), where \(\mathcal{V}\) is the _ordered set_ of \(N\) nodes in the graph representing amino acid residues and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges that connects the nodes. Each node \(u\in\mathcal{V}\) in graph \(\mathcal{G}\) can be attributed with a scalar-vector tuple \(\mathbf{x}_{u}=(s_{u},V_{u})\), where \(s_{u}\in\mathbb{R}^{O}\) and \(V_{u}\in\mathbb{R}^{3\times P}\). Each edge \(e\in\mathcal{E}\) can be attributed with a scalar-vector tuple \(\mathbf{f}_{e}=(s_{e},V_{e})\), where \(s_{e}\in\mathbb{R}^{T}\) and \(V_{e}\in\mathbb{R}^{3\times D}\). Given a model architecture \(f_{\theta}(\cdot)\) and a set of \(K\) losses of pretext tasks \(\{\mathcal{L}_{pre}^{(1)}(\theta,\eta_{1}),\mathcal{L}_{pre}^{(2)}(\theta, \eta_{2}),\cdots,\mathcal{L}_{pre}^{(K)}(\theta,\eta_{K})\}\) with projection heads \(\{g_{\eta_{k}}(\cdot)\}_{k=1}^{K}\), _Protein Representation Learning_ (PRL) usually works in a two-stage manner: (1) pre-training the model \(f_{\theta}(\cdot)\) with pretext tasks; and (2) fine-tuning the pre-trained model \(f_{\theta_{init}}(\cdot)\) with a projection head \(g_{\omega}(\cdot)\) under the supervision of a specific downstream task \(\mathcal{L}_{task}(\theta,\omega)\). The learning objective can be formulated as \[\theta^{*},\omega^{*}= \arg\min_{(\theta,\omega)}\mathcal{L}_{task}(\theta_{init}, \omega), \tag{1}\] \[\text{s.t.}\ \ \theta_{init},\{\eta_{k}^{*}\}_{k=1}^{K} =\arg\min_{\theta,\{\eta_{k}\}_{k=1}^{K}}\sum_{k=1}^{K}\lambda_{k} \mathcal{L}_{pre}^{(k)}(\theta,\eta_{k})\] where \(\{\lambda_{k}\}_{k=1}^{K}\) are trade-off task hyperparameters. A high-level overview of the PRL framework is shown in Fig. 1. In practice, if we set \(K=1\) and \(\omega=\eta_{1}\), i.e., \(\mathcal{L}_{pre}^{(1)}(\theta,\eta_{1})=\mathcal{L}_{task}(\theta,\omega)\), it is equivalent to learning task-specific representations directly under downstream supervision, which in this survey can be considered as a special case of Eq. (1). In this survey, we mainly focus on the model architecture \(f_{\theta}(\cdot)\) and pretext tasks \(\{\mathcal{L}_{pre}^{(k)}(\theta,\eta_{k})\}_{k=1}^{K}\) for protein representation learning, and defer the discussion on downstream applications until Sec. 5. A high-level overview of this survey with some representative examples is shown in Fig. 2. ## 3 Model Architectures In this section, we summarize some commonly used model architectures for learning protein sequences or structures. ### Sequence-based Encoder The sequence encoder takes as input \((\mathcal{V},\mathcal{X})\) and then aims to capture the dependencies between amino acids. 
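For concreteness, the following is a minimal sketch of such a sequence encoder in PyTorch. It is purely illustrative (the vocabulary size, dimensions, and learned positional embedding are our own assumptions) and does not correspond to any specific published model.

```python
import torch
import torch.nn as nn

class ProteinSequenceEncoder(nn.Module):
    """Minimal Transformer encoder over amino-acid tokens (illustrative)."""

    def __init__(self, vocab_size=25, d_model=128, nhead=4, num_layers=2, max_len=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # residue-type embedding
        self.pos = nn.Embedding(max_len, d_model)        # learned positional embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, tokens):                           # tokens: (B, N) residue indices
        positions = torch.arange(tokens.size(1), device=tokens.device)
        h = self.embed(tokens) + self.pos(positions)[None]  # (B, N, d_model)
        return self.encoder(h)                           # per-residue representations
```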
[22] treats protein sequences as a special "biological language" and then establishes an analogy between such "biological language" and natural (textual) language. Inspired by this, many classical model architectures developed for natural language processing can be directly extended to handle protein sequences [1]. Depending on whether a single sequence or multiple sequences are to be encoded, there are a variety of different sequence-based encoders. #### 3.1.1 Single Sequences The commonly used sequence encoders for modeling _single sequences_ include the Variational Auto-Encoder (VAE) [23, 24], Recurrent Neural Networks (RNNs) [10], Long Short-Term Memory (LSTM) [15], BERT [14], and the Transformer [25]. Based on the vanilla Transformer, [26] proposes a novel geometry-inspired transformer (Geoformer) to further distill the structural and physical pairwise relationships between amino acids into the learned protein representation. If we do not consider the ordering of amino acids in the sequences, we can also directly apply Convolutional Neural Networks (CNNs) [11] or ResNet [15] to capture the local dependencies between adjacent amino acids. #### 3.1.2 MSA Sequences A long-standing practice in computational biology is to make inferences from a family of evolutionarily related sequences [26, 27, 28, 10]. Therefore, several _multiple sequences_ encoders have been proposed to capture co-evolutionary information by taking as input a set of sequences in the form of a multiple sequence alignment (MSA). For example, the MSA Transformer [15] extends the self-attention mechanism to the MSA setting, which interleaves self-attention across rows and columns to capture dependencies between amino acids and between sequences. As a crucial component of AlphaFold2, Evoformer [16] alternately updates _MSA_ and _Pair_ representations in each block, which encode co-evolutionary information in sequences and relations between residues, respectively. ### Structure-based Encoder Despite the effectiveness of sequence-based encoders, the power of pre-training with protein structures has rarely been explored, even though protein structures are known to be determinants of protein functions. To better utilize this critical structural information, a large number of structure-based encoders have been proposed to model structural information, which can be mainly divided into three categories: feature map-based methods, message-passing GNNs, and geometric GNNs. #### 3.2.1 Feature map-based Methods The use of deep learning to model protein 3D structures can be traced back to a decade ago [17, 18]. Early methods directly extracted several hand-crafted _feature maps_ from protein structures and then applied 3D CNNs to model the geometric information of proteins [1, 19, 20]. Later work extended 3D CNNs to spherical convolutions for identifying interaction patterns on protein surfaces [23, 18]. #### 3.2.2 Message-passing GNNs To further capture the geometric relationships and biomedical interactions between amino acids, it has been proposed to first construct a graph from the extracted feature maps by thresholding or \(k\) Nearest Neighbors (\(k\)NN) [24, 25]. Then, many existing message-passing Graph Neural Networks (GNNs) can be directly applied to model protein structures, including the Graph Convolutional Network (GCN) [13], Graph Isomorphism Network (GIN) [26], and GraphSAGE [12]. Figure 1: A general framework for protein representation learning. 
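To make the graph-construction step concrete, here is a minimal sketch assuming C-alpha coordinates are given as an (N, 3) array; the choice of k and the distance edge feature are illustrative:

```python
import numpy as np

def knn_residue_graph(ca_coords: np.ndarray, k: int = 10):
    """Build a kNN graph over residues from C-alpha coordinates (N, 3).

    Returns an edge list of shape (2, N*k) and pairwise distances as
    simple scalar edge features. Assumes k < N.
    """
    n = ca_coords.shape[0]
    d = np.linalg.norm(ca_coords[:, None] - ca_coords[None, :], axis=-1)  # (N, N)
    np.fill_diagonal(d, np.inf)             # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]     # k nearest neighbours per residue
    src = np.repeat(np.arange(n), k)
    dst = nbrs.reshape(-1)
    edge_index = np.stack([src, dst])       # (2, N*k)
    edge_dist = d[src, dst]                 # distances as edge scalars
    return edge_index, edge_dist
```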
However, the edges in the protein graph may have some key properties, such as dihedral angles and directions, which determine the biological function of proteins. With this in mind, several structure-based encoders have been proposed to simultaneously leverage the node and edge features of the protein graph. For example, [11] proposes the IE convolution (IEConv) to simultaneously capture the primary, secondary, and tertiary structures of proteins by incorporating intrinsic and extrinsic distances between nodes. Besides, [11] adopts a similar architecture to IEConv, but introduces seven additional edge features to efficiently describe the relative position and orientation of neighboring nodes. Furthermore, GearNet [23] proposes a simple structure encoder, which encodes spatial information by adding different types of sequential or structural edges and then performs both node-level and edge-level message passing simultaneously. #### 3.2.3 Geometric GNNs The above message-passing GNNs incorporate the 3D geometry of proteins by encoding the vector features \(V_{u}\)/\(V_{e}\) into rotation-invariant scalars \(s_{u}\)/\(s_{e}\). However, reducing this vector information directly to scalars may not fully capture complex geometry. Therefore, geometry-aware neural networks have been proposed to bake 3D rigid transformations into the network operations, leading to SO(3)-invariant and equivariant GNNs. For example, [12] introduces Geometric Vector Perceptrons (GVPs), which replace the standard multi-layer perceptrons (MLPs) in feed-forward layers and operate directly on both scalar and vector features under a global coordinate system. Besides, [10] proposes Geometric Bottleneck Perceptrons (GBPs) to integrate geometric features and capture complex geometric relations in the 3D structure, based on which a new SO(3)-equivariant message-passing neural network is proposed to support a variety of geometric representation learning tasks. To achieve more sensitive geometric awareness in both global transformations and local relations, [13] proposes Directed Weight Perceptrons (DWPs) by extending not only the hidden neurons but also the weights from scalars to 2D/3D vectors, naturally saturating the network with 3D structures in the Euclidean space. ### Sequence-structure Encoder Compared with sequence- and structure-based encoders, less work has focused on the co-encoding of protein sequences and structures. The mainstream model architecture is to extract amino acid representations as node features with a language model and then capture the dependencies between amino acids using a GNN module. For example, [14] introduces DeepFRI, a Graph Convolutional Network (GCN) for predicting protein functions by leveraging sequence representations extracted from a protein language model (LSTM) and protein structures. Besides, LM-GVP [15] is composed of a protein language model (composed of Transformer blocks) and a GVP network, where the protein LM takes protein sequences as input to compute amino acid embeddings and the GVP network is used to make predictions about protein properties on a graph derived from the protein 3D structure. Figure 2: A high-level overview of this survey with representative examples. Moreover, CPAC [You and Shen, 2022] applies the hierarchical RNN and GAT to encode both protein sequences and structures and proposes a cross-interaction module to enforce a learned relationship between the encoded embeddings of the two protein modalities. 
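As a schematic of this co-modeling pattern, the sketch below feeds per-residue embeddings from a sequence encoder into a simple message-passing loop over a residue graph. It is illustrative only (it could, for instance, reuse the ProteinSequenceEncoder and kNN graph sketched above) and does not reproduce any specific published architecture.

```python
import torch
import torch.nn as nn

class SeqStructCoEncoder(nn.Module):
    """Sketch of the common co-modeling pattern: per-residue language-model
    embeddings refined by message passing over a residue graph."""

    def __init__(self, lm, d_model=128, rounds=3):
        super().__init__()
        self.lm = lm                                 # any per-residue sequence encoder
        self.msg = nn.Linear(2 * d_model, d_model)   # edge message function
        self.rounds = rounds

    def forward(self, tokens, edge_index):
        # tokens: (1, N) residue ids; edge_index: LongTensor (2, E)
        h = self.lm(tokens).squeeze(0)               # (N, d) node features from the LM
        src, dst = edge_index
        for _ in range(self.rounds):
            m = self.msg(torch.cat([h[src], h[dst]], dim=-1))  # messages along edges
            agg = torch.zeros_like(h).index_add_(0, dst, m)    # sum messages at receivers
            h = torch.relu(h + agg)                            # residual node update
        return h
```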
## 4 Pretext Task The pretext tasks are designed to extract meaningful representations from massive data by optimizing some well-designed objective functions. In this section, we summarize some commonly used pretext tasks for learning on proteins. ### Sequence-based Pretext Task Many pretext tasks have been proposed for pre-training language models, including Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) [14], which can be naturally extended to pre-train protein sequences. We divide existing sequence-based pretext tasks into two main categories: self-supervised and supervised. **Self-supervised Pretext Task** The self-supervised pretext tasks utilize the training data itself as the supervision signal, without the need for additional annotations. If we consider an amino acid in a sequence as a word in a sentence, we can naturally extend masked language modeling to protein sequences. For example, we can statically or dynamically mask out a single or a set of contiguous amino acids and then predict the masked amino acids from the remaining sequences [13, 14, 15, 16, 17]. Besides, [13] combines adversarial training with MLM and proposes to mask amino acids in a learnable manner. Taking into account the dependence between masked amino acids, Pairwise MLM (PMLM) [12] proposes to model the probability of a pair of masked amino acids instead of predicting the probability of a single amino acid. Besides, Next Amino Acid Prediction (NAP) [16, 15] aims to predict the type of the next amino acid based on a set of given sequence fragments. Different from the above methods, Contrastive Predictive Coding (CPC) [11] applies different augmentation transformations on the input sequence to generate different views, and then maximizes the agreement of two jointly sampled pairs against that of two independently sampled pairs. **Supervised Pretext Task** The supervised pretext tasks use additional labels as auxiliary information to guide the model to learn knowledge relevant to downstream tasks. For example, PLUS [15] devises a protein-specific pretext task, namely Same-Family Prediction (SFP), which trains a model to predict whether a given protein pair belongs to the same protein family. The protein family labels provide weak structural information and help the model learn structurally contextualized representations. Besides, [16] proposes to use HMM profiles derived from MSAs as labels and then takes Profile Prediction as a pretext task to help the model learn information about protein structures. In addition, to leverage the exponentially growing protein sequences that lack costly structural annotations, ProGen [15] trains a language model with conditioning tags that encode various annotations, such as taxonomic, functional, and locational information. ### Structure-based Pretext Task Despite the great progress in the design of structure-based encoders and graph-based pretext tasks [14, 15, 16], few efforts have focused on the structure-based pre-training of proteins. Existing structure-based pretext tasks for proteins can be mainly classified into two branches: contrastive and predictive methods. **Contrastive Pretext Task** The primary goal of contrastive methods is to maximize the agreement between two jointly sampled positive pairs. 
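Concretely, such objectives are typically implemented as an InfoNCE-style loss over a batch of paired views; a minimal sketch, with the temperature value as an illustrative choice:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style loss between two views of the same proteins.

    z1, z2: (B, d) representations of two sub-structures, one pair per
    protein. Diagonal pairs are positives; off-diagonal pairs are negatives.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                         # (B, B) scaled similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```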
For example, Multiview Contrast [13] proposes to randomly sample two sub-structures from each protein, encode them into two representations, and finally maximize the similarity between representations from the same protein while minimizing the similarity between representations from different proteins. Besides, [16] adopts almost the same architecture as Multiview Contrast, but replaces GearNet with IEConv as the structure encoder. **Predictive Pretext Task** The contrastive methods deal with the _inter-data_ information (data-data pairs). In contrast, the predictive methods aim to self-generate informative labels from the data as supervision and handle the _data-label_ relationships. Categorized by the different types of pseudo labels, the predictive methods have different designs that can capture different levels of structural protein information. For example, [10] proposes two predictive tasks, namely _Distance Prediction_ and _Angle Prediction_, which take hidden representations of residues as input and aim to predict the relative distance between pairwise residues and the angle between two edges, respectively, which helps to learn structure-aware protein representations. Furthermore, [13] proposes _Residue Type Prediction_ and _Dihedral Prediction_ based on geometric or biochemical properties. Specifically, _Residue Type Prediction_ randomly masks the node features of some residues and then lets the structure-based encoders predict these masked residue types. Instead, _Dihedral Prediction_ constructs a learning objective by predicting the dihedral angle between three consecutive edges. Besides, [14] proposes graph completion (GraphComp), which takes as input a protein graph with partially masked residues and then makes predictions for those masked tokens. ### Sequence-structure Pretext Task Most of the existing methods design pretext tasks for a single modality but ignore the dependencies between sequences and structures. If we can design pretext tasks based on both protein sequences and structures, they should capture richer information than those using single-modality data. In practice, there is no clear boundary between pretext tasks and downstream tasks. For example, AlphaFold2 [16] takes full-atomic structure prediction as a downstream task. However, if we are concerned with protein property prediction, structure prediction can also be considered as a pretext task that enables the learned sequence representations to contain sufficient structural information. It was found by [14] that the representations from AlphaFold2's Evoformer work well on various protein-related downstream tasks, including fold classification, stability prediction, etc. Moreover, [23] proposes a novel pre-training pretext task, namely Masked Inverse Folding (MIF), which trains a model to reconstruct the original amino acids conditioned on the corrupted sequence and the backbone structure. 
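The masking-based objectives discussed in this section (masked language modeling, residue type prediction, graph completion, masked inverse folding) share a common skeleton: corrupt part of the input and reconstruct it. A minimal sketch, assuming a per-residue encoder and a linear prediction head; the mask id and masking ratio are illustrative:

```python
import torch
import torch.nn.functional as F

def masked_residue_loss(encoder, head, tokens, mask_id=24, mask_ratio=0.15):
    """Masked-residue objective sketch: corrupt a random fraction of
    residues and predict their original types from the encoder output."""
    corrupted = tokens.clone()
    mask = torch.rand_like(tokens, dtype=torch.float) < mask_ratio  # (B, N) bool
    corrupted[mask] = mask_id                  # replace with a [MASK] token id
    h = encoder(corrupted)                     # (B, N, d) per-residue features
    logits = head(h)                           # (B, N, vocab_size)
    return F.cross_entropy(logits[mask], tokens[mask])
```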
## 5 Downstream Tasks (Applications) \begin{table} \begin{tabular}{l l l c c} \hline \hline **Method** & **Category** & **Architecture** & **Pretext Task** & **Year** \\ \hline Bio2Vec-CNN [23] & Sequence-based & CNN & - & 2019 \\ \hline TAPE [10] & Sequence-based & ResNet, LSTM, Transformer & Masked Language Modeling, Next Amino Acid Prediction & 2019 \\ \hline UniRep [1] & Sequence-based & Multiplicative LSTM & Next Amino Acid Prediction & 2019 \\ \hline TripletProt [24] & Sequence-based & Siamese Networks & Contrastive Predictive Coding & 2020 \\ \hline PLP-CNN [25] & Sequence-based & CNN & - & 2020 \\ \hline CPCProt [11] & Sequence-based & GRU, LSTM & Contrastive Predictive Coding & 2020 \\ \hline MuPIPR [12] & Sequence-based & GRU, LSTM & Next Amino Acid Prediction & 2020 \\ \hline ProtTrans [13] & Sequence-based & Transformer, BERT, XLNet & Masked Language Modeling & 2020 \\ \hline DMPfold [14] & Sequence-based & GRU, ResNet & - & 2020 \\ \hline Profile Prediction [26] & Sequence-based & Transformer & HMM Profile Prediction & 2020 \\ \hline RoBERTa [27] & Sequence-based & Transformer & Masked Language Modeling & 2020 \\ \hline UDSMProt [28] & Sequence-based & LSTM & Next Amino Acid Prediction & 2020 \\ \hline ESM-1b [15] & Sequence-based & Transformer & Masked Language Modeling & 2021 \\ \hline PMLM [16] & Sequence-based & Transformer & Pairwise Masked Language Modeling & 2021 \\ \hline MSA Transformer [10] & Sequence-based & MSA Transformer & Masked Language Modeling & 2021 \\ \hline ProteinLM [10] & Sequence-based & BERT & Masked Language Modeling & 2021 \\ \hline PLUS [12] & Sequence-based & Bidirectional RNN & Masked Language Modeling, Same-Family Prediction & 2021 \\ \hline Adversarial MLM [12] & Sequence-based & Transformer & Masked Language Modeling, Adversarial Training & 2021 \\ \hline ProteinBERT [17] & Sequence-based & Transformer & Masked Language Modeling & 2021 \\ \hline CARP [23] & Sequence-based & BERT & Masked Language Modeling & 2022 \\ \hline \hline 3DCNN [1] & Structure-based & 3DCNN & - & 2018 \\ \hline IEConv [1] & Structure-based & IEConv & - & 2020 \\ \hline GVP-GNN [18] & Structure-based & GVP & - & 2020 \\ \hline GraphMS [19] & Structure-based & GCN & Multiview Contrast & 2021 \\ \hline DL-MSFM [1] & Structure-based & GCN & - & 2021 \\ \hline PG-GNN [10] & Structure-based & PG-GNN & - & 2021 \\ \hline CRL [1] & Structure-based & IEConv & Multiview Contrast & 2022 \\ \hline DW-GNN [11] & Structure-based & DWP & - & 2022 \\ \hline GBPNet [1] & Structure-based & GBP & - & 2022 \\ \hline GearNet [19] & Structure-based & GearNet & Multiview Contrast, Residue Type Prediction, Distance Prediction, Angle Prediction, Dihedral Prediction & 2022 \\ \hline ATOMRefine [Wu and Cheng, 2022] & Structure-based & SE(3) Transformer & - & 2022 \\ \hline \hline GraphCPI [18] & Co-Modeling & CNN, GNN & - & 2019 \\ \hline MT-ST [23] & Co-Modeling & Transformer, GVP & - & 2021 \\ \hline AlphaFold2 [19] & Co-Modeling & Evoformer & Masked Language Modeling, Full-atomic Structure Prediction & 2021 \\ 
\hline DeepFRI [17] & Co-Modeling & LSTM, GCN & - & 2021 \\ \hline HRSS [10] & Co-Modeling & SE(3) Transformer & Masked Language Modeling & 2021 \\ \hline GraSR [10] & Co-Modeling & LSTM, GCN & Momentum Contrast & 2022 \\ \hline CPAC [26] & Co-Modeling & Hierarchical RNN, GAT & Masked Language Modeling, Graph Completion & 2022 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of representative protein representation learning methods. In the above, we have presented a variety of commonly used model architectures and pretext tasks for protein representation learning, based on which we summarize the surveyed works in Table 1, listing their categories, model architectures, pretext tasks, and publication years. In this section, we divide existing downstream tasks for protein representation learning into the following four main categories: protein property prediction, protein (complex) structure prediction, protein design, and structure-based drug design. It is worth noting that some downstream tasks have labels (i.e., model outputs) that do not change with rigid-body transformations of the inputs (when the inputs admit such transformations, e.g., protein structures). For example, various protein property prediction tasks take a transformable protein structure as input and output a constant prediction, usually modeled as a simple multi-label classification problem or multiple binary classification problems. However, the labels of some downstream tasks change equivariantly with the inputs, and these tasks are getting more and more attention. Typically, the learning objectives of these tasks are structure-related, and they usually place higher requirements on the model architecture, requiring the model to be SE(3)-equivariant. We believe that, from the perspective of protein representation learning, the approaches to different downstream tasks can also learn from each other. ### Protein Property Prediction Protein property prediction aims to regress or classify some important properties from protein sequences or structures that are closely related to biological functions, such as the types of secondary structure, the strength of connections between amino acids, the types of protein folding, fluorescence intensity, protein stability, etc. [14]. Besides, several protein-specific prediction tasks can also be grouped into this category, including the quality evaluation of protein folding [1], predicting the effect of mutations on protein function [13], and predicting protein-protein interactions [20]. ### Protein (Complex) Structure Prediction The primary goal of protein structure prediction is to predict the structural coordinates from a given set of amino acid sequences. Some approaches aim to predict only the backbone coordinates [1, 23], while others focus on the more challenging full-atomic coordinate predictions [15, 24, 25]. On the other hand, protein structure refinement [16, 24] proposes to update a coarse protein structure to generate a more fine-grained structure in an iterative manner. Besides, the task of protein structure inpainting aims to reconstruct the complete protein structure from a partially given sub-structure [13] or distance map [11]. ### Protein Design Deep learning-based protein design has made tremendous progress in recent years, and the major works can be divided into three categories. 
The first one is to pre-train the model with a large number of sequences from the same protein family, and then use it to generate new homologous sequences [24]. The structure-based methods aim to directly generate protein sequences conditioned on a given protein structure [17]. The last and most challenging one is de novo protein design [17, 18, 19], which aims to generate both protein sequences and structures conditioned on taxonomic and keyword tags such as molecular function and cellular component. ### Structure-Based Drug Design Structure-Based Drug Design (SBDD) is a promising direction for fast and cost-efficient compound discovery. Specifically, SBDD designs inhibitors or activators (usually small molecules, i.e., drugs) directly against protein targets of interest, which promises a high success rate and efficiency [23, 17]. In the past two years, a line of auto-regressive methods has been proposed for SBDD [16, 15, 24], which generate molecule atoms one by one conditioned on the given structural context of the protein targets. Recently, there have also been some works based on Denoising Diffusion Probabilistic Models (DDPMs) [16, 17]. Targeting specific protein pockets, the diffusion-based methods generate molecule atoms as a whole from random Gaussian noise. The above methods all depend on a proper protein representation module, especially for the protein structure. An early attempt of deep generative models in this field [16] used a 3D CNN as the protein structure context encoder to obtain meaningful and roto-translation invariant features. With the development of protein structure representation methods, particularly the geometry-aware models, subsequent methods widely use geometric equivariant/invariant networks, such as EGNN [18], GVP [17], and IPA [15], as the backbones. It is worth noting that protein representation models are not only common in various protein structure context encoders, but many generative decoders can also adopt their architectural designs. From this example, we can see that protein representation is a fundamental problem and that many downstream tasks involving proteins can benefit from advances in protein representation research in various aspects, including better embeddings and better model architectures. ## 6 Deep Insights and Future Outlooks ### Deeper Insights On the basis of a detailed review of the model architectures, pretext tasks, and downstream tasks, we would like to provide some deeper insights into protein representation learning. #### 6.1.1 Insight 1: PRL is the core of deep protein modeling With the development of deep learning, deep protein modeling is becoming a popular research topic, and one of its cores is how to learn "meaningful" representations for proteins. This involves three key issues: (1) Feature Extraction: model architectures; (2) Pre-training: pretext tasks; and (3) Application: downstream tasks. An in-depth investigation of the above three key issues is of great importance for the development of more deep protein modeling methods. **Insight 2: Task-level convertibility** Throughout this survey, one of the main points we have emphasized is the convertibility between downstream tasks and pretext tasks. We believe we are the first to explain the role of pretext tasks from this perspective, which seems to have been rarely addressed in previous work. For example, we directly categorize some well-known downstream tasks, such as full-atomic structure prediction, as a specific kind of pretext task. 
The motivation behind such an understanding lies in the fact that the definition of a task is itself a relative concept and that different tasks can help the model extract different aspects of information, which may be complementary to each other. For example, full-atomic structure prediction helps the model capture rich structural information, which is also beneficial for various protein property prediction tasks, such as folding prediction, since it is known that protein structure often determines protein function. This suggests that whether a specific task is a downstream task or a pretext task usually depends on what we are concerned about, and the role of a task may keep changing from application to application. **Insight 3: Data-specific criteria for design selection** It is tricky to discuss the advantages and disadvantages of different methods or designs because the effectiveness of different methods depends heavily on the size, format, and complexity of the data. For example, for simple small-scale data, the Transformer is not necessarily more effective than the traditional LSTM for sequence modeling, while the situation may be completely the opposite for large-scale complex data. Therefore, there is no "optimal" architecture or pretext task that works for all data types and downstream tasks, and the criterion for the selection of the architecture and pretext task is data-specific. ### Future Outlooks Despite the great progress of existing methods, challenges still exist due to the complexity of proteins. In this section, we suggest some promising directions for future work. **Direction 1: Broader application scenarios** The biological research topics on proteins are diverse, but most of the existing work has delved into only a small subset of them, due to the fact that these topics have been well formalized by some representative works, such as AlphaFold2 (Jumper _et al._, 2021) for protein structure prediction and TAPE (Rao _et al._, 2019) for protein property prediction. As a result, it is more worthwhile to explore the role of protein representation learning in a wider range of biological application scenarios than to design some overly complex modules for subtle performance gains in a well-formalized application. **Direction 2: Unified evaluation protocols** Research in protein representation learning is now in an era of barbarism. While a great deal of new works are emerging every day, most of them rely on unfair comparisons, e.g., with different datasets, architectures, or metrics. For example, some MSA-based works on structure prediction have been directly compared with single-sequence-based works and claimed to be better. To promote the health of the field, there is an urgent need to establish unified evaluation protocols for the various downstream tasks to provide fair comparisons. **Direction 3: Protein-specific designs** Previous PRL methods directly take mature architectures and pretext tasks from the natural language processing field to train on proteins. For example, modeling protein sequences with an LSTM may have been a major innovation, but replacing the LSTM with a Bi-LSTM for marginal but stable performance improvements makes little sense. Now, it is time to step out of this comfort zone of scientific research, and we should no longer be satisfied with simply extending techniques from other domains to the protein domain. 
PRL is not only a machine learning problem but also a biological problem, so we should consider designing more protein-specific architectures and pretext tasks by incorporating protein-related domain knowledge. In particular, most of the existing work on PRL is based on unimodal protein sequences or structures, and more work exploring sequence-structure co-modeling is required to fully exploit the correspondence between 1D sequences and 3D structures. **Direction 4: Margin from pre-training to fine-tuning** Currently, tremendous efforts are focused on protein pre-training strategies. However, how to fine-tune these pre-trained models for specific downstream tasks is still under-explored. Though numerous strategies have been proposed to address this problem in the fields of computer vision and natural language processing (Zhuang _et al._, 2020), they are difficult to apply directly to proteins. One obstacle to knowledge transfer is the huge variability between different protein datasets, both in terms of sequence length and structural complexity. The second is the poor generalization of pre-trained models, especially for the various tasks where collecting labeled data is laborious. Therefore, it is an important issue to design protein-specific techniques to minimize the margin between pre-training and downstream tasks. **Direction 5: Lack of explainability** While existing protein representation learning methods have achieved promising results on a variety of downstream tasks, we still know little about what the model has learned from the protein data. Which of the feature patterns, sequence fragments, or sequence-structure relationships has been learned? These are important issues for understanding and interpreting model behavior, especially for safety-critical tasks such as drug design, but they are missing in current PRL works. Overall, the interpretability of PRL methods remains to be explored further in many respects, which would help us understand how the models work and provide a guide for better usage. ## 7 Conclusions A comprehensive survey of the literature on protein representation learning is conducted in this paper. We develop a general unified framework for PRL methods. Moreover, we systematically divide existing PRL methods into three main categories: sequence-based, structure-based, and sequence-structure co-modeling, from three different perspectives, including model architectures, pretext tasks, and downstream applications. Finally, we point out the technical limitations of the current research and provide promising directions for future work on PRL. We hope this survey paves the way for follow-up AI researchers with no bioinformatics background, setting the stage for the development of more future works.
2309.16853
T1/T2 relaxation temporal modelling from accelerated acquisitions using a Latent Transformer
Quantitative cardiac magnetic resonance T1 and T2 mapping enable myocardial tissue characterisation but the lengthy scan times restrict their widespread clinical application. We propose a deep learning method that incorporates a time dependency Latent Transformer module to model relationships between parameterised time frames for improved reconstruction from undersampled data. The module, implemented as a multi-resolution sequence-to-sequence transformer, is integrated into an encoder-decoder architecture to leverage the inherent temporal correlations in relaxation processes. The presented results for accelerated T1 and T2 mapping show the model recovers maps with higher fidelity by explicit incorporation of time dynamics. This work demonstrates the importance of temporal modelling for artifact-free reconstruction in quantitative MRI.
Fanwen Wang, Michael Tanzer, Mengyun Qiao, Wenjia Bai, Daniel Rueckert, Guang Yang, Sonia Nielles-Vallespin
2023-09-28T21:04:33Z
http://arxiv.org/abs/2309.16853v1
# T1/T2 relaxation temporal modelling from accelerated acquisitions using a Latent Transformer ###### Abstract Quantitative cardiac magnetic resonance T1 and T2 mapping enable myocardial tissue characterisation but the lengthy scan times restrict their widespread clinical application. We propose a deep learning method that incorporates a time dependency _Latent Transformer_ module to model relationships between parameterised time frames for improved reconstruction from undersampled data. The module, implemented as a multi-resolution sequence-to-sequence transformer, is integrated into an encoder-decoder architecture to leverage the inherent temporal correlations in relaxation processes. The presented results for accelerated T1 and T2 mapping show the model recovers maps with higher fidelity by explicit incorporation of time dynamics. This work demonstrates the importance of temporal modelling for artifact-free reconstruction in quantitative MRI. Keywords: Quantitative MRI · Deep learning · MRI reconstruction ## 1 Introduction Magnetic resonance imaging (MRI) is a crucial non-invasive tool for assessing tissue properties and functions, with applications across medical disciplines. In cardiac imaging, quantitative T1 and T2 mapping provide insights into myocardial composition and structure, enabling characterisation of cardiomyocytes [7, 10, 8]. However, long scan times limit the clinical utility of cardiac T1 and T2 mapping. Although undersampled acquisitions offer a means to accelerate scans, they often lead to artifacts and errors, not only in the reconstructed images but also to a greater extent in the computed T1/T2 maps. Recent deep learning approaches have shown promise for reconstructing high-quality maps from highly accelerated scans. Encoder-decoder models like AUTOMAP [16] leveraged deep convolutional neural networks to learn efficient representations directly from undersampled k-space and mapping targets. Schlemper et al. [11] explored the ability of cascaded CNNs to learn the spatial-temporal correlations from multi-coil undersampled cardiac cine MRI. Qin et al. [9] exploited the spatiotemporal correlations using a recurrent network for dynamic multi-coil cardiac cine data. Lyu et al. [4] divided temporal cine MRI data into several views and used a video-Transformer [14] model to capture spatial and temporal relationships. However, most existing methods disregard dependencies between parameterised time frames. As relaxation processes induce temporal correlations, explicitly modelling time structure is essential for accurate reconstruction. We introduce an innovative deep reconstruction model with a temporal dependency module that effectively captures inter-frame relationships within encoded sequences. The module is seamlessly integrated into an encoder-decoder architecture by modifying the latent vectors in the skip connections of the encoder-decoder model to better exploit the temporal correlations. By incorporating temporal dynamics, the proposed model aims to significantly enhance reconstructions derived from accelerated cardiac T1 and T2 mapping scans, regardless of the number of time points or mapping methodologies employed. This could facilitate accurate analyses of myocardial tissue properties from faster, patient-friendly scans. We present preliminary validation of our technique for accelerated cardiac T1 and T2 mapping against state-of-the-art methods and gold-standard fully-sampled acquisitions. 
## 2 Materials and Methods ### Data Acquisition We used both single and multi-coil T1 and T2 mapping data from the MICCAI 2023 CMRxRecon training dataset [12]. The T1 mapping data was acquired using a Modified Look-Locker Inversion recovery (MOLLI) sequence [6] with nine frames of variable T1 weighting in short-axis view at end-diastole. For each subject, between five and seven slices were collected with a slice thickness of 5.0 mm. The matrix size of each T1-weighted frame was \(144\times 512\) with an in-plane spatial resolution of \(1.4\times 1.4\) mm\({}^{2}\). For the multi-coil data, the coils were compressed to 10 virtual coils. The T2 mapping data was acquired using a T2-prepared (T2prep) FLASH sequence with three T2 weightings and with geometrical parameters identical to the T1 mapping acquisition. ### Data processing Both single and multi-coil T1 and T2 mapping data from the 120 healthy subjects in the training dataset were randomly split into 80% for training, 10% for validation and 10% for testing. We pre-processed the provided k-space data by scaling it to a range where the model could perform optimally. Specifically, we multiplied all k-space data by a fixed value of \(10^{2}\) to bring the magnitudes of the image values approximately into the [0, 1] range. This transformation is reversed before computing the mappings. During training, the data was also augmented using a random undersampling mask for every subject. The random mask was generated by selecting every \(k^{\text{th}}\) line starting from line \(s\), where \(k\) is the acceleration factor and \(s\) is a randomly sampled integer between 0 and \(k\). As in the original acquisition, the random mask preserved the central 24 lines of k-space. This allowed the model to exhibit greater adaptability to minor variations in the acquisition protocol, thereby providing a valuable and realistic data augmentation strategy. ### Model #### 2.3.1 Latent transformer The proposed model, the Latent Transformer (LT), employs an encoder-decoder architecture with shared encoding-decoding blocks across all time-frames (Fig. 1). Embedded within each skip connection between an encoding layer and the corresponding decoding layer is an LT block, which enables modelling of dependencies across frames before passing signals to the decoder layers (Fig. 1, E). Specifically, there is a unique LT block for each layer of the main encoder-decoder network. Each LT block utilises multi-layer and multi-head self-attention (Fig. 1, C and D) to compute the updated latent code as a weighted linear combination of itself and the latent codes of the other time-frames in a pixel-wise manner (Fig. 1, A and B). The LT blocks are key to exploiting temporal correlations within the data and aiding reconstruction performance, as they allow information from each frame to affect the reconstruction of all other frames. #### 2.3.2 Single-coil The proposed model architecture consists of two complementary components, as illustrated in Figure 1: * An encoder-decoder network that serves as the main artifact-removal model. In our experiments, we utilise a U-Net architecture as a strong baseline model. The goal of this encoder-decoder network is to remove large-scale artifacts from the input MRI frames. * The latent transformer model to explicitly capture inter-frame dependencies. Finally, the predictions from the main encoder-decoder model and the LT model are combined to produce the final reconstructed output frames. 
By fusing the outputs this way, the model leverages both general artifact-removal capabilities and inter-frame dependencies for enhanced MRI reconstruction. #### 2.3.3 Multi-coil model Similar to the single-coil model, the multi-coil artifact-removal architecture consists of a main encoder-decoder network and an LT model tailored for multi-coil data. To reduce computational complexity, the LT is applied on the coil-combined complex image rather than each coil individually. Coil sensitivity maps (CSMs) \(C\) are extracted using an iterative approach [3] with smoothing, based on the central 24 lines of undersampled k-space \(x\in\mathbb{C}^{W\times M}\) among all \(W\) coils. The undersampled multi-coil data is multiplied by the conjugate CSMs to maintain complex information as \(\hat{x}\in\mathbb{C}^{W}\) before input to the model. The multi-coil reconstruction can be formulated as: \[\hat{x}_{rec}=\operatorname*{arg\,min}_{x}(\|Ex-y\|_{2}^{2}+\lambda\left\|\hat{x}-S(\hat{x};\theta)\right\|_{2}^{2}) \tag{1}\] \[E=M\cdot F\cdot\hat{C} \tag{2}\] where \(x\) is the multi-coil complex image and \(y\in\mathbb{C}^{W\times M}\) is the acquired multi-coil k-space data. \(E\) represents the operator combining the undersampling mask \(M\), Fourier transform \(F\), and updated sensitivity maps \(\hat{C}\). \(S(\hat{x};\theta)\) denotes the single-coil based deep neural network with parameters \(\theta\). We separate the optimisation into a conjugate gradient SENSE [5] reconstruction and a neural network-based reconstruction, iteratively updating \(\hat{x}\). \[\hat{x}_{rec}=(E^{H}E+\lambda I)^{-1}(E^{H}y+\lambda S(x;\theta)) \tag{3}\] where \(\hat{x}_{rec}\) is calculated with the network parameters \(\theta\) held fixed. Figure 1: Diagram representing the architecture for the single-coil (E) and multi-coil (F) tasks. The figure also shows how the latent transformer is implemented using a pixel-wise self-attention mechanism (A, B) in a multilayer and multi-head fashion (D). The figure shows a case where three time-frames are available, but the method extends seamlessly to any number of time-frames with no modification. An additional CSM update module is integrated to improve the original \(C\) under the supervision of \(\hat{x}_{rec}\) for a better SENSE reconstruction. The CSM \(C\), initialised by the iterative method [3], works as a warm start: \[\hat{C}=N(C;\beta) \tag{4}\] where \(N(C;\beta)\) is the network that updates the CSM, with parameters \(\beta\). It consists of four single-scale convolutional layers with \(3\times 3\) kernels, each followed by a ReLU, and a fifth layer with only a 2D convolution. ### Assessments The results reported in this work were computed by comparing the model outputs with fully-sampled reconstructions on a fixed test set. We reported quantitative metrics including root-mean-square error (RMSE), normalised mean-square error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). To assess the impact on the downstream mapping task, these metrics were also calculated for the estimated T1 and T2 parameter maps. To match the evaluation protocol used in the CMRxRecon challenge, the reconstructed images and parameter maps were cropped to a region of interest before metric computation. The quantitative assessment was specifically focused on the most clinically relevant region by retaining only the central 50% of rows and the central 33% of columns. 
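To make the sampling scheme of Section 2.2 concrete, the following is a minimal NumPy sketch of the random undersampling mask used for data augmentation (our illustration; the function name, default sizes and array layout are our own assumptions, not the authors' released code):

```python
import numpy as np

def random_undersampling_mask(n_lines=144, accel=4, n_center=24, rng=None):
    """Binary mask over phase-encoding lines: every `accel`-th line,
    starting from a random offset s in [0, accel), plus a fully
    sampled block of `n_center` central (calibration) lines."""
    rng = np.random.default_rng() if rng is None else rng
    s = int(rng.integers(0, accel))            # random starting line
    mask = np.zeros(n_lines, dtype=bool)
    mask[s::accel] = True                      # regular undersampling pattern
    c = n_lines // 2
    mask[c - n_center // 2: c + n_center // 2] = True  # keep k-space centre
    return mask

mask = random_undersampling_mask(accel=8)
print(f"{mask.sum()} of {mask.size} lines sampled")
```

Redrawing the offset \(s\) for every subject during training is what turns the otherwise fixed acquisition pattern into a realistic augmentation.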
## 3 Results ### Single-coil reconstruction To evaluate the proposed model, we conducted experiments for both the T1 and T2 mapping tasks using acceleration factors 4, 8, and 10. In this section, we compared two model configurations: * U-Net: A baseline U-Net architecture that serves as a standard encoder-decoder network. * U-Net + Latent Transformer: The proposed model combining the baseline encoder-decoder model with the latent-transformer module to exploit inter-frame dependencies. Table 1 summarises the results across both tasks for the three considered acceleration factors. The results demonstrate the ability of the latent transformer to effectively model the temporal correlations and improve the reconstruction. Figure 2 also qualitatively compares the reconstructions produced using zero-filling, a U-Net model and the proposed U-Net + LT model. ### Multi-coil reconstruction To evaluate the proposed model for multi-coil MRI reconstruction, we compared two model configurations built upon the MoDL [1] framework: * MoDL: The baseline MoDL model using fixed coil sensitivity maps and a standard single-scale network architecture. This serves as the standard MoDL implementation. * MoDL + Proposed Model: An enhanced MoDL pipeline incorporating our proposed single-coil model architecture with latent transformers and learnable coil sensitivity maps. Experiments were conducted on the multi-coil T1 and T2 mapping datasets. Quantitative metrics compare the two MoDL-based approaches to analyse the benefits of the proposed model enhancements, including the latent transformer's ability to exploit inter-frame dependencies. \begin{table} \begin{tabular}{l|c c c c} \hline \hline **Model** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **NMSE \(\downarrow\)** & **RMSE \(\downarrow\)** \\ \hline U-Net T1 4\(\times\) & 30.160 & 0.811 & 0.028 & 4.35E-05 \\ U-Net T1 8\(\times\) & 28.158 & 0.766 & 0.044 & 5.58E-05 \\ U-Net T1 10\(\times\) & 27.661 & 0.762 & 0.048 & 5.91E-05 \\ U-Net T2 4\(\times\) & 28.940 & 0.827 & 0.023 & 2.81E-05 \\ U-Net T2 8\(\times\) & 27.350 & 0.802 & 0.032 & 3.44E-05 \\ U-Net T2 10\(\times\) & 27.246 & 0.808 & 0.032 & 3.50E-05 \\ \hline U-Net T1 4\(\times\) + LT & 30.184 & 0.813 & 0.028 & 4.36E-05 \\ U-Net T1 8\(\times\) + LT & 28.431 & 0.774 & 0.040 & 5.39E-05 \\ U-Net T1 10\(\times\) + LT & 27.933 & 0.769 & 0.045 & 5.71E-05 \\ U-Net T2 4\(\times\) + LT & 29.067 & 0.831 & 0.022 & 2.76E-05 \\ U-Net T2 8\(\times\) + LT & 27.469 & 0.806 & 0.031 & 3.39E-05 \\ U-Net T2 10\(\times\) + LT & 27.210 & 0.807 & 0.033 & 3.51E-05 \\ \hline \hline \end{tabular} \begin{tabular}{l|c c c c} \hline \hline **Model** & **Map PSNR \(\uparrow\)** & **Map SSIM \(\uparrow\)** & **Map NMSE \(\downarrow\)** & **Map RMSE \(\downarrow\)** \\ \hline U-Net T1 4\(\times\) & 19.709 & 0.600 & 0.024 & 134.219 \\ U-Net T1 8\(\times\) & 18.593 & 0.534 & 0.031 & 152.737 \\ U-Net T1 10\(\times\) & 18.548 & 0.524 & 0.031 & 153.266 \\ U-Net T2 4\(\times\) & 11.884 & 0.433 & 0.505 & 64.175 \\ U-Net T2 8\(\times\) & 11.838 & 0.406 & 0.511 & 64.483 \\ U-Net T2 10\(\times\) & 11.860 & 0.410 & 0.508 & 64.350 \\ \hline U-Net T1 4\(\times\) + LT & 19.834 & 0.607 & 0.023 & 132.514 \\ U-Net T1 8\(\times\) + LT & 18.778 & 0.538 & 0.029 & 149.393 \\ U-Net T1 10\(\times\) + LT & 18.592 & 0.526 & 0.030 & 152.493 \\ U-Net T2 4\(\times\) + LT & 11.910 & 0.439 & 0.502 & 63.977 \\ U-Net T2 8\(\times\) + LT & 11.898 & 0.413 & 0.504 & 64.056 \\ U-Net T2 10\(\times\) + LT & 11.903 & 0.414 & 0.503 & 64.017 \\ \hline \hline \end{tabular} \end{table} Table 1: 
Single-coil model results for both the reconstructed MR acquisitions and the computed T1 and T2 maps. The table compares a U-Net-based artifact-removal process with a pipeline that uses a Latent Transformer-enhanced U-Net at its core. The best model for a given mapping task and acceleration factor is underlined. The results in Table 2 summarise the reconstruction performance for the two models across both mapping tasks. We observed consistent improvements from the proposed techniques for integrating learnable coil sensitivity estimation and advanced single-coil modelling into the MoDL framework. ## 4 Discussion The introduction of the Latent Transformer (LT) module demonstrates clear improvements for the vast majority of single-coil results across all analysed metrics, tasks, and acceleration factors, for both images and derived T1 and T2 maps (Table 1). For multi-coil data, the improvements from the LT blocks are more modest. In particular, the LT addition degrades performance on reconstructed images but improves results for the computed T1 and T2 maps (Table 2). The difference in behaviour between the two tasks likely arises from two concurrent causes. First, the network acts as a regularisation term, and the weighting between the CG-SENSE data-consistency term and the network term may even decrease for heavy networks. When the network becomes too complex or contains too many parameters, it may prioritise fitting the training data over maintaining consistency with the acquired data, leading to a smaller weighting and a reduced effect of the LT module. This can result in degraded performance and lower accuracy in image reconstruction. From Table 2, the proposed method outperforms the original MoDL at higher acceleration factors, indicating the potential of the networks to correct for severe undersampling artifacts. Figure 2: Qualitative comparison between zero-filling reconstruction, U-Net-based reconstruction and the proposed U-Net + LT model. The figure shows both the reconstructed images and the T1 mapping associated with the shown slice. Lighter U-Net variants with attention across channels could also be considered. For the CSM update module, we also tried using the CSM generated from the fully-sampled k-space as a hard constraint for supervision, but obtained inferior performance. Unrolled networks [13] or an additional J-SENSE module [2] could be incorporated to achieve better data consistency. Second, the LT module seems to produce some image artifacts but ultimately captures inter-frame dynamics better than simpler models focused solely on de-noising, as evidenced by improved T1 and T2 map estimates. In summary, the proposed LT framework demonstrates clear utility in exploiting temporal correlations, especially for single-coil acquisitions, further validating its use for reconstructing highly accelerated MRI data for T1 and T2 mapping. 
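To complement the description of the LT block in Section 2.3, the following is a minimal, self-contained sketch of the pixel-wise temporal self-attention idea (our simplification: a single attention layer with a residual connection, whereas the actual module is multi-layer, lives in every skip connection of the U-Net, and is trained jointly with it; all class names, shapes and hyperparameters are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PixelwiseTemporalAttention(nn.Module):
    """LT-style block: multi-head self-attention over the time-frame axis,
    applied independently at every spatial location of a latent feature map."""
    def __init__(self, channels: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, frames, channels, H, W) -- latent codes of all time-frames
        b, t, c, h, w = z.shape
        # fold spatial positions into the batch so attention mixes frames only
        seq = z.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.attn(seq, seq, seq)   # each pixel attends across frames
        seq = self.norm(seq + out)          # residual connection
        return seq.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

z = torch.randn(2, 9, 32, 16, 16)           # e.g. nine T1-weighted frames
print(PixelwiseTemporalAttention(32)(z).shape)  # -> (2, 9, 32, 16, 16)
```

Folding the spatial dimensions into the batch keeps the attention strictly temporal, so the cost per pixel scales with the number of frames rather than the image size, and the same block handles three T2 weightings or nine T1 weightings without modification.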
\begin{table} \begin{tabular}{l|c c c c} \hline \hline **Model** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **NMSE \(\downarrow\)** & **RMSE \(\downarrow\)** \\ \hline MoDL T1 4\(\times\) & 34.790 & 0.894 & 0.026 & 2.85E-05 \\ MoDL T1 8\(\times\) & 31.534 & 0.855 & 0.025 & 3.80E-05 \\ MoDL T1 10\(\times\) & 29.904 & 0.840 & 0.030 & 4.57E-05 \\ MoDL T2 4\(\times\) & 33.896 & 0.911 & 0.010 & 1.63E-05 \\ MoDL T2 8\(\times\) & 29.937 & 0.870 & 0.018 & 2.54E-05 \\ MoDL T2 10\(\times\) & 29.205 & 0.867 & 0.021 & 2.79E-05 \\ \hline MoDL T1 4\(\times\) + LT & 34.460 & 0.887 & 0.028 & 2.97E-05 \\ MoDL T1 8\(\times\) + LT & 30.763 & 0.837 & 0.028 & 4.15E-05 \\ MoDL T1 10\(\times\) + LT & 29.762 & 0.839 & 0.032 & 4.67E-05 \\ MoDL T2 4\(\times\) + LT & 32.866 & 0.899 & 0.011 & 1.83E-05 \\ MoDL T2 8\(\times\) + LT & 28.853 & 0.847 & 0.023 & 2.89E-05 \\ MoDL T2 10\(\times\) + LT & 28.005 & 0.844 & 0.028 & 3.22E-05 \\ \hline \hline \end{tabular} \begin{tabular}{l|c c c c} \hline \hline **Model** & **Map PSNR \(\uparrow\)** & **Map SSIM \(\uparrow\)** & **Map NMSE \(\downarrow\)** & **Map RMSE \(\downarrow\)** \\ \hline MoDL T1 4\(\times\) & 22.346 & 0.719 & 0.017 & 105.874 \\ MoDL T1 8\(\times\) & 19.934 & 0.623 & 0.021 & 131.508 \\ MoDL T1 10\(\times\) & 20.004 & 0.611 & 0.022 & 129.568 \\ MoDL T2 4\(\times\) & 12.365 & 0.577 & 0.455 & 60.838 \\ MoDL T2 8\(\times\) & 11.968 & 0.480 & 0.497 & 63.645 \\ MoDL T2 10\(\times\) & 11.946 & 0.473 & 0.499 & 63.805 \\ \hline MoDL T1 4\(\times\) + LT & 22.035 & 0.739 & 0.015 & 110.417 \\ MoDL T1 8\(\times\) + LT & 20.413 & 0.647 & 0.023 & 124.659 \\ MoDL T1 10\(\times\) + LT & 20.012 & 0.613 & 0.022 & 129.470 \\ MoDL T2 4\(\times\) + LT & 12.247 & 0.547 & 0.447 & 61.585 \\ MoDL T2 8\(\times\) + LT & 12.114 & 0.494 & 0.481 & 62.617 \\ MoDL T2 10\(\times\) + LT & 11.977 & 0.483 & 0.496 & 63.532 \\ \hline \hline \end{tabular} \end{table} Table 2: Multi-coil model results for both the reconstructed MR acquisitions and the computed T1 and T2 maps. The table compares a standard MoDL model with our proposed method for artifact removal. The best model for a given mapping task and acceleration factor is underlined. ## 5 Conclusion This work proposes a deep learning approach for reconstructing undersampled MRI that incorporates our novel Latent Transformer module to model inter-frame dependencies. Experiments on accelerated cardiac T1/T2 mapping show improved image quality and parameter mapping compared to baseline models, demonstrating the importance of temporal modelling. While promising, limitations remain, including the basic U-Net architecture used. For future work, we will explore integrating the latent transformers into more advanced models like Restormer [15] and unrolled networks [11] with better CSM estimation to further boost performance. Overall, this study validates explicitly modelling time correlations with transformers to enable accurate reconstructions from highly accelerated quantitative MRI. ## 6 Acknowledgement We want to show our gratitude to Zimu Huo, who provided support and advice on multi-coil reconstruction. This work was supported by the British Heart Foundation (RG/19/1/34160). Guang Yang was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC/NSFC/211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, and the UKRI Future Leaders Fellowship (MR/V023799/1). Fanwen Wang was supported by the UKRI Future Leaders Fellowship (MR/V023799/1). 
Michael Tanzer was supported by the UKRI CDT in AI for Healthcare (EP/S023283/1).
2309.16847
Small ideals in polynomial rings and applications
Let $\mathbf{k}$ be a field which is either finite or algebraically closed and let $R = \mathbf{k}[x_1,\ldots,x_n].$ We prove that any $g_1,\ldots,g_s\in R$ homogeneous of positive degrees $\le d$ are contained in an ideal generated by an $R_t$-sequence of $\le A(d)(s+t)^{B(d)}$ homogeneous polynomials of degree $\le d,$ subject to some restrictions on the characteristic of $\mathbf{k}.$ This yields effective bounds for new cases of Ananyan and Hochster's theorem A in arXiv:1610.09268 on strength and the codimension of the singular locus. It also implies effective bounds when $d$ equals the characteristic of $\mathbf{k}$ for Tao and Ziegler's result in arXiv:1101.1469 on rank and $U^d$ Gowers norms of polynomials over finite fields.
Amichai Lampert
2023-09-28T20:56:32Z
http://arxiv.org/abs/2309.16847v1
# Small ideals in polynomial rings and applications ###### Abstract. Let \(\mathbf{k}\) be a field which is either finite or algebraically closed and let \(R=\mathbf{k}[x_{1},\ldots,x_{n}].\) We prove that any \(g_{1},\ldots,g_{s}\in R\) homogeneous of positive degrees \(\leq d\leq\operatorname{char}(\mathbf{k})\) are contained in an ideal generated by an \(R_{t}\)-sequence of \(\leq A(d)(s+t)^{B(d)}\) homogeneous polynomials of degree \(\leq d.\) When \(\mathbf{k}\) is algebraically closed we can relax the degree restriction to \(\operatorname{char}(\mathbf{k})=0\) or \(d\leq\operatorname{char}(\mathbf{k})+1.\) This yields effective bounds for Ananyan and Hochster's theorem on strength and the codimension of the singular locus [1, Theorem A] in new cases. It also implies effective bounds in the case \(d=\operatorname{char}(\mathbf{k})\) for Tao and Ziegler's result [25] on the rank of polynomials over finite fields and their \(U^{d}\) Gowers norm. The author is supported by the National Science Foundation under Grant No. DMS-1926686. \({}^{1}\)Ananyan and Hochster use the name _strength_. Our goal in this paper is to develop the notion of _relative rank_ for polynomials and to show that it retains many of the good properties of Schmidt rank while allowing for an efficient regularization process. Relative rank is a generalization of Schmidt rank and was introduced by the author and Ziegler in [16], where collections of high relative rank over a finite field \(\mathbf{k}\) were shown to be equidistributed as functions, as long as \(\operatorname{char}(\mathbf{k})\) is larger than all the degrees involved. This has many consequences, described in that paper, and can in fact be used to prove a result similar to theorem 1.2 [16, Theorem 2.6]. We will present a simpler, more direct proof, which also yields a result that is less restrictive on the degrees in positive characteristic. We now state our main technical result. **Theorem 1.2** (Small ideals).: _For every positive integer \(d\) there exist constants \(A(d),B(d)\) such that the following holds. Let \(\mathbf{k}\) be a field and \(g_{1},\ldots,g_{s}\in\mathbf{k}[x_{1},\ldots,x_{n}]\) a collection of forms of positive degrees \(\leq d.\) Suppose either of the following holds:_ * \(\mathbf{k}\) _is algebraically closed of characteristic zero or_ \(d\leq\operatorname{char}(\mathbf{k})+1,\) _or_ * \(\mathbf{k}\) _is a finite field and_ \(d\leq\operatorname{char}(\mathbf{k}).\)__ _Then there exist forms \(G_{1},\ldots,G_{S}\in\mathbf{k}[x_{1},\ldots,x_{n}]\) of degrees \(\leq d\) such that:_ 1. \(g_{1},\ldots,g_{s}\in I(G_{1},\ldots,G_{S}),\)__ 2. \(S\leq A(s+t)^{B}\) _and_ 3. \(G_{1},\ldots,G_{S}\) _are an_ \(R_{t}\)_-sequence._ The hypotheses of the theorem naturally lead to two follow-up questions: **Question 1.3**.: Is the restriction on the degrees in positive characteristic necessary? **Question 1.4**.: Does the same result hold for any perfect field \(\mathbf{k}\)? Theorem 1.2 has two immediate applications. ### Codimension of the singular locus Assume for now that \(\mathbf{k}\) is algebraically closed. Given a homogeneous polynomial \(f\in\mathbf{k}[x_{1},\ldots,x_{n}],\) we set \(S(f)=\{x:\nabla f(x)=0\}\). 
Similarly, for a collection of forms \[S(f_{1},\ldots,f_{s})=\{x:\nabla f_{1}(x),\ldots,\nabla f_{s}(x)\text{ are linearly dependent}\}.\] Let \(c(f_{1},\ldots,f_{s}):=\operatorname{codim}_{\mathbb{A}^{n}}S(f_{1},\ldots,f_{s}).\) A simple argument gives \(\operatorname{rk}(f)\geq c(f)/2.\) Ananyan and Hochster proved [1, Theorem A] that if \(f\) has degree \(d\) then \(\operatorname{rk}(f)\leq F(c(f),d)\) for some function \(F\) and asked for reasonable bounds for \(F.\) In [15] the author, Kazhdan and Polishchuk proved the following effective version: **Theorem 1.5**.: _Let \(\mathbf{k}\) be algebraically closed and assume that \(\operatorname{char}(\mathbf{k})\) does not divide \(d\). Then_ \[\operatorname{rk}(f)\leq(d-1)\cdot c(f).\] _More generally, for a collection \(f_{1},\ldots,f_{s}\) of forms of degree \(d\) we have_ \[\operatorname{rk}(f_{1},\ldots,f_{s})\leq(d-1)\cdot(c(f_{1},\ldots,f_{s})+s-1).\] Using theorem 1.2 we obtain effective bounds in several more cases. **Theorem 1.6** (Rank and singularities).: _Let \(\mathbf{k}\) be an algebraically closed field of positive characteristic \(p\) and \(d\in\mathbb{N}\) such that either \(d=p\) or \(d=4\) and \(p=2.\) Then there exist constants \(A=A(d),B=B(d)\) such that for \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\) homogeneous of degree \(d\) we have_ \[\operatorname{rk}(f)\leq A(c(f))^{B}.\] _More generally, for a collection of forms \(f_{1},\ldots,f_{s}\in\mathbf{k}[x_{1},\ldots,x_{n}]\) of common degree \(d\) we have_ \[\operatorname{rk}(f_{1},\ldots,f_{s})\leq A(c(f_{1},\ldots,f_{s})+s-1)^{B}.\] ### Gowers norms The second application of theorem 1.2 is to Gowers norms of polynomials over finite fields. Let \(\mathbf{k}\) be a finite field and \(f:\mathbf{k}^{n}\to\mathbb{C}\) some function. Given a vector \(h\in\mathbf{k}^{n}\), we define the (multiplicative) discrete derivative \(\triangle_{h}f:\mathbf{k}^{n}\to\mathbb{C}\) by the formula \(\triangle_{h}f(x)=f(x+h)\overline{f(x)}\). **Definition 1.7**.: The _Gowers uniformity norm_ of \(f\) is \[\|f\|_{U^{d}}:=|\mathbb{E}_{x,h_{1},\ldots,h_{d}\in\mathbf{k}^{n}}\triangle_{h_{1}}\ldots\triangle_{h_{d}}f(x)|^{1/2^{d}},\] where \(\mathbb{E}_{a\in A}=\frac{1}{|A|}\sum_{a\in A}\) for a finite set \(A\). Gowers introduced these uniformity norms (for cyclic groups) in [8] and used them in his analytic proof of Szemeredi's theorem on arithmetic progressions. They have since become a highly active subject of study with many applications, the most notable of which is Green and Tao's proof that the primes contain arbitrarily long arithmetic progressions [10]. Let \(\chi:\mathbf{k}\to\mathbb{C}\) be a non-trivial additive character. Given a polynomial \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\), we can compose with \(\chi\) to obtain \(\chi\circ f:\mathbf{k}^{n}\to\mathbb{C}.\) Beginning with the work of Green and Tao [9], and continuing through many improvements [12], [2], [19], [13], [4], [21], it has been shown that if \(f\) has degree \(d<\operatorname{char}(\mathbf{k})\) and \(\chi\circ f\) has large \(U^{d}\) norm, then \(f\) is low rank. 
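To make the connection between the \(U^{d}\) norm and the algebraic structure of \(f\) explicit, it may help to record the following standard computation (our addition, not part of the source; \(F\) denotes the polarization of \(f,\) defined formally in the next section). Writing \(D_{h}f(x):=f(x+h)-f(x)\) for the additive derivative, we have \(\triangle_{h}(\chi\circ f)=\chi\circ D_{h}f,\) and for \(f\) homogeneous of degree exactly \(d\) the iterated derivative \(D_{h_{1}}\ldots D_{h_{d}}f\) is independent of \(x\) and equals \(F(h_{1},\ldots,h_{d}).\) Hence \[\|\chi\circ f\|_{U^{d}}^{2^{d}}=\left|\mathbb{E}_{x,h_{1},\ldots,h_{d}\in\mathbf{k}^{n}}\chi\big(D_{h_{1}}\ldots D_{h_{d}}f(x)\big)\right|=\left|\mathbb{E}_{h_{1},\ldots,h_{d}\in\mathbf{k}^{n}}\chi\big(F(h_{1},\ldots,h_{d})\big)\right|,\] so a large \(U^{d}\) norm for \(\chi\circ f\) says precisely that the multilinear form \(F\) is biased; this bias is the quantity controlled in the results below.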
In a breakthrough quantitative result [19], Milicevic proved: **Theorem 1.8**.: _Let \(\mathbf{k}\) be a finite field, \(\chi:\mathbf{k}\to\mathbb{C}\) a non-trivial additive character and \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\) of degree \(d<\operatorname{char}(\mathbf{k}).\) There exist constants \(A(d),B(d)\) such that if \(\|\chi\circ f\|_{U^{d}}\geq|\mathbf{k}|^{-t}\) then_ \[\operatorname{rk}(f)\leq A(1+t)^{B}.\] In [25], Tao and Ziegler proved that an analogous inequality holds qualitatively when \(d=\operatorname{char}(\mathbf{k})\), but so far no effective bounds have been obtained. By combining theorem 1.2 with the results of [16], we are able to obtain polynomial bounds in these cases. **Theorem 1.9** (Large \(U^{p}\) norm implies bounded rank).: _Let \(\mathbf{k}\) be a finite field of characteristic \(p,\)\(\chi:\mathbf{k}\to\mathbb{C}\) a non-trivial additive character and \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\) a polynomial of degree \(p.\) There exist constants \(A=A(p),B=B(p)\) such that if \(\|\chi\circ f\|_{U^{p}}\geq|\mathbf{k}|^{-t}\) then_ \[\operatorname{rk}(f)\leq A(1+t)^{B}.\] The organization of the paper is as follows. In the next section we will use theorem 1.2 to deduce theorems 1.6 and 1.9. In section 3 we will define relative rank and state our main results about it for algebraically closed fields, and then in section 4 we will use them to prove theorem 1.2 in the algebraically closed case. The remaining sections will be devoted to proving the results stated in section 3, before returning to finite fields in the final section. ## 2. Deduction of theorems 1.6 and 1.9 We start with a useful corollary of theorem 1.2 which we will need for both proofs. This corollary follows by applying a clever argument from [1, Proposition 2.6], which we will use in several places throughout this paper. **Corollary 2.1**.: _Let \(\mathbf{k}\) be a field of positive characteristic \(p\) and \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\) a form of degree \(d.\) Suppose either_ * \(p=2,d=4\) _and_ \(\mathbf{k}\) _is algebraically closed or_ * \(d=p\) _and_ \(\mathbf{k}\) _is algebraically closed or finite._ _There exist \(C(d),D(d)\) such that if \(g_{1},\ldots,g_{s}\) are forms of positive degree \(<d\) and \(\frac{\partial f}{\partial x_{i}}\in I(g_{1},\ldots,g_{s})\) for all \(i\) then \(\operatorname{rk}(f)\leq Cs^{D}.\)_ Note that when the characteristic doesn't divide the degree of \(f\) this follows trivially from Euler's formula since \(f=\frac{1}{d}\sum_{i=1}^{n}x_{i}\frac{\partial f}{\partial x_{i}}\in I(g_{1}, \ldots,g_{s}).\) Proof of corollary.: Applying theorem 1.2, we obtain forms \(G_{1},\ldots,G_{S}\) which are an \(R_{3}\)-sequence such that \(S\leq A(s+3)^{B}\) and \(\frac{\partial f}{\partial x_{i}}\in I(G_{1},\ldots,G_{S})\) for all \(i.\) Set \(J:=I(G_{1},\ldots,G_{S}).\) We now claim that the image of \(f\) in \(R:=\mathbf{k}[x_{1},\ldots,x_{n}]/J\) is reducible, which implies \(\operatorname{rk}(f)\leq A(s+3)^{B}+1.\) To get a contradiction, assume it's irreducible. \(\mathbf{k}[x_{1},\ldots,x_{n}]/J\) is a unique factorization domain by a theorem of Grothendieck [11, Corollaire XI 3.14] (see [3] for a simpler proof). Therefore, the image of \(f\) in \(R\) is prime, and in particular \(R/(f)\) is reduced. Since \(\mathbf{k}\) is perfect, we get that \(\bar{\mathbf{k}}[x_{1},\ldots,x_{n}]/I(G_{1},\ldots,G_{S},f)\) is reduced [24, Tag 030U]. 
Therefore \(X=V(G_{1},\ldots,G_{S},f)\) is an algebraic set with \(\operatorname{codim}_{\mathbb{A}^{n}}X=S+1\) and ideal \(I(X)=I(G_{1},\ldots,G_{S},f).\) But the derivatives of \(f\) vanish identically on \(X\) so the tangent space at any given point is cut out by the derivatives of \(G_{i},\) and therefore has codimension \(\leq S.\) This contradicts the smoothness of the variety \(V(J+(f))\) on a nonempty open subset. Let us now prove theorem 1.6. We begin by introducing some notation for the Taylor expansion around a point. If \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\) and \(x_{0}\in\mathbb{A}^{n}\) then we write \[f(x+x_{0})=\sum_{j\geq 0}f_{x_{0}}^{j}(x),\] where each \(f_{x_{0}}^{j}(x)\) is homogeneous of degree \(j.\) Note that this is really a finite sum. Here is a key lemma from [15], which we will also use later in this paper. **Lemma 2.2**.: _Let \(X\subset\mathbb{A}^{n}\) be an irreducible closed subvariety of codimension \(c\) and \(x_{0}\in X\) a smooth point. Let \(g_{1},\ldots,g_{c}\) be a set of elements in the ideal \(I_{X}\) of \(X\), with linearly independent differentials at \(x_{0}\). Then any homogeneous \(f\in I_{X}\) of degree \(d\) belongs to the ideal in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) generated by \(((g_{i})_{x_{0}}^{j})_{i=1,\ldots,c;1\leq j\leq d}.\)_ Proof of theorem 1.6.: Write \(c:=c(f).\) By the above lemma, we have \(\frac{\partial f}{\partial x_{i}}\in I\) for all \(i,\) where \(I\) is an ideal generated by \(\leq(d-1)c\) forms of positive degree \(<d.\) By corollary 2.1, we get that \(\operatorname{rk}(f)\leq Ac^{B}.\) Now we move on to theorem 1.9. Let \(V=\mathbf{k}^{n}\) and \(F:V^{p}\to\mathbf{k}\) be the multilinear form (polarization) associated to \(f,\) i.e. \(F(h_{1},\ldots,h_{p})=\nabla_{h_{1}}\ldots\nabla_{h_{p}}f(0).\) For multilinear forms (tensors) there is a more nuanced version of rank. **Definition 2.3**.: The _partition rank_ of \(F,\) denoted \(\operatorname{prk}(F),\) is the minimal \(r\) with \[F=\sum_{i=1}^{r}G_{i}(x_{I_{i}})H_{i}(x_{[p]\setminus I_{i}})\] where \(\emptyset\neq I_{i}\subsetneq[p]\) and \(G_{i}:V^{I_{i}}\to\mathbf{k},H_{i}:V^{[p]\setminus I_{i}}\to\mathbf{k}\) are multilinear for all \(i.\) We use the following result of Milicevic [19]. **Theorem 2.4**.: _Let \(\mathbf{k}\) be a finite field, \(\chi:\mathbf{k}\to\mathbb{C}\) a non-trivial additive character and \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\) a homogeneous polynomial of degree \(d.\) There exist constants \(A(d),B(d)\) such that if \(\|\chi\circ f\|_{U^{d}}\geq|\mathbf{k}|^{-t}\) then_ \[\operatorname{prk}(F)\leq A(1+t)^{B}.\] Proof of theorem 1.9.: By the above theorem, \(\operatorname{prk}(F)\leq A(1+t)^{B}.\) Let \(F=\sum_{i=1}^{r}Q_{i}(x_{I_{i}})R_{i}(x_{[p]\setminus I_{i}})\) with \(p\not\in I_{i}\) for all \(i\) and \(r\leq A(1+t)^{B}.\) Then for all \(j\) we have \(F(x_{[p-1]},e_{j})\in I(Q_{1},\ldots,Q_{r}).\) Plugging in the diagonal \(x_{[p-1]}=(x,\ldots,x)\) we have the following for all \(j\) \[(p-1)!\partial_{j}f(x)=F(x,\ldots,x,e_{j})\in I(Q_{1}(x,\ldots,x),\ldots,Q_{r}(x,\ldots,x)).\] By corollary 2.1, we get \(\operatorname{rk}(f)\leq A(1+t)^{B}.\) ## 3. Relative rank and singularities We will define relative rank in the context of graded rings over a base field. The ones we will encounter are \(\mathbf{k}[x_{1},\ldots,x_{n}]\) and its quotients by homogeneous ideals. Let \(S=\bigoplus_{d\geq 0}S_{d}\) be a graded ring with \(S_{0}=\mathbf{k}\) a field. 
For \(f\in S\) homogeneous of positive degree, the definition of Schmidt rank naturally generalizes to \[\operatorname{rk}(f)=\inf\{r:f=\sum_{i=1}^{r}g_{i}\cdot h_{i}\}\] where \(g_{i},h_{i}\) are homogeneous of positive degree. **Definition 3.1** (Relative rank).: Given a homogeneous ideal \(I\subset S\) and a homogeneous element \(f\in S\) of degree \(d,\) let \(\bar{f}\in S/I\) be its image in the quotient ring. The _relative rank_ of \(f\) on \(I\) is \[\operatorname{rk}_{I}(f):=\operatorname{rk}(\bar{f})=\inf\{\operatorname{rk}(f+g):g\in I_{d}\}.\] For a collection of homogeneous elements \(f_{1},\ldots,f_{s}\in S_{d},\) we define \[\operatorname{rk}_{I}(f_{1},\ldots,f_{s}):=\inf\{\operatorname{rk}_{I}(\sum_{i=1}^{s}a_{i}f_{i}):a_{1},\ldots,a_{s}\in\mathbf{k}\text{ not all zero}\}.\] Note that we always have \(\operatorname{rk}_{I}(f)\leq\operatorname{rk}(f).\) If \(I=(0)\) or is generated by homogeneous elements of degree \(>d,\) then \(\operatorname{rk}(f)=\operatorname{rk}_{I}(f).\) We will be interested in collections of polynomials built up of layers with large relative rank. To describe this precisely, we need some definitions. **Definition 3.2**.: 1. A _tower_ \(\mathcal{F}=(\mathcal{F}_{i})_{i\in[h]}\) of degree \(\mathbf{d}=(d_{1},\ldots,d_{h})\) is composed of \(h\) collections of homogeneous elements \(\mathcal{F}_{i}=(f_{i,j})_{j\in[m_{i}]}\) which we call _layers_ such that the elements in each layer have common degree \(d_{i}.\) The _dimension_ of the tower is \(|\mathcal{F}|=m_{1}+\ldots+m_{h}.\) We denote the truncated tower \(\mathcal{F}_{<i}=(\mathcal{F}_{j})_{j\in[i-1]}.\) 2. The tower \(\mathcal{F}\) is \((A,B,t)\)-_regular_ if for every \(i\in[h]\) we have \[\operatorname{rk}_{I(\mathcal{F}_{<i})}(\mathcal{F}_{i})>A(m_{i}+m_{i+1}+\ldots+m_{h}+t)^{B}.\] Here is our main result connecting relative rank and the singular locus. **Theorem 3.3** (Relative rank and singularities).: _Let \(\mathbf{k}\) be an algebraically closed field and \(\mathbf{d}=(d_{1},\ldots,d_{h}),e\) a degree sequence such that either \(\operatorname{char}(\mathbf{k})=0\) or \(\operatorname{char}(\mathbf{k})=p\) and \(\mathbf{d}\leq p,e\leq p+2.\) There exist \(A(\mathbf{d},e),B(\mathbf{d},e)\) such that whenever \(\mathcal{F}\) is an \((A,B,t)\)-regular tower in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) of degree \(\mathbf{d}\) and \(f_{1},\ldots,f_{s}\) are forms of degree \(e\) with \(\operatorname{codim}_{X}(S(f_{1},\ldots,f_{s},\mathcal{F})\cap X)\leq t,\) where \(X=X(\mathcal{F})\) denotes the common zero locus of \(\mathcal{F},\) then we have_ \[\operatorname{rk}_{I(\mathcal{F})}(f_{1},\ldots,f_{s})\leq A(t+s)^{B}.\] In the next section we explain how theorem 3.3 allows us to deduce theorem 1.2. Theorem 3.3 implies that sufficiently regular towers satisfy Serre's condition \(R_{t}.\) **Corollary 3.4**.: _Let \(\mathcal{F}\) be a tower in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) of degrees \(\mathbf{d}=(d_{1},\ldots,d_{h}),\) where either \(\operatorname{char}(\mathbf{k})=0\) or \(\operatorname{char}(\mathbf{k})=p\) and \(d_{i}\leq p,d_{h}\leq p+2.\) Suppose \(\mathcal{F}\) is \((A,B,t)\)-regular, where \(A(d_{1},\ldots,d_{h}),B(d_{1},\ldots,d_{h})\) are the constants from the above theorem. Then \(\mathcal{F}\) is an \(R_{t}\)-sequence._ Proof.: Since any subtower of \(\mathcal{F}\) is \((A,B,t)\)-regular, it's enough that for all \(i\in[h]\) we have \[\operatorname{codim}_{X(\mathcal{F}_{<i})}S(\mathcal{F}_{\leq i})\cap X(\mathcal{F}_{<i})>t,\] and this is exactly the content of theorem 3.3. 
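Before turning to the proof strategy, here is a toy illustration of Definition 3.1 (our example, not taken from the source). Over an algebraically closed field with \(\operatorname{char}(\mathbf{k})\neq 2,\) a quadratic form of matrix rank \(r\) can be written as a sum of \(\lceil r/2\rceil\) products of linear forms and no fewer, so its Schmidt rank is exactly \(\lceil r/2\rceil.\) Hence for \[f=\sum_{i=1}^{n}x_{i}^{2},\qquad I=(x_{1},\ldots,x_{k}),\] we get \(\operatorname{rk}(f)=\lceil n/2\rceil,\) while in the quotient \(S/I\cong\mathbf{k}[x_{k+1},\ldots,x_{n}]\) the image of \(f\) is \(\sum_{i=k+1}^{n}\bar{x}_{i}^{2},\) so \(\operatorname{rk}_{I}(f)=\lceil(n-k)/2\rceil.\) Relative rank thus measures the rank that remains after discarding everything the ideal already accounts for.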
To prove theorem 3.3 inductively we will need to work with a slightly more general version for forms that are bi-homogeneous. Given \(f(x,y)\) bi-homogeneous we write \(S_{y}(f)=\{(x,y):\frac{\partial f}{\partial y_{i}}(x,y)=0\ \forall i\}.\) For a collection of bi-homogeneous functions \(f_{1},\ldots,f_{s}\) set \[S_{y}(f_{1},\ldots,f_{s})=\{(x,y):\operatorname{rk}\left(\frac{\partial f_{i}}{\partial y_{j}}\right)(x,y)<s\}.\] **Theorem 3.5**.: _Let \(\mathbf{k}\) be an algebraically closed field. Let \(\mathbf{d}=(d_{1},\ldots,d_{h}),e\) such that either \(\operatorname{char}(\mathbf{k})=0\) or \(\operatorname{char}(\mathbf{k})=p\) and \(\mathbf{d},e\leq p.\) There exist \(A(\mathbf{d},e),B(\mathbf{d},e)\) such that the following holds. Let \(\mathcal{F}\) be a bi-homogeneous \((A,B,t)\)-regular tower in \(\mathbf{k}[x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}]\) of degree \(\mathbf{d}\) with \(\mathcal{F}^{\prime}\subset\mathcal{F}\) the functions of positive degree in \(y\) and let \(f_{1},\ldots,f_{s}\) be bi-homogeneous functions of degree \(e\) and common bi-degree which is positive in \(y.\) Suppose \(\operatorname{codim}_{X(\mathcal{F})}X(\mathcal{F})\cap S_{y}(f_{1},\ldots,f_{s},\mathcal{F}^{\prime})\leq t.\) Then_ \[\operatorname{rk}_{I(\mathcal{F})}(f_{1},\ldots,f_{s})\leq A(t+s)^{B}.\] The reason we will need this generalization is for proposition 6.1. Note that for \(e\leq p,\) theorem 3.3 is simply the case when there are no \(x\) variables. ## 4. Regularization and proof of theorem 1.2 We now assume theorem 3.3 holds and use it to deduce theorem 1.2 for algebraically closed fields. This will follow from an efficient relative regularization process, which is the content of the next lemma. This process will also play a key role later in our proof of theorem 3.3. **Lemma 4.1** (Relative regularization).: _Let \(S=\bigoplus_{j\geq 0}S_{j}\) be a graded ring with \(S_{0}=\mathbf{k}\) a field. For any \(A,B\) there exist constants \(C(A,B,d),D(A,B,d)\) such that the following holds: If \((g_{1},\ldots,g_{s})\) is a collection of homogeneous elements of positive degree \(\leq d\) then there exists a tower \(\mathcal{G}\) such that:_ 1. \(I(g_{1},\ldots,g_{s})\subset I(\mathcal{G}).\)__ 2. \(\mathcal{G}\) _is of degree_ \((1,\ldots,d).\)__ 3. \(\mathcal{G}\) _is_ \((A,B,t)\)_-regular._ 4. \(|\mathcal{G}|\leq C(s+t)^{D}.\)__ Proof.: We begin by separating the \(g_{i}\) into \(d\) layers \(\mathcal{G}_{i}=(g_{i,j})_{j\in[m_{i}]}\) such that the \(i\)-th layer has degree \(i.\) To each layer \(i\in[d]\) we assign a number \(n_{i}\) by a formula we'll soon describe. Now we regularize: If for each \(i_{0}\in[d]\) we have \(\operatorname{rk}_{I(\mathcal{G}_{<i_{0}})}(\mathcal{G}_{i_{0}})>A(n_{i_{0}}+t)^{B}\) then we're done. Else, there exists some \(i_{0}\) and \(a_{1},\ldots,a_{m_{i_{0}}}\in\mathbf{k}\) not all zero with \[\sum_{j=1}^{m_{i_{0}}}a_{j}g_{i_{0},j}=\sum_{k=1}^{r}p_{k}q_{k}\mod I(\mathcal{G}_{<i_{0}})\] with \(r\leq A(n_{i_{0}}+t)^{B}.\) Assume without loss of generality that \(a_{1}\neq 0\) and replace \(\mathcal{G}\) by the tower \(\mathcal{G}^{\prime}\) obtained by deleting \(g_{i_{0},1}\) and by adding each \(p_{k}\) to the layer of degree \(\deg(p_{k})<i_{0}.\) Each such step deletes one form and adds only forms of strictly smaller degree, and since \(\mathcal{G}_{1}\) is either linearly dependent or of infinite rank, this process must eventually terminate, yielding \(\mathcal{G}\) satisfying requirements 1 and 2. 
Now we give the formula for \(n_{i}\) in terms of \(m_{i},\) the original number of forms in each degree: \[n_{d}=m_{d},\qquad n_{i}=m_{i}+n_{i+1}A(n_{i+1}+t)^{B}.\] Note that \(n_{i}\) bounds the total number of forms appearing in the layers \(i,i+1,\ldots,d\) throughout the regularization process. This can be seen by downward induction, where the base case \(i=d\) is clear. Assuming it holds for \(i+1,\) we get that each of the \(n_{i+1}\) forms appearing in degree \(i+1\) and above can contribute at most \(A(n_{i+1}+t)^{B}\) forms of degree \(i,\) proving the claim for degree \(i.\) This proves that requirement 3 is satisfied. It's easy to see that all of the \(n_{i}\) are polynomial in \(s,t\) and so \(\dim(\mathcal{G})\leq n_{1}\) satisfies the fourth requirement. Proof of theorem 1.2 for algebraically closed fields.: Let \(A(1,\ldots,d),B(1,\ldots,d)\) be the constants of theorem 3.3. By the lemma, there exist \(C(d),D(d)\) and \(\mathcal{G}\) of size \(|\mathcal{G}|\leq C(s+t)^{D}\) which is \((A,B,t)\)-regular with \(g_{1},\ldots,g_{s}\in I(\mathcal{G}).\) By corollary 3.4, \(\mathcal{G}\) is an \(R_{t}\)-sequence as desired. In the next several sections, we will prove theorem 3.5. ## 5. Low rank presentation on a Taylor tower From here until further notice assume \(\mathbf{k}\) is algebraically closed. We now assume theorem 3.5 (and therefore also theorem 3.3) holds for lower degrees and work towards proving it for the degree sequence \((d_{1},\ldots,d_{h},e).\) Since \[S_{y}(f_{1},\ldots,f_{s},\mathcal{F}^{\prime})=\bigcup_{a\in\mathbb{P}^{s-1}}S_{y}(\sum_{i}a_{i}f_{i},\mathcal{F}^{\prime})\] it's enough to prove it for the case of a single polynomial \(f.\) For a polynomial \(g(x,y)\) we will use the notation \(\partial_{z}g(x,y)=\nabla_{(0,z)}g(x,y),\) thinking of this as a polynomial in \((x,y,z).\) For a bi-homogeneous tower \(\mathcal{G}(x,y)\) we write \(\partial_{z}\mathcal{G}\) for a new tower in the variables \((x,y,z)\) with the same degrees in which \(g\in\mathcal{G}\) is replaced by \(\partial_{z}g.\) Recall that given a bi-homogeneous tower \(\mathcal{F}(x,y)\), \(\mathcal{F}^{\prime}\subset\mathcal{F}\) is the subtower of functions of positive degree in \(y\). We write \(T_{z}\mathcal{F}\) for the tower \(T_{z}\mathcal{F}(x,y,z)=\mathcal{F}\cup\partial_{z}\mathcal{F}^{\prime}\). We begin with a useful relative analogue of corollary 2.1. **Lemma 5.1**.: _There exist constants \(A,B\) depending on the degrees such that if \(f,\mathcal{F}\) are bi-homogeneous, \(\mathcal{F}\) is \((A,B,t)\)-regular and \(\mathrm{rk}_{I(T_{z}\mathcal{F})}(\partial_{z}f)\leq t,\) then we have \(\mathrm{rk}_{I(\mathcal{F})}(f)\leq At^{B}\)._ Proof.: If \(\mathrm{char}(\mathbf{k})\) doesn't divide the \(y\)-degree of \(f\) then we can simply plug in \(z=y\) and get \(\mathrm{rk}_{I(\mathcal{F})}(f)\leq t\). The remaining case is when \(\mathrm{char}(\mathbf{k})=p\) and \(f=f(y)\) is of degree \(p\). By considering the double grading with respect to the variables \(x,z\), our assumption implies that \(\partial_{z}f\in I(T_{z}\mathcal{F},g_{1}(y),\ldots,g_{t}(y))\), and we may also assume without loss of generality that \(\mathcal{F}=\mathcal{F}(y)\). Let \(\bar{g}_{1},\ldots,\bar{g}_{t}\) be the images of the \(g_{i}\) in the ring \(S=\mathbf{k}[y]/I(\mathcal{F})\). We apply lemma 4.1 to the \(\bar{g}_{i}\) and obtain a tower \(\bar{\mathcal{G}}\) in \(S\) which is \((A,B,3)\)-regular. 
Let \(\mathcal{G}\) be any tower on \(\mathbf{k}[y]\) of the same degrees as \(\bar{\mathcal{G}}\) with \(\mathcal{G}=\bar{\mathcal{G}}\mod I(\mathcal{F})\). Note that if \(\mathcal{F}\) is \((A,B,3+|\mathcal{G}|)\)-regular, then by definition the tower given by stacking \(\mathcal{G}\) on top, \((\mathcal{F},\mathcal{G})\), is \((A,B,3)\)-regular. This condition is satisfied so long as \(\mathcal{F}\) is \((C,D,t)\)-regular for sufficiently large \(C,D\). The tower \((\mathcal{F},\mathcal{G})\) is an \(R_{3}\)-sequence by corollary 3.4 and so \(R=\mathbf{k}[y]/I(\mathcal{F},\mathcal{G})\) is a unique factorization domain by [3]. Suppose, to get a contradiction, that \(f\) is irreducible in \(R\) and hence prime. But by construction \(\nabla_{z}f(y)\) vanishes for any \((y,z)\) satisfying \(y\in X(\mathcal{G})\) and \(\nabla_{z}\mathcal{F}(y)=0,\) which contradicts the dimension of the tangent space to \(X(f,\mathcal{F},\mathcal{G})\) being one smaller than that of \(X(\mathcal{F},\mathcal{G})\). Given a vector space \(V\) over \(\mathbf{k}\), a point \(v_{0}\in V\) and a polynomial \(g\in\mathbf{k}[V]\) recall the Taylor expansion \[g(v_{0}+v)=\sum_{i\geq 0}g_{v_{0}}^{i}(v)\] with \(g_{v_{0}}^{i}(v)\) homogeneous of degree \(i\). For a tower \(\mathcal{G}\) we write \(\mathcal{G}_{v_{0}}^{i}\) for the collection of polynomials \(g_{v_{0}}^{i}\) for \(g\in\mathcal{G}\). We will later be more precise and organize the collection \((\mathcal{G}_{v_{0}}^{i})_{i\geq 0}\) into a tower. The following proposition is the first step towards the proof of theorem 3.5 for the degree sequence \((d_{1},\ldots,d_{h},e)\). **Proposition 5.2**.: _Suppose \(\mathcal{F},f\) are as in theorem 3.5 and set \(r=(e-1)t\). Then there exists a subvariety \(Y\subset X(T_{z}\mathcal{F})\) of codimension \(t\) such that for generic points \(w_{0}\in Y\) we have homogeneous polynomials \(h_{1}(x,y),\ldots,h_{r}(x,y)\) of positive degrees \(<e\) with_ \[\partial_{z}f\in I((T_{z}\mathcal{F})_{w_{0}}^{j})_{j\geq 0}+I(h_{1},\ldots,h_{r}).\] **Setup for proof of proposition 5.2**.: Let \(\pi:X(T_{z}\mathcal{F})\to X(\mathcal{F})\) be the projection \(\pi(x,y,z)=(x,y)\). Note that \(\pi\) is surjective since \(\pi(x,y,0)=(x,y)\). Let \(Z\subset X(\mathcal{F})\cap S_{y}(f,\mathcal{F}^{\prime})\) be an irreducible component with \(\mathrm{codim}_{X(\mathcal{F})}Z=t\) and choose an irreducible component \(Y\subset\pi^{-1}(Z)\) with \(\overline{\pi(Y)}=Z\). We will show that the proposition holds for any such \(Y\). Before proving this, we need a couple of preliminary lemmas. **Lemma 5.3**.: _There exist constants \(C,D\) such that if \(\mathcal{F}\) is \((C,D,t)\)-regular then \(T_{z}\mathcal{F}\) is \((A,B,t)\)-regular._ Proof.: Let \(r_{i}=A(2m_{i}+\ldots+2m_{h}+t)^{B}\) be the desired relative rank of the \(i\)-th layer. 
Suppose that for some \(i\in[h]\) we have a linear combination of elements of \((T_{z}\mathcal{F})_{i}\) which has relative rank \(\leq r_{i}\) on \((T_{z}\mathcal{F})_{<i},\) \[\sum_{j}a_{j}f_{i,j}(x,y)+\sum_{j}b_{j}\partial_{z}f_{i,j}(x,y).\] By plugging in \(z=0\) we get \[\operatorname{rk}_{I(\mathcal{F}_{<i})}\left(\sum_{j}a_{j}f_{i,j}(x,y)\right)\leq r_{i},\] which implies \(a_{j}=0\) for all \(j.\) Now we're left with \[\operatorname{rk}_{I((T_{z}\mathcal{F})_{<i})}\left(\partial_{z}\left(\sum_{j}b_{j}f_{i,j}\right)\right)\leq r_{i},\] which by lemma 5.1 implies \(\operatorname{rk}_{I(\mathcal{F}_{<i})}(\sum_{j}b_{j}f_{i,j})\leq C(m_{i}+\ldots+m_{h}+t)^{D},\) for sufficiently large \(C,D.\) Therefore \(b_{j}=0\) for all \(j\) and we're done. **Lemma 5.4**.: _There exist \(g_{1}(x,y),\ldots,g_{t}(x,y)\in I(Z)\) such that for generic points in \(Y\) the differentials of \(T_{z}\mathcal{F},g_{1},\ldots,g_{t}\) are linearly independent._ Proof.: By theorem 3.3 the differentials of \(\mathcal{F}\) are linearly independent at generic points of \(Z.\) If we choose some smooth point \(q\) in this set, and choose \(g_{1},\ldots,g_{t}\in I(Z)\) such that the differentials of \(\mathcal{F},g_{1},\ldots,g_{t}\) at \(q\) are linearly independent, then they are linearly independent for generic points in \(Z.\) Thinking of these now as functions in \(I(Y)\subset\mathbf{k}[x,y,z],\) all their partial derivatives with respect to the \(z\) variables vanish. Now for \(f\in\mathcal{F}^{\prime},\) we have \[\nabla(\partial_{z}f(x,y))=(*,\ldots,*,\frac{\partial f}{\partial y_{1}}(x,y),\ldots,\frac{\partial f}{\partial y_{n}}(x,y)).\] By theorem 3.5, these differentials are linearly independent in the \(z\) coordinates for generic \((x,y)\in Z,\) so the entire collection \(T_{z}\mathcal{F},g_{1},\ldots,g_{t}\) has linearly independent differentials at points in \(Y\cap\pi^{-1}(U)\) where \(U\subset Z\) is a dense open subset. Since \(\pi(Y)\) is dense in \(Z,\) this is a dense open subset of \(Y.\) Proof of proposition 5.2.: We chose \(Y\) to be an irreducible component of the intersection \(X(\partial_{z}\mathcal{F}^{\prime})\cap\{(x,y,z):(x,y)\in Z\}.\) By the principal ideal theorem these varieties have codimension \(\leq|\partial_{z}\mathcal{F}^{\prime}|\) and \(\leq|\mathcal{F}|+t,\) so the codimension of \(Y\) in affine space is at most \(|T_{z}\mathcal{F}|+t.\) Lemma 5.4 shows that the opposite inequality holds as well so the codimension is exactly \(|T_{z}\mathcal{F}|+t.\) By lemma 5.3 and corollary 3.4, \(T_{z}\mathcal{F}\) is a prime sequence so \(\operatorname{codim}_{X(T_{z}\mathcal{F})}Y=t.\) Now note that \(\partial_{z}f\in I(Y).\) This is because theorem 3.5 implies that \(U:=Z\setminus S_{y}(\mathcal{F}^{\prime})\) is a nonempty open subset and so \(Y\cap\pi^{-1}(U)\) is a dense open subset of \(Y.\) The function \(\partial_{z}f\) vanishes on this dense open subset, hence it vanishes identically on \(Y.\) Combining lemma 5.4 with lemma 2.2 we get that for generic points \(w_{0}\in Y\) we have \[\partial_{z}f(x,y)\in I((T_{z}\mathcal{F})_{w_{0}}^{j})_{j\in[e]}+I((g_{i})_{w_{0}}^{j})_{i\in[t],j\in[e-1]}.\] Why does \(j\) run up to \(e-1\) and not \(e\)? This is because the \(g_{i}\) are functions of \((x,y)\) only and \(\partial_{z}f(x,y)\) has degree \(e-1\) in \((x,y).\) Therefore any contribution from \((g_{i})_{w_{0}}^{e}\) must cancel out. This completes the proof of the proposition. 
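The proposition expresses \(\partial_{z}f\) in terms of the Taylor pieces of \(T_{z}\mathcal{F}\) at a basepoint. As a concrete illustration of such pieces (our example, in the notation introduced in the next section): for \(g(v)=v_{1}v_{2}v_{3}\) and a basepoint \(w,\) \[g(w+v)=\underbrace{w_{1}w_{2}w_{3}}_{g^{0}(w,v)}+\underbrace{w_{1}w_{2}v_{3}+w_{1}v_{2}w_{3}+v_{1}w_{2}w_{3}}_{g^{1}(w,v)}+\underbrace{w_{1}v_{2}v_{3}+v_{1}w_{2}v_{3}+v_{1}v_{2}w_{3}}_{g^{2}(w,v)}+\underbrace{v_{1}v_{2}v_{3}}_{g^{3}(w,v)},\] so \(g^{i}(w,v)\) is bi-homogeneous of bi-degree \((3-i,i)\) and \(g^{3}(w,v)=g(v).\) The towers \(D\mathcal{G}\) and \(D_{l}\mathcal{G}\) defined below simply collect these pieces, at one or several basepoints.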
The remaining obstacle is to remove the elements of the Taylor expansion of \(T_{z}\mathcal{F}\). To do this, we will use the fact that the above holds for _generic_ points \(w_{0}\in Y.\) It turns out that by combining the low rank representations obtained for different basepoints we can get rid of the unwanted terms. ## 6. The Taylor tower is generically regular The goal of this (somewhat technical) section is to prove that when we start with a sufficiently regular tower \(\mathcal{G}\subset\mathbf{k}[V]\) (in our case \(T_{z}\mathcal{F}\)) and choose a generic tuple of points in \(X(\mathcal{G}),\) then the tower \(\mathcal{G}\) along with the elements of its Taylor expansion at these points is regular. Before we state the proposition making this precise, we will need some notation and a convention for how to organize the tower of polynomials in the Taylor expansion. Let \(g\in\mathbf{k}[V]\) be homogeneous of degree \(d.\) The polynomial \(g(w+v)\in\mathbf{k}[V\oplus V]\) has Taylor expansion \[g(w+v)=g^{0}(w,v)+g^{1}(w,v)+\ldots+g^{d}(w,v)\] where \(g^{i}\) each has total degree \(d\) and is bi-homogeneous of degree \((d-i,i).\) Note that \(g^{d}(w,v)=g(v).\) Given a tower \(\mathcal{G}=(g_{i,j})_{i\in[h],j\in[m_{i}]}\subset\mathbf{k}[V]\) of homogeneous forms we define \(D\mathcal{G}\subset\mathbf{k}[V\oplus V]\) to be the bi-homogeneous tower given by \((D\mathcal{G})_{i}=(g_{i,j}^{k}(w,v))_{j\in[m_{i}],0\leq k\leq d_{i}},\) where \(d_{i}=\deg(\mathcal{G}_{i}).\) For a positive integer \(l,\) we set \(D_{l}\mathcal{G}\subset\mathbf{k}[V^{l}\oplus V]\) to be the bi-homogeneous tower with layers \[(D_{l}\mathcal{G})_{i}=(g_{i,j}(v))_{j\in[m_{i}]}\cup(g_{i,j}^{k}(w_{\lambda},v))_{j\in[m_{i}],0\leq k\leq d_{i}-1,\lambda\in[l]}.\] For forms \(\mathcal{G}=(g_{1},\ldots,g_{s})\) in \(\mathbf{k}[V]\) of degree \(d\) and fixed \(w\in V^{l}\) we obtain a tower \(\mathcal{G}(w)\) on \(\mathbf{k}[V]\) of degree \((1,2,\ldots,d)\) with top layer \((g_{1}(v),\ldots,g_{s}(v))\) and \(i\)-th layer \((g_{j}^{i}(w_{\lambda},v))_{j\in[s],\lambda\in[l]}.\) Doing this for each of the layers of \(\mathcal{G}\) we obtain a tower \(D_{l}\mathcal{G}(w)\) on \(\mathbf{k}[V]\) of degree \((1,2,\ldots,d_{1},1,2,\ldots,d_{2},\ldots,1,\ldots,d_{h})\) with \(d_{1}+\ldots+d_{r}\)-th layer \(\mathcal{G}_{r}\) and \(d_{1}+\ldots+d_{r-1}+i\)-th layer \((\mathcal{G}_{r}(w))_{i}.\) Note that the polynomials in this tower are the collection \((\mathcal{G}_{w_{\lambda}}^{k})_{k\geq 0,\lambda\in[l]}.\) We now have sufficient notation to state the central proposition of this section, which is a version for relative rank of [15, Theorem 1.6]. **Proposition 6.1**.: _Let \(\textbf{d}=(d_{1},\ldots,d_{h})\) be a sequence of degrees such that either \(\mathbf{k}\) has characteristic \(0\) or \(d_{i}\leq\operatorname{char}(\mathbf{k})\) for all \(i.\) Given \(A,B\) there exist constants \(C(\textbf{d},A,B,l),D(\textbf{d},A,B,l)\) such that if \(\mathcal{G}\) is a \((C,D,t)\)-regular tower of degree **d** then for generic \(w\in V(\mathcal{G})^{l}\) outside an algebraic set of codimension \(t,\) the tower \(D_{l}\mathcal{G}(w)\) is \((A,B,t)\)-regular._ **Note:** The restriction on the degree in positive characteristic cannot be relaxed and this is the main obstacle to obtaining Theorem 1.2 for higher degrees. We include a brief discussion at the end of this section. To prepare for the proof, we start with a few lemmas. 
If \(g\in\mathbf{k}[V]\) is homogeneous of degree \(d,\) then associated to it is a multi-linear form \(\tilde{g}:V^{d}\rightarrow\mathbf{k}\) given by \(\tilde{g}(v_{1},\ldots,v_{d})=\nabla_{v_{1}}\ldots\nabla_{v_{d}}g(0).\) Note that \(\tilde{g}(x,\ldots,x)=d!g(x).\) If \(\mathcal{G}\) is a tower with highest degree \(d,\) we get a tower of forms \(\tilde{\mathcal{G}}\) defined on \(V^{d}.\) For each function \(g\) in \(\mathcal{G}_{i}\) of degree \(d_{i}\) and subset \(E\in\binom{[d]}{d_{i}},\) \(\tilde{\mathcal{G}}_{i}\) contains a function \(g_{E}(v_{1},\ldots,v_{d})=\tilde{g}((v_{i})_{i\in E}).\) **Lemma 6.2**.: _If the degree of \(\mathcal{G}\) and characteristic of \(\mathbf{k}\) are as in proposition 6.1 and \(A,B\) are constants, then there exist constants \(C(\textbf{d},A,B),D(\textbf{d},A,B)\) such that if \(\mathcal{G}\) is \((C,D,t)\)-regular, then \(\tilde{\mathcal{G}}\) is \((A,B,t)\)-regular._ Proof.: Let \(d\) be the maximal degree in \(\mathcal{G}.\) By induction on the degree sequence, we may assume that \(\tilde{\mathcal{G}}_{<h}\) is \((A,B,t+2^{d}m_{h})\)-regular. Write \(r=A(t+2^{d}m_{h})^{B}\) and assume, to get a contradiction, that \(\operatorname{rk}_{I(\tilde{\mathcal{G}}_{<h})}(\tilde{\mathcal{G}}_{h})\leq r.\) Then there exists a non-trivial linear combination \[g=\sum_{E\in\binom{[d]}{d_{h}}}\sum_{j\in[m_{h}]}a_{j,E}(g_{h,j})_{E} \tag{1}\] with \(\operatorname{rk}_{I(\tilde{\mathcal{G}}_{<h})}(g)\leq r.\) First we handle the case where \(\operatorname{char}(\mathbf{k})=0\) or \(d_{h}<\operatorname{char}(\mathbf{k}).\) Choose \(E_{0}\in\binom{[d]}{d_{h}}\) such that the \(a_{j,E_{0}}\) are not all zero. Plugging into equation (1) \(v_{k}=x\) for \(k\in E_{0}\) and \(v_{k}=0\) for \(k\not\in E_{0},\) we obtain \(\operatorname{rk}_{I(\mathcal{G}_{<h})}(\mathcal{G}_{h})\leq r.\) This is impossible when \(C,D\) are sufficiently large. The remaining case is when \(\operatorname{char}(\mathbf{k})=d_{h}=p.\) In this case we plug \(v_{1}=\ldots=v_{p-1}=x\) and \(v_{p}=z\) into equation (1). This yields \(\operatorname{rk}_{I(T_{z}\mathcal{G}_{<h})}(\nabla_{z}\mathcal{G}_{h})\leq r.\) Assuming without loss of generality that \(A,B\) are at least as large as the constants in lemma 5.1, we apply it to get \(\operatorname{rk}_{I(\mathcal{G}_{<h})}(\mathcal{G}_{h})\leq Ar^{B},\) which is a contradiction when \(C,D\) are sufficiently large. **Lemma 6.3**.: _If the degree of \(\mathcal{G}\) and characteristic of \(\mathbf{k}\) are as in proposition 6.1, and \(A,B,l\in\mathbb{N},\) there exist constants \(C(\textbf{d},A,B,l),D(\textbf{d},A,B,l)\) such that if \(\mathcal{G}\) is \((C,D,t)\)-regular, then \(D_{l}\mathcal{G}\) is \((A,B,t)\)-regular._ Proof.: Let \(d\) be the maximal degree in \(\mathcal{G}\) and assume by induction that \(D_{l}\mathcal{G}_{<h}\) is \((A,B,t+(1+dl)m_{h})\)-regular. Set \(r=A(t+(1+dl)m_{h})^{B}\) and suppose, to get a contradiction, that \(\operatorname{rk}_{I(D_{l}\mathcal{G}_{<h})}(D_{l}\mathcal{G}_{h})\leq r.\) Then there is a non-trivial linear combination \[g=\sum_{j\in[m_{h}]}a_{j}g_{h,j}(v)+\sum_{j\in[m_{h}],0\leq k\leq d_{h}-1,\lambda\in[l]}b_{j,\lambda}^{k}g_{h,j}^{k}(w_{\lambda},v) \tag{2}\] with \(\operatorname{rk}_{I(D_{l}\mathcal{G}_{<h})}(g)\leq r.\) If the \(a_{j}\) are not all zero, then we can plug in \(w_{\lambda}=0\) for all \(\lambda\) and then equation (2) yields \(\operatorname{rk}_{I(\mathcal{G}_{<h})}(\mathcal{G}_{h})\leq r,\) a contradiction. 
If \(a_{j}=0\) for all \(j,\) there is some \(\lambda_{0}\in[l]\) for which \(b_{j,\lambda_{0}}^{k}\) are not all zero. We plug in \(w_{\lambda_{0}}=w\) and \(w_{\lambda}=0\) for all \(\lambda\neq\lambda_{0}.\) Thus \(g\) specializes to \[g^{\prime}(w,v)=\sum_{j\in[m_{h}],0\leq k\leq d_{h}-1}b_{j,\lambda_{0}}^{k}g_{ h,j}^{k}(w,v),\] and \(I(D_{l}\mathcal{G}_{<h})\) specializes to \(I(D\mathcal{G}_{<h}).\) Now choose \(0\leq k_{0}\leq d_{h}-1\) such that \(b_{j,\lambda_{0}}^{k_{0}}\) are not all zero and apply \(\nabla_{(0,v_{1})}\ldots\nabla_{(0,v_{k_{0}})}\nabla_{(w_{1},0)}\ldots\nabla_{( w_{d_{h}-k_{0}},0)}.\) This yields \(\operatorname{rk}_{I(\tilde{\mathcal{G}}_{<h})}(\sum_{j\in[m_{h}]}b_{j, \lambda_{0}}^{k_{0}}\tilde{g}_{j})\leq 2^{d_{h}}r.\) By lemma 6.2 this gives a contradiction when \(C,D\) are sufficiently large. **Lemma 6.4**.: _Suppose \(\mathcal{G}=(g_{1},\ldots,g_{s})\in\mathbf{k}[V]\) are homogeneous forms and \(\operatorname{rk}_{I(\mathcal{G})}(f)\leq r.\) Then_ \[\dim X(\mathcal{G})\cap S(f,\mathcal{G})\geq\dim X(\mathcal{G})-2r.\] Proof.: Write \(f(x)=\sum_{j=1}^{r}p_{j}(x)q_{j}(x)\mod(I(\mathcal{G})).\) The directional derivative satisfies \[\nabla_{v}f(x)=\sum_{j=1}^{r}p_{j}(x)\nabla_{v}q_{j}(x)+\nabla_{v}p_{j}(x)q_{j} (x)\mod(g_{i}(x),\nabla_{v}g_{i}(x)),\] so \(X(\mathcal{G})\cap(p_{j}=q_{j}=0)\subset X(\mathcal{G})\cap S(f,\mathcal{G}).\) Let \(X_{0}\subset X(\mathcal{G})\) be an irreducible component of dimension \(\dim X(\mathcal{G}).\) Then every irreducible component of the nonempty algebraic set \(X_{0}\cap(p_{j}=q_{j}=0)\) has dimension \(\geq\dim(X_{0})-2r.\) We need one more well-known lemma, which we prove for completeness. **Lemma 6.5**.: _Let \(\varphi:X\to W\) be a morphism of affine algebraic sets (not necessarily irreducible). Let \(W_{\eta}\) be the Zariski closure of_ \[\{w\in W:\dim\varphi^{-1}(w)\geq\dim X-\dim W+\eta\}.\] _Then_ \(\dim W_{\eta}\leq\dim W-\eta.\) Proof.: Without loss of generality we may assume that \(X\) is irreducible and \(\varphi\) is dominant. Let \(X_{\eta}=\varphi^{-1}(W_{\eta}).\) The map \(\psi=\varphi\upharpoonright_{X_{\eta}}:X_{\eta}\to W_{\eta}\) is dominant so for generic points \(w\in W_{\eta}\) we have \(\dim\varphi^{-1}(w)=\dim\psi^{-1}(w)=\dim X_{\eta}-\dim W_{\eta}\) (this is [20, Theorem 9.9], for example). Therefore, \[\dim X-\dim W+\eta\leq\dim\varphi^{-1}(w)=\dim X_{\eta}-\dim W_{\eta}\leq\dim X -\dim W_{\eta}.\] Rearranging, we get \(\dim W-\dim W_{\eta}\geq\eta.\) Proof of proposition 6.1.: By induction on the degree sequence, for generic \(w\in X(\mathcal{G}_{<h})^{l}\) outside an algebraic set of codimension \(t+lm_{h}\) we have that \(D_{l}\mathcal{G}_{<h}(w)\) is \((A,B,t+lm_{h}d_{h})\)-regular. By intersecting, this holds for \(w\in X(\mathcal{G})^{l}\) outside an algebraic set of codimension \(t.\) We can finish by showing that for \(w\in X(\mathcal{G}_{<h})^{l}\) outside an algebraic set of codimension \(t+lm_{h}\) we have \[\operatorname{rk}_{I(D_{l}\mathcal{G}_{<h}(w))}D_{l}\mathcal{G}_{h}(w)>A(t+lm _{h}d_{h})^{B}.\] Set \(r=A(t+lm_{h}d_{h})^{B}\) and let \[X:=X(D_{l}\mathcal{G}_{<h}),\ Z:=X\cap S_{v}(D_{l}\mathcal{G}),\ W:=X( \mathcal{G}_{<h})^{l}.\] If \(C,D\) are sufficiently large then we can apply theorem 3.5 to \(D_{l}\mathcal{G}_{h}\) on the tower \(D_{l}\mathcal{G}_{<h}\) to get \(\operatorname{codim}_{X}Z>3r.\) Let \(\varphi:X\to W\) be the projection on the \(w\) coordinates and write \(X(w),Z(w)\) for \(\varphi^{-1}(w),\varphi^{-1}(w)\cap Z\) respectively.
By corollary 3.4, \(X,W\) are irreducible complete intersections. For any \(w\in W,\) the principal ideal theorem yields \[\dim X(w) \geq\dim V-|D_{l}\mathcal{G}_{<h}(w)|=\dim V-(|D_{l}\mathcal{G}_{<h}|-l|\mathcal{G}_{<h}|)\] \[=\dim(V^{l}\oplus V)-|D_{l}\mathcal{G}_{<h}|-(\dim V^{l}-l|\mathcal{G}_{<h}|)\] \[=\dim X-\dim W.\] By lemma 6.5 the set of \(w\in W\) with \(\dim(Z(w))\geq\dim Z-\dim W+r\) is contained in an algebraic set \(W_{r}\) with \(\operatorname{codim}_{W}(W_{r})\geq r.\) For any \(w\in W\setminus W_{r}\) we have \[\dim Z(w)<\dim Z-\dim W+r\leq\dim X-\dim W-2r\leq\dim X(w)-2r.\] Note that \(Z(w)=S(D_{l}\mathcal{G}(w))\cap X(D_{l}\mathcal{G}_{<h}(w))\) so by lemma 6.4 we have \(\operatorname{rk}_{I(D_{l}\mathcal{G}_{<h}(w))}D_{l}\mathcal{G}_{h}(w)>r\) for these \(w\), completing the proof. ### Necessity of the degree bound in Proposition 6.1 Proposition 6.1 doesn't hold when \(\mathbf{k}\) has characteristic \(p\geq 3\) and the tower \(\mathcal{F}\) contains forms of degree \(p+1.\) For example, the form \(f(v)=\sum_{i=1}^{n}v_{i}^{p+1}\) has \(\operatorname{rk}(f)\geq n/2\) and \(f^{2}(w,v)=0.\) This limits our proof method and prevents us from obtaining theorem 1.2 for higher degrees. Now we turn to characteristic \(2.\) In degree \(3,\) the results of [15] imply \(\operatorname{rk}(f^{i})\geq\operatorname{rk}(f)/4\) for any \(0\leq i\leq 3.\) In degree \(5,\) Proposition 6.1 fails for the form \(g(v)=\sum_{i=1}^{n}v_{i}^{5}\) which has \(\operatorname{rk}(g)\geq n/2\) and \(g^{2}(w,v)=0.\) What happens in degree \(4\) appears to be an interesting question. **Question 6.6**.: Does there exist a function \(\Gamma\) such that whenever \(\mathbf{k}\) is an algebraically closed field of characteristic \(2\) and \(f\in\mathbf{k}[V]\) is a homogeneous polynomial of degree \(4,\) we have \[\operatorname{rk}(f)\leq\Gamma(\operatorname{rk}(f^{2}(w,v)))?\] ## 7. Completing the proof of theorems 3.5 and 3.3 We combine the results of the previous three sections to obtain: **Proposition 7.1**.: _Under the assumptions of theorem 3.5 there exist constants \(F(\boldsymbol{d},e),G(\boldsymbol{d},e)\) and collections of forms of positive degrees \(\mathcal{G}_{1},\ldots,\mathcal{G}_{e+1}\) and \(\mathcal{G}=(g_{1},\ldots,g_{\tau})\) such that_ 1. \(\mathcal{G}\) _depends only on_ \((x,y)\) _and is of degree_ \(<e,\)_ 2. \(\partial_{z}f\in I(\mathcal{G}_{i}\cup T_{z}\mathcal{F}\cup\mathcal{G})\) _for all_ \(i\in[e+1],\)_ 3. \(\cup_{i}\mathcal{G}_{i}\cup T_{z}\mathcal{F}\cup\mathcal{G}\) _is a regular sequence and_ 4. \(\tau\leq Ft^{G}.\)_ Proof.: By proposition 5.2 we have a subvariety \(Y\subset X(T_{z}\mathcal{F})\) of codimension \(t\) such that for generic points \(w_{0}\in Y\) there are homogeneous polynomials \(h_{1}^{w_{0}}(x,y),\ldots,h_{r}^{w_{0}}(x,y)\) of positive degrees \(<e\) with \[\partial_{z}f\in I((DT_{z}\mathcal{F})(w_{0}))+I(h_{1}^{w_{0}},\ldots,h_{r}^{ w_{0}}), \tag{3}\] where \(r=(e-1)t.\) The subvariety \(Y^{e+1}\subset X(T_{z}\mathcal{F})^{e+1}\) is of codimension \(t(e+1).\) Let \(A^{\prime}(\mathbf{d},e),B^{\prime}(\mathbf{d},e)\) be constants to be chosen later. By combining lemma 5.3 and proposition 6.1, there exist constants \(C(A^{\prime},B^{\prime},\mathbf{d},e),D(A^{\prime},B^{\prime},\mathbf{d},e)\) such that if \(\mathcal{F}\) is \((C,D,t)\)-regular then for a generic point \(w\in X(T_{z}\mathcal{F})^{e+1}\) outside a subvariety of codimension \(t(e+1)+1,\) the tower \((D_{e+1}T_{z}\mathcal{F})(w)\) is \((A^{\prime},B^{\prime},t)\)-regular.
Therefore we can choose \(w\in Y^{e+1}\) such that both (3) and our condition on the regularity of \((D_{e+1}T_{z}\mathcal{F})(w)\) hold. Set \(\mathcal{G}_{i}=(DT_{z}\mathcal{F})(w_{i})\setminus T_{z}\mathcal{F}\) and let \(\mathcal{G}^{\prime}=(h_{\beta}^{w_{\alpha}})_{\alpha\in[e+1],\beta\in[r]}.\) Note that the first two conditions of the proposition hold with \(\mathcal{G}^{\prime}\) in the role of \(\mathcal{G},\) but not necessarily the third. Let \(S=\mathbf{k}[\bar{x},\bar{y},\bar{z}]/I((D_{e+1}T_{z}\mathcal{F})(w))\) and let \(A,B\) be the constants of theorem 3.3 for the degree sequence of \((D_{e+1}T_{z}\mathcal{F})(w)\) followed by \((1,\ldots,e-1).\) Applying lemma 4.1 to the image of \(\mathcal{G}^{\prime}\) in the graded ring \(S\) we obtain a tower \(\mathcal{G}\) of size \(\leq Ft^{G}\) which is \((A,B,1)\)-regular, where \(F(A,B,e),G(A,B,e)\) are constants. Note that \(\mathcal{G}\) is still composed of functions of \((x,y)\) since functions of positive degree in \(z\) don't contribute to the low rank representations appearing in the process described by lemma 4.1. Now if \(A^{\prime}(\mathbf{d},e),B^{\prime}(\mathbf{d},e)\) are chosen sufficiently large then \((D_{e+1}T_{z}\mathcal{F})(w)\) is \((A,B,1+Ft^{G})\)-regular, so the tower obtained by adding \(\mathcal{G}\) above \((D_{e+1}T_{z}\mathcal{F})(w)\) is \((A,B,1)\)-regular and therefore a regular sequence by corollary 3.4. To finish the proof, we need the following lemma from commutative algebra. This follows by induction on \(l\) from the case of two ideals in [6, Exercise A3.17]. **Lemma 7.2**.: _Let \((I_{j})_{j\in[l]}\) be ideals in a commutative ring such that the union of their generators forms a regular sequence in any order. Then \(\cap_{j\in[l]}I_{j}=\prod_{j\in[l]}I_{j}.\)_ ### Completing the proof of theorem 3.3 As we mentioned after stating theorem 3.5, it implies theorem 3.3 for the same degree sequences. The remaining cases to prove for theorem 3.3 are when \(\operatorname{char}(\mathbf{k})=p\) and \(e=p+1\) or \(e=p+2.\) To prove them we will need a suitable version of lemma 5.1. **Lemma 7.3**.: _Suppose theorem 3.3 holds for smaller degree sequences. Then there exist constants \(A,B\) depending on the degrees such that if \(f,\mathcal{F}\) are homogeneous and \(\deg f\in\{p+1,p+2\}\), \(\mathcal{F}\) is \((A,B,t)\)-regular and \(\operatorname{rk}_{I(T\mathcal{F})}(\nabla_{z}f)\leq t,\) then we have \(\operatorname{rk}_{I(\mathcal{F})}(f)\leq At^{B}.\)_ Proof.: If \(p\) doesn't divide \(\deg f\) we can plug in \(z=x\) and get \(\operatorname{rk}_{I(\mathcal{F})}(f)\leq t.\) In the remaining case (\(p=2,\deg f=4\)) we follow the proof of lemma 5.1 to obtain \(\mathcal{G}=(g_{1}(x),\ldots,g_{r}(x))\) such that \(T\mathcal{F},\mathcal{G}\) is an \(R_{3}\)-sequence and \(r\leq At^{B}.\) By the same argument given there, \(f\) is reducible in \(\mathbf{k}[\bar{x}]/I(\mathcal{F},\mathcal{G}).\) Proof of theorem 3.3 when \(\deg(f)\in\{p+1,p+2\}\).: Inductively, we prove this first for degree \(p+1\) and then for degree \(p+2.\) The proof we gave for theorem 3.5 (in the case where there is only one variable appearing) yields \(\operatorname{rk}_{I(T\mathcal{F})}(\nabla_{z}f)\leq Ft^{G}.\) Then we finish by the above lemma. ## 8. Theorem 1.2 for finite fields Throughout this section \(\mathbf{k}\) will denote a finite field of characteristic \(p.\) Our goal is to prove the following result, from which theorem 1.2 follows.
**Theorem 8.1** (Regularity over the algebraic closure).: _Let \(A,B\) be constants and \(\boldsymbol{d}=(d_{1},\ldots,d_{h})\) a degree sequence with \(d_{i}\leq p\) for all \(i.\) There exist constants \(C(A,B,\boldsymbol{d}),D(A,B,\boldsymbol{d})\) such that if \(\mathcal{F}\) is a \((C,D,t)\)-regular tower of degree \(\boldsymbol{d}\) in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) then \(\mathcal{F}\) is \((A,B,t)\)-regular as a tower in \(\overline{\mathbf{k}}[x_{1},\ldots,x_{n}].\)_ Proof of theorem 1.2 for finite fields, assuming theorem 8.1.: Let \(A,B\) be the constants of theorem 3.3 for degree sequence \(\mathbf{d}=(1,2,\ldots,d),\) and let \(C(A,B,\mathbf{d}),D(A,B,\mathbf{d})\) be the constants of theorem 8.1. By lemma 4.1, there exists a \((C,D,t)\)-regular tower \(\mathcal{G}\) in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) with degrees \((1,\ldots,d)\) such that \((g_{1},\ldots,g_{s})\in I(\mathcal{G})\) and \(|\mathcal{G}|\leq F(s+t)^{G},\) where \(F(C,D),G(C,D)\) are constants. By theorem 8.1 the tower \(\mathcal{G}\) is \((A,B,t)\)-regular as a tower in \(\overline{\mathbf{k}}[x_{1},\ldots,x_{n}],\) which implies it is an \(R_{t}\)-sequence by corollary 3.4. The proof of theorem 8.1 will be, of course, by induction on the degree sequence. We now assume it holds for lower degree sequences than \(\mathbf{d}=(d_{1},\ldots,d_{h})\) and obtain some useful consequences. First, we have a version of lemma 5.1 for finite fields. **Lemma 8.2**.: _Assume theorem 8.1 holds for lower degree sequences than \((d_{1},\ldots,d_{h}).\) There exist constants \(A,B\) depending on the degrees such that if \(f\in\mathbf{k}[x_{1},\ldots,x_{n}]\) is homogeneous of degree \(d_{h},\) \(\mathcal{F}\) is an \((A,B,t)\)-regular tower in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) of degrees \((d_{1},\ldots,d_{h-1})\) and \(\operatorname{rk}_{I(T\mathcal{F})}(\nabla_{z}f)\leq t,\) then we have \(\operatorname{rk}_{I(\mathcal{F})}(f)\leq At^{B}.\)_ Proof.: If \(d_{h}<p\) then we can just plug in \(z=x\) and get \(\operatorname{rk}_{I(\mathcal{F})}(f)\leq t.\) Otherwise, as in the proof of lemma 5.1 we get a tower \(\mathcal{G}\) in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) of size \(|\mathcal{G}|\leq Ft^{G}\) and degree \((1,\ldots,p-1)\) such that \((\mathcal{F},\mathcal{G})\) is a \((C,D,3)\)-regular tower in \(\mathbf{k}[x_{1},\ldots,x_{n}]\) and \(\nabla_{z}f\in I(T\mathcal{F},\mathcal{G}).\) By theorem 8.1, \((\mathcal{F},\mathcal{G})\) is an \((A,B,3)\)-regular tower in \(\bar{\mathbf{k}}[x_{1},\ldots,x_{n}]\) and so by corollary 3.4, \((\mathcal{F},\mathcal{G})\) is an \(R_{3}\)-sequence. We claim that the image of \(f\) in \(R:=\mathbf{k}[x_{1},\ldots,x_{n}]/I(\mathcal{F},\mathcal{G})\) is reducible, which implies \(\operatorname{rk}_{I(\mathcal{F})}(f)\leq Ft^{G}+1\) as desired. This will be proved the same way as corollary 2.1. Suppose, to get a contradiction, that the image of \(f\) in \(R\) is irreducible. Then in particular \(R/(f)\) is reduced and so by [24, Tag 030U], \(\bar{\mathbf{k}}[x_{1},\ldots,x_{n}]/I(\mathcal{F},\mathcal{G},f)\) is also reduced. Then the algebraic set \(X:=V(\mathcal{F},\mathcal{G},f)\) has \(\operatorname{codim}_{\mathbb{A}^{n}}X=|\mathcal{F}|+|\mathcal{G}|+1\) and ideal \(I(X)=I(\mathcal{F},\mathcal{G},f)\). The derivative \(\nabla_{z}f(x)\) vanishes for any \(x\in X\) and \(z\in T_{x}X,\) so the codimension of the tangent space is always \(\leq|\mathcal{F}|+|\mathcal{G}|\) which contradicts the fact that \(X\) must contain smooth points.
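The hypothesis \(d_{i}\leq p\) in theorem 8.1 mirrors the obstruction discussed at the end of section 6; as a quick illustrative check (not from the paper), for \(f(v)=\sum_{i}v_{i}^{p+1}\) the binomial theorem gives

\[f^{2}(w,v)=\binom{p+1}{2}\sum_{i}w_{i}^{p-1}v_{i}^{2},\qquad\binom{p+1}{2}=\frac{p(p+1)}{2}\equiv 0\pmod{p}\ \text{for odd}\ p,\]

so the quadratic Taylor component of a high-rank form can vanish identically as soon as the degree exceeds the characteristic.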
### Reduction to towers of multi-linear forms Recall the tower \(\tilde{\mathcal{F}}\) of multi-linear forms obtained from \(\mathcal{F}\) which we defined in section 6. The next lemma says that if \(\mathcal{F}\) is sufficiently regular then so is \(\tilde{\mathcal{F}}\). **Lemma 8.3**.: _Assume theorem 8.1 holds for lower degree sequences than \((d_{1},\ldots,d_{h})\) and let \(A,B\) be constants. There exist constants \(C(\textbf{d},A,B),D(\textbf{d},A,B)\) such that if \(\mathcal{F}\) is a \((C,D,t)\)-regular tower in \(\mathbf{k}[x_{1},\ldots,x_{n}]\), then \(\tilde{\mathcal{F}}\) is an \((A,B,t)\)-regular tower over \(\mathbf{k}.\)_ Proof.: This follows from lemma 8.2 the same way that lemma 6.2 follows from lemma 5.1. The converse over \(\bar{\mathbf{k}}\) (and over any field) is also true and is proved easily. **Lemma 8.4**.: _Suppose \(\mathcal{F}\) is a tower in \(\bar{\mathbf{k}}[x_{1},\ldots,x_{n}]\) with degrees \((d_{1},\ldots,d_{h})\) and \(d=\max(d_{1},\ldots,d_{h}).\) If \(\tilde{\mathcal{F}}\) is \((2^{d}A,B,t)\)-regular then \(\mathcal{F}\) is \((A,B,t)\)-regular._ Proof.: Suppose \(\operatorname{rk}_{I(\mathcal{F}_{<i})}(\mathcal{F}_{i})\leq r\) for some \(i,r.\) Applying \(\nabla_{v_{1}}\ldots\nabla_{v_{d_{i}}},\) we get \(\operatorname{rk}_{I(\tilde{\mathcal{F}}_{<i})}(\tilde{\mathcal{F}}_{i})\leq 2 ^{d_{i}}r.\) Given these two lemmas, theorem 8.1 will follow from a corresponding theorem for towers of multi-linear forms. To define these multi-linear towers we set up some notation. Let \(V_{1},\ldots,V_{d}\) be finite dimensional vector spaces over \(\mathbf{k}.\) For \(I\subset[d]\) set \(V^{I}:=\prod_{i\in I}V_{i}.\) **Definition 8.5**.: A tower \(\mathcal{G}\) in \(\mathbf{k}[V^{[d]}]\) is a multi-linear tower if for all \(g_{i,j}\in\mathcal{G}_{i}\) there exists some \(I(g_{i,j})\in{[d]\choose d_{i}}\) such that \(g_{i,j}\) only depends on the \(V^{I}\) coordinates and \(g_{i,j}:V^{I}\to\mathbf{k}\) is multi-linear. For \(I\in{[d]\choose d_{i}}\) we write \(\mathcal{G}_{i}^{I}\) for the collection of forms \(g_{i,j}\) with \(I(g_{i,j})=I.\) For example \(\tilde{\mathcal{F}}\) is a multi-linear tower. We can now state the version of theorem 8.1 for multi-linear towers. Note that now there is no restriction on the characteristic. **Theorem 8.6**.: _Let \(A,B\) be constants and \(\textbf{d}=(d_{1},\ldots,d_{h})\) a degree sequence. There exist constants \(C(A,B,\textbf{d}),D(A,B,\textbf{d})\) such that if \(\mathcal{G}\) is a \((C,D,t)\)-regular multi-linear tower of degree **d** over \(\mathbf{k}\) then \(\mathcal{G}\) is \((A,B,t)\)-regular as a tower over \(\bar{\mathbf{k}}.\)_ Proof of theorem 8.1 assuming theorem 8.6.: By lemma 8.3, \(\tilde{\mathcal{F}}\) is regular over \(\mathbf{k}.\) By theorem 8.6 it's regular over \(\bar{\mathbf{k}}.\) By lemma 8.4 we get that \(\mathcal{F}\) is regular over \(\bar{\mathbf{k}}.\) ### Universality and proof of theorem 8.6 The proof of theorem 8.6 will be based on a universality result for sufficiently regular towers of multi-linear forms. **Theorem 8.7** (Universality of regular towers).: _Let \(\textbf{d}=(d_{1},\ldots,d_{h})\) be a degree sequence with \(d=\max_{i\in[h]}(d_{i})\) and let \(N=N_{1}>\ldots>N_{h}\) be positive integers. There exist constants \(A(\textbf{d}),B(\textbf{d})\) such that the following holds. 
Suppose \(\mathcal{G},\mathcal{H}\) are multi-linear towers in \(\mathbf{k}[V^{[d]}]\) of degree **d** and that for all \(i\in[h]\) and \(I\in\binom{[d]}{d_{i}}\) we have \(|\mathcal{G}_{i}^{I}|=|\mathcal{H}_{i}^{I}|.\) Suppose also that for all \(i\in[h]\) we have \(\operatorname{rk}_{I(\mathcal{G}_{<i})}(\mathcal{G}_{i})>A(N_{i}(m_{i}+\ldots+ m_{h}))^{B}.\) Then for all \(j\in[d]\) there exist linear maps \(T_{j}:\mathbf{k}^{N}\to V_{j}\) such that for all \(i\in[h]\) and \(I\in\binom{[d]}{d_{i}}\) we have_ \[\mathcal{G}_{i}^{I}\circ(T_{1}\times\ldots\times T_{d})=\mathcal{H}_{i}^{I} \mod(x_{k}(l))_{k\in[d],l>N_{i}}.\] To prove this, we need to look more closely at the collection of equations that we'd like the linear maps \(T_{i}\) to satisfy. Assume without loss of generality that \(V_{i}=\mathbf{k}^{n_{i}}.\) Then linear maps \(T_{i}:\mathbf{k}^{N}\to V_{i}\) are of the form \(T_{i}(x)=A_{i}\cdot x\) where \(A_{i}\in\mathcal{M}_{n_{i}\times N}(\mathbf{k}).\) For \(g_{i,j}\in\mathcal{G}_{i}^{I}\) and a choice of indices \(l\in(N_{i})^{I},\) the coefficient of \(\prod_{k\in I}x_{k}(l_{k})\) in \(g_{i,j}\circ(T_{1}\times\ldots\times T_{d})\) is given by a multi-linear form \[C_{i,j}^{l}:\prod_{k\in I}\mathcal{M}_{n_{k}\times N}(\mathbf{k})\to\mathbf{k},\] namely \(C_{i,j}^{l}((A_{k})_{k\in I})=(g_{i,j}\circ((A_{k})_{k\in I}))\left((e_{l_{k}} )_{k\in I}\right)\), where \(e_{j}\) are the standard basis vectors. This gives us a multi-linear tower \(\mathcal{C}\) in \(\mathbf{k}[\prod_{i\in[d]}\mathcal{M}_{n_{i}\times N}(\mathbf{k})]\) with the layer \(\mathcal{C}_{i}\) composed of \(C_{i,j}^{l}\) for all \(j\in[m_{i}]\) and relevant choices of \(l.\) **Claim 8.8**.: For all \(i\in[h]\) we have \(|\mathcal{C}_{i}|=m_{i}(N_{i})^{d_{i}}\) and \(\operatorname{rk}_{I(\mathcal{C}_{<i})}(\mathcal{C}_{i})\geq\operatorname{rk} _{I(\mathcal{G}_{<i})}(\mathcal{G}_{i}).\) Proof.: The statement about \(|\mathcal{C}_{i}|\) follows easily from our definitions. Now suppose, to get a contradiction, that the rank statement is false. Let \(r_{i}=\operatorname{rk}_{I(\mathcal{G}_{<i})}(\mathcal{G}_{i}).\) Then we have a non-trivial linear combination \[C=\sum_{j,l}\lambda_{j}^{l}C_{i,j}^{l}\] with \(\operatorname{rk}_{I(\mathcal{C}_{<i})}(C)<r_{i}.\) Then there is some \(I_{0},l^{0}\) such that the coefficients \(\lambda_{j}^{l^{0}}\) are not all zero. Now let \(x_{k}\) be the coordinates on \(V_{k}\) and make the linear substitution \[A_{k}(s,t)=\mathbb{1}_{t=(l^{0})_{k}}x_{k}(s).\] Under this linear substitution, \(\operatorname{rk}_{I(\mathcal{C}_{<i})}(C)<r_{i}\) implies \[\operatorname{rk}_{I(\mathcal{G}_{<i})}(\sum_{j,g_{i,j}\in\mathcal{G}_{i}^{I_{ 0}}}\lambda_{j}^{l^{0}}\,g_{i,j})<r_{i}\] which is a contradiction. **Corollary 8.9**.: _Given constants \(C,D\) there exist constants \(A(C,D,\textbf{d}),B(C,D,\textbf{d})\) such that if \(\mathcal{G}\) satisfies the hypothesis on rank with constants \(A,B\) then \(\mathcal{C}\) is \((C,D,t)\)-regular._ The desired equalities \[\mathcal{G}_{i}^{I}\circ(T_{1}\times\ldots\times T_{d})=\mathcal{H}_{i}^{I} \mod(x_{k}(l))_{k\in[d],l>N_{i}}\] are in fact a system of equations of the form \[C_{i,j}^{l}(A_{1},\ldots,A_{d})=\alpha_{i,j}^{l}, \tag{4}\] where \(\alpha_{i,j}^{l}\) are the coefficients of \(h_{i,j}.\) The proof of theorem 8.7 is then completed by the following result [16, lemma 3.2].
**Theorem 8.10**.: _There exist constants \(C(\textbf{d}),D(\textbf{d})\) such that if \(\mathcal{G}\) is a \((C,D,1)\)-regular multi-linear tower in \(\mathbf{k}[V]\) then for any choice of scalars \(\alpha\in\prod_{i,j}\mathbf{k},\) there exists a choice of \(x\in V(\mathbf{k})\) with \(g_{i,j}(x)=\alpha_{i,j}.\) In other words, the map \(\mathcal{G}:V(\mathbf{k})\rightarrow\prod_{i,j}\mathbf{k}\) is surjective._ Now we are ready to prove theorem 8.6. Proof.: Let \(r_{j}=A(|\mathcal{G}_{\geq j}|+t)^{B}+1,\) and apply theorem 8.7 with \(N_{i}=\sum_{j\geq i}2m_{j}r_{j}.\) Note that if \(\mathcal{G}\) is \((C,D,t)\)-regular for sufficiently large constants \(C(A,B,\textbf{d}),D(A,B,\textbf{d})\) then it satisfies the hypotheses of theorem 8.7 for these values of \(N_{i}.\) Now we choose the tower \(\mathcal{H}\) that we obtain after linear substitution. First, identify \(\mathbf{k}^{N}\cong\prod_{i\in[h]}\mathcal{M}_{m_{i}\times 2r_{i}}(\mathbf{k}).\) Thus for \(k\in[d]\) and \(i\in[h]\), \(x_{k}(i)\in\mathcal{M}_{m_{i}\times 2r_{i}}(\mathbf{k}).\) For each \(g_{i,j}\in\mathcal{G}_{i}^{I},\) we choose \(h_{i,j}\in\mathcal{H}_{i}^{I}\) as follows \[h_{i,j}((x_{k})_{k\in I})=\sum_{s\in[2r_{i}]}\prod_{k\in I}(x_{k}(i))_{j,s}.\] By theorem 8.7, for all \(j\in[d]\) there exist linear maps \(T_{j}:\mathbf{k}^{N}\to V_{j}\) such that for all \(i\in[h]\) and \(I\in\binom{[d]}{d_{i}}\) we have \[g_{i,j}\circ(T_{1}\times\ldots\times T_{d})=h_{i,j}\mod(x_{k}(l))_{k\in[d],l<i}.\] Let \(T=T_{1}\times\ldots\times T_{d}.\) Suppose, to get a contradiction, that over \(\bar{\mathbf{k}}\) we have \(\operatorname{rk}_{I(\mathcal{G}_{<i})}(\mathcal{G}_{i})<r_{i}\) for some \(i\in[h].\) Then, of course, \(\operatorname{rk}_{I(\mathcal{G}_{<i}\circ T)}(\mathcal{G}_{i}\circ T)<r_{i}.\) Restricting to the subspace \(\{x_{k}(l)=0:k\in[d],l<i\},\) we obtain \(\operatorname{rk}(\mathcal{H}_{i})<r_{i}.\) So there exists a non-trivial linear combination with \(\operatorname{rk}(\sum_{j}\lambda_{j}h_{i,j})<r_{i}.\) Suppose without loss of generality that \(\lambda_{1}\neq 0\) and restrict further to the subspace \(\{x_{k}(i)_{j,s}=0:k\in[d],j>1\}\) to get \(\operatorname{rk}(h_{i,1})<r_{i}.\) By lemma 6.4 this implies that \(S(h_{i,1})\) has codimension \(<2r_{i},\) but this is easily seen to be a union of subspaces of codimension \(2r_{i},\) contradiction.
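As a closing illustration of the coefficient forms \(C_{i,j}^{l}\) used above (a toy example, not from the paper): take \(d=2,\) a single bilinear form \(g(u,v)=\sum_{s=1}^{2}u_{s}v_{s}\) on \(\mathbf{k}^{2}\times\mathbf{k}^{2}\) and linear maps \(T_{k}(x)=A_{k}x\) with \(A_{k}\in\mathcal{M}_{2\times N}(\mathbf{k}).\) Then

\[g(A_{1}x,A_{2}y)=\sum_{l_{1},l_{2}\in[N]}\Big(\sum_{s=1}^{2}(A_{1})_{s,l_{1}}(A_{2})_{s,l_{2}}\Big)x(l_{1})y(l_{2}),\]

so \(C^{(l_{1},l_{2})}(A_{1},A_{2})=\sum_{s}(A_{1})_{s,l_{1}}(A_{2})_{s,l_{2}}=g(A_{1}e_{l_{1}},A_{2}e_{l_{2}}),\) a bilinear form in the matrix entries, matching the general formula for \(C_{i,j}^{l}\) given in the proof of theorem 8.7.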
2309.14088
REPA: Client Clustering without Training and Data Labels for Improved Federated Learning in Non-IID Settings
Clustering clients into groups that exhibit relatively homogeneous data distributions represents one of the major means of improving the performance of federated learning (FL) in non-independent and identically distributed (non-IID) data settings. Yet, the applicability of current state-of-the-art approaches remains limited as these approaches cluster clients based on information, such as the evolution of local model parameters, that is only obtainable through actual on-client training. On the other hand, there is a need to make FL models available to clients who are not able to perform the training themselves, as they do not have the processing capabilities required for training, or simply want to use the model without participating in the training. Furthermore, the existing alternative approaches that avert the training still require that individual clients have a sufficient amount of labeled data upon which the clustering is based, essentially assuming that each client is a data annotator. In this paper, we present REPA, an approach to client clustering in non-IID FL settings that requires neither training nor labeled data collection. REPA uses a novel supervised autoencoder-based method to create embeddings that profile a client's underlying data-generating processes without exposing the data to the server and without requiring local training. Our experimental analysis over three different datasets demonstrates that REPA delivers state-of-the-art model performance while expanding the applicability of cluster-based FL to previously uncovered use cases.
Boris Radovič, Veljko Pejović
2023-09-25T12:30:43Z
http://arxiv.org/abs/2309.14088v1
REPA: Client Clustering without Training and Data Labels for Improved Federated Learning in Non-IID Settings ###### Abstract Clustering clients into groups that exhibit relatively homogeneous data distributions represents one of the major means of improving the performance of federated learning (FL) in non-independent and identically distributed (non-IID) data settings. Yet, the applicability of current state-of-the-art approaches remains limited as these approaches cluster clients based on information, such as the evolution of local model parameters, that is only obtainable through actual on-client training. On the other hand, there is a need to make FL models available to clients who are not able to perform the training themselves, as they do not have the processing capabilities required for training, or simply want to use the model without participating in the training. Furthermore, the existing alternative approaches that avert the training still require that individual clients have a sufficient amount of labeled data upon which the clustering is based, essentially assuming that each client is a data annotator. In this paper, we present REPA, an approach to client clustering in non-IID FL settings that requires neither training nor labeled data collection. REPA uses a novel supervised autoencoder-based method to create embeddings that profile a client's underlying data-generating processes without exposing the data to the server and without requiring local training. Our experimental analysis over three different datasets demonstrates that REPA delivers state-of-the-art model performance while expanding the applicability of cluster-based FL to previously uncovered use cases. ## Introduction Federated learning (FL) [18] is a distributed machine learning paradigm in which a model is trained locally over a set of clients, with only the model updates being observed and aggregated by a server. FL ensures that the clients' data remain on the devices and offloads the training burden from the server, thus representing an important step towards decentralized AI. Avoiding aggregating the data on the server to ensure privacy, on the other hand, hinders the model convergence in cases of diverging data distributions across clients that participate in FL. The problem of non-independent and identically distributed (non-IID) data is known to hamper the training convergence and the accuracy of the final model [10]. Consequently, approaches have been proposed to go beyond FedAvg [17] - a commonly used simple aggregation of local model updates. Among these approaches, of particular interest are those that rely on grouping clients into clusters exhibiting relatively homogeneous data distributions, such as [14, 15]. After clustering the clients, the server can evolve different (flavors of) models for different clusters of clients. Such ensembles of models can improve convergence and inference accuracy in non-IID settings, while still ensuring that the clients' data remains fully private. Unfortunately, real-world applicability of clustering-based solutions to FL in non-IID settings remains limited, since most clustering methods rely on information stemming from local training. This information, such as the direction in which the model parameters are evolving during local training, is only obtainable by clients who have sufficient computational and energy resources to conduct on-device training.
In real-world applications, however, FL-trained models will often be used by clients who may never get a chance to participate in the training themselves. For instance, Android's predictive on-screen keyboard - Gboard - is trained in a federated manner, yet in each round, only a few hundred high-end smartphones connected to a charger participate in the training [1]. Consequently, comprehensive clustering that would allow personalized model deployment across billions of Android users remains infeasible if training is mandated for each client. While alternative approaches that alleviate the need for training have appeared [1, 13], they still require that each client collects and labels a substantial amount of data upon which the clustering is performed. Such a requirement implies that each client has to serve as a data annotator, even if only interested in using, not necessarily training, the model. In this paper, we present REPA, a novel method for client clustering in non-IID data settings. Unlike the existing solutions, REPA brings benefits of personalized FL modeling to a wide range of clients, including those who have no labeled data and those who cannot train, either temporarily, e.g. due to insufficient battery power, or at all, e.g. due to poor hardware capabilities. To enable this, REPA develops an encoder to embed the client's unlabeled data into a latent space. The clustering is then performed on profiles of the embedded data. We show that statistics calculated over the generated data embeddings reflect the actual data distribution sufficiently well to allow robust clustering. Moreover, experimenting with different datasets we demonstrate that REPA matches the state-of-the-art inference in FL while making customized models available to a wide range of training-free clients. Specifically, the contributions of this paper include: * We devise a supervised autoencoder-based embedder through which we profile the data without the need for training and without the need for labeled data; * We propose a lightweight FL client clustering scheme that harnesses statistical profiles of the embedding distributions. Neither the data nor the actual data distributions need to be sent to the server; * We compare the performance of REPA against the state-of-the-art scheme for client clustering that relies on on-client training, and demonstrate that our approach delivers comparable performance while significantly expanding the usability of FL-trained models. ## Related Work The stochastic gradient used for FL training is an unbiased estimate of the full gradient only if gradients produced by individual clients are all calculated over data samples originating from the equivalent data-generating processes (DGPs) [22]. In practice, however, DGPs may vary across clients, for instance, due to the DGPs' tight relationship to individual human behavior, including one's movement patterns, keyboard typing habits, voice properties, and other factors. In case of significant non-IIDness, straightforward aggregation of local gradients at the server may lead to drastically underperforming FL models, with studies demonstrating a 55% reduction in model accuracy on the keyword spotting task and a 51% accuracy loss on the CIFAR-10 image recognition task [22]. Individual clients "pulling" the global model in different directions prevent model convergence in FL. To counter this, Li et al. propose penalizing the deviation of local models from the global model through L2 regularization of model weights [10].
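For illustration, such a proximal penalty (formalized later in Equation (8)) can be sketched as follows; this is a minimal PyTorch sketch with hypothetical names, not the cited work's implementation:

```python
# A minimal sketch (assuming PyTorch) of an L2 proximal penalty of this kind:
# the local loss is augmented with (mu / 2) * ||theta_i - theta||_2^2, pulling
# the local weights back toward the global model during on-client training.
import torch

def proximal_loss(base_loss, local_model, global_params, mu=0.01):
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(local_model.parameters(), global_params))
    return base_loss + 0.5 * mu * prox
```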
Alternatively, contrastive learning can be harnessed to pull the embeddings of data instances belonging to the same classes close to each other in the embedding space, and the embeddings of data instances belonging to different classes away from each other [23]. This is done across all clients, who are orchestrated to align these embeddings with class prototypes calculated at the global level. However, even if the above solutions manage to improve the convergence, a single global model may not perform well across the range of clients that train and/or use it. In such cases, clustering clients into groups exhibiting similar DGPs might be more appropriate, as it would allow for multiple model flavors to be trained - one for each cluster of clients. Nevertheless, such clustering is challenging in FL settings, as it has to satisfy what appear to be conflicting requirements: reflect individual clients' data distributions and maintain the clients' data privacy. Consequently, clustering solutions have appeared that rely on data that is an inherent part of federated training. The approach presented in [14], for example, first trains the joint model \(\theta^{T}\) on all the available clients up to round \(T\) and then computes the difference between the joint model and each locally evolved model \(\Delta\theta_{i}=\theta^{T}-\theta_{i}^{T+1}\) for every client. This vector is then used for hierarchically clustering the clients. Similarly, FeSEM [10] randomly initializes multiple models and iteratively assigns clients to a cluster whose model parameters are nearest to the client's locally-tuned model according to a certain distance measure. Other algorithms leverage information directly related to the gradient updates to cluster the clients as well [11, 13]. The above approaches rely on information accessible only to clients capable of training a model. Consequently, a potentially large number of clients unable to perform this action cannot be assigned to a cluster and, as a result, do not obtain the corresponding model flavor. Iterative Federated Clustering Algorithm [15] avoids this training requirement by letting the clients determine which cluster they belong to themselves. To do so, each client calculates the value of a certain loss function over its local dataset for all the model flavors and selects the model flavor that results in the lowest loss. For calculating the loss, however, this approach requires that each client collects a labeled dataset. Obtaining data labels is known to be expensive in many mobile sensing applications and avoiding this restriction would greatly improve the solution generalizability. To the best of our knowledge, the only clustering solution that requires neither labeled data nor on-device training is [16]. The approach, however, clusters individual data points, not individual clients. Embeddings of data points obtained through an autoencoder are clustered, and a separate model is trained for every cluster. Running multiple instances of FedAvg training in parallel across clients is feasible as the algorithm targets cross-silo FL settings, where the training is performed by a small number of relatively reliable clients with large datasets, for instance, hospitals. Nevertheless, the approach is unlikely to be suitable for resource-constrained devices, such as smartphones, and dynamic ubiquitous computing settings. 
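For reference, the weight-difference embedding \(\Delta\theta_{i}=\theta^{T}-\theta_{i}^{T+1}\) used by such training-based approaches can be sketched as follows (a minimal PyTorch sketch with hypothetical names; it presupposes exactly the labeled data and on-device training discussed above):

```python
# A minimal sketch (assuming PyTorch; names are ours) of the weight-difference
# embedding: fine-tune the global model locally for a few epochs, then use the
# flattened parameter difference as the client's embedding vector.
import copy
import torch

def wd_embedding(global_model, local_loader, loss_fn, epochs=1, lr=0.01):
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):                  # number of local fine-tuning epochs
        for x, y in local_loader:            # requires labeled local data
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    flat = lambda m: torch.cat([p.detach().flatten() for p in m.parameters()])
    return flat(global_model) - flat(local)  # e_i = theta^T - theta_i^{T+1}
```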
## Preliminaries and General Framework for Clustered FL The related work analysis from the previous section reveals no unified framework for considering clustering FL clients in non-IID settings. Such a framework is necessary should we wish to ensure that competing clustering solutions are evaluated on the same terms. Thus, in this section, we introduce a general framework for clustered FL. We assume that the set of clients \(C\) can be partitioned as follows: * \(C_{t}\subseteq C\) _training clients_, which are able to train a model on their local datasets; * \(C_{h}\subset C\) _holdout clients_, which wish to use an FL-trained model, yet do not participate in model training, either because of the lack of local labeled datasets, insufficient computing capabilities, or other reasons; and that \(C_{t}\cup C_{h}=C\) and \(C_{t}\cap C_{h}=\emptyset\). Further, we assume that each client \(i\in C\) possesses a (not necessarily labeled) dataset \(D_{i}\) generated by sampling data from the single Data-Generating Process (DGP) \(\gamma_{i}\) associated with the client. That is, \[D_{i}=\{X_{i,j},y_{i,j}\}_{j=1}^{|D_{i}|}\sim_{i.i.d.}\gamma_{i}. \tag{1}\] The concept of DGP is abstract, as it represents an unknown stochastic process that encompasses all factors influencing the collected data. From a probabilistic perspective, any DGP \(\gamma_{i}\) is characterized by a distribution \(p_{i}(x,y)\), which may be rewritten as \(p_{i}(x)\cdot p_{i}(y|x)\) or \(p_{i}(y)\cdot p_{i}(x|y)\)1. If any of these probabilities differ between two DGPs, we say that the data generated by the two DGPs is non-IID. In particular, the non-IIDness types may be characterized as follows [10]: Footnote 1: For simplicity we use the shorthand notation \(p(x)=p_{X}(x)\), \(p_{X|Y}(x|y)=p(x|y)\), etc. * _Concept drift_, when \(p_{i}(x|y)\neq p_{j}(x|y)\); * _Concept shift_, when \(p_{i}(y|x)\neq p_{j}(y|x)\); * _Label skew_, when \(p_{i}(y)\neq p_{j}(y)\); * _Feature skew_, when \(p_{i}(x)\neq p_{j}(x)\). The complexity of DGPs prevents their straightforward use in client clustering. Thus, we introduce the notion of _client embedding_ \(e_{i}=F(\gamma_{i})\), a semantically rich vector that summarizes the properties of a client \(i\)'s DGP \(\gamma_{i}\). Since these properties often remain unknown, clients need to compute their embeddings by leveraging solely their local dataset \(D_{i}\), the data sent from the server \(S\), and possibly other contextual information \(\kappa_{i}\) they have at their disposal: \[e_{i}=F(\gamma_{i})\approx f(D_{i},S,\kappa_{i}) \tag{2}\] The goal of clustering is to assign clients with similar DGPs to the same cluster so that a personalized model can be grown within each cluster. The clustering can be performed directly on embeddings, as these embeddings reflect the DGPs. Since holdout clients may request the model after the training has been completed, the algorithm used for clustering must be able to handle embeddings that were not seen during the training. Thus, the K-Means clustering algorithm [13] is a viable choice, while DBSCAN [12] is not. To summarize, by clustering the clients we transform the training task from training a single model \(\theta\) in conditions of data heterogeneity to the training of multiple models \(\theta_{k}\), each in a more homogeneous environment.
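Before formalizing the per-cluster objective, the clustering step just described can be sketched as follows (a scikit-learn sketch with placeholder names; the paper does not prescribe an implementation):

```python
# A minimal sketch (scikit-learn; illustrative) of the clustering step: fit
# K-Means on the training clients' embeddings, then map any client -- including
# holdout clients unseen during fitting -- to its cluster via predict().
import numpy as np
from sklearn.cluster import KMeans

def fit_clusters(train_embeddings, n_clusters):
    return KMeans(n_clusters=n_clusters, n_init=10).fit(np.stack(train_embeddings))

def alpha(kmeans, embedding):
    # the assignment function alpha(i) from the text
    return int(kmeans.predict(np.asarray(embedding).reshape(1, -1))[0])
```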
Consequently, instead of minimizing the loss of a single global model across all clients, the training task boils down to training one model \(\theta_{k}\) for every cluster of clients: \[\theta_{k}^{*}=\arg\min_{\theta\in\mathbb{R}^{d}}\ \frac{1}{\sum_{i\in C} \mathds{1}_{\alpha(i)=k}}\sum_{i=1}^{|C|}\mathds{1}_{\alpha(i)=k}\cdot L_{i}( \theta), \tag{3}\] where \(L_{i}(\theta)=\mathbb{E}_{(x,y)\sim\gamma_{i}}[l(\theta,x;y)]\) is the expected loss over client \(i\)'s DGP of the model parametrized with \(\theta\), \(\mathds{1}_{\alpha(i)=k}\) is the indicator function taking value \(1\) if \(\alpha(i)=k\) else \(0\), and \(\alpha\) is a function that assigns every client to a cluster. The function \(\alpha\) logically consists of two steps: first, it computes the client embeddings of every available client and second, it uses such client embeddings to fit a clustering model and hence partition the clients. After clients are clustered, training can proceed in a traditional FL way, i.e. at each server epoch \(t\) every client \(i\) selected for training receives the cluster model \(\theta_{\alpha(i)}^{t}\), fine-tunes it so as to obtain the model \(\theta_{\alpha(i),i}^{t+1}\), and returns it to the server. The latter then computes the new version of the cluster model as: \[\theta_{k}^{t+1}=W\Big{(}\bigcup_{i}\big{\{}(\theta_{\alpha(i),i}^{t+1},\epsilon_{i}) \mid\alpha(i)=k\big{\}}\Big{)}, \tag{4}\] where \(W\) is the weight aggregation function and \(\epsilon_{i}\) is the meta-information sent by the clients, e.g. the client's training dataset size. When using the FedAvg algorithm, the function \(W\) simply computes the weighted average of the received model parameters. ### REPA We now present REPA, a novel algorithm for computing client embeddings that _does not involve any training_ on the client's datasets and that _can also be used for computing the embeddings of clients that do not possess a labeled dataset_, i.e. whose \(y_{i,j}\) in Equation (1) are unknown. We first briefly discuss the rationale behind REPA. Aiming to devise an algorithm that may be used for computing embeddings of clients who do not possess a labeled dataset, we necessarily need to restrict ourselves to a set \(\{X_{i,j}\}_{j=1}^{|D_{i}|}\). Such a set is, from a probabilistic perspective, a collection of random variables following the distribution \(p_{i}(x)\). Note that \(p_{i}(x)\), when mapped with a function \(\phi\) to another space, induces a probability distribution over that space2. Although \(p_{i}(x)\) is unknown, similar to the other distributions in the DGP, the available data forms an empirical distribution which, as the number of samples \(|D_{i}|\) increases, increasingly well approximates the true underlying distribution - this is formalized by the limit theorems. Footnote 2: This holds only if the function \(\phi\) is Borel measurable. Input data points might be of very high dimensionality and not semantically rich enough; thus, our function \(\phi\) harnesses deep learning to map input data to a lower dimensional and semantically richer embedding space \(\mathbb{R}^{E}\). The empirical distribution of the original DGP, when mapped to the embedding space with a Borel measurable function, induces an empirical distribution over the embedding space. We can summarize the properties of the embedding space by computing a number of statistics and concatenating them into a single vector with a function \(\mathbf{S}\).
Such statistics might include the mean vector, the cross-covariance matrix, the cross-correlation matrix, the principal components, or any statistics over the marginal distributions, e.g. the skewness, the kurtosis, and quantiles. To summarize, our hypothesis, formalized in Equation (5), is that statistics computed over the embedding space reasonably well reflect the DGPs. As we cannot say a priori which statistics best summarize the properties of the unknown embedding space, we identify such statistics experimentally later in the paper. \[e_{i}=F(\gamma_{i})\approx f(D_{i},S,\kappa_{i})\stackrel{{ \text{REPA}}}{{=}}\mathbf{S}(\phi(\{X_{i,j}\}_{j=1}^{|D_{i}|})) \tag{5}\] The function \(\phi\) used to map the individual data points \(X_{i,j}\) to the embedding space might be a pre-trained encoder network or the encoder part of a network trained with FL. There are various ways to train such an encoder. For instance, the encoder may be obtained through a self-supervised task, with clients jointly training an autoencoder in an FL setting; or it may be trained in a fully supervised manner, with clients training the target inference model for a predefined number of epochs, after which the lower layers of the network are used for mapping the input data points to the embedding space. The positive side of the former approach is that the encoder would not be tied to a particular classification or regression task, while the latter brings the benefit of the embeddings reflecting both the distribution over the target variables and the properties of the input data. Another benefit of using the target inference model is the fact that there would be no need for training any additional model. In REPA, however, we also tested the appropriateness of an intermediate path, i.e. the _supervised autoencoder_ Le, Patterson, and White (2018). This model architecture is an example of the multi-loss training paradigm in which multiple losses are optimized at the same time - in our case, the classification or regression loss, and the reconstruction loss. The supervised autoencoder architecture consists therefore of _i)_ an encoder network, which takes the input data points and produces the embeddings, _ii)_ a classification or regression head, which takes the embeddings as produced by the encoder and uses them for predicting the target variables, and _iii)_ a decoder head, which takes the same embeddings and aims to reconstruct the input data points. The rationale behind this approach is that by introducing the reconstruction term, the embedding space is expected to better reflect the properties of the underlying features of the data points. ## Experimental Analysis The goal of our experimental analysis is twofold. First, we want to confirm that REPA client embeddings indeed reflect the actual data non-IIDness of federated datasets, and second, we want to assess the inference accuracy of clustered FL guided by REPA. ### Experimental setup In our analysis, we focus on two types of non-IIDness that can be observed even in unlabeled data: the concept drift and the label skew described in the Preliminaries section. We investigate three datasets: _a)_ CIFAR10 Krizhevsky and Hinton (2009), for which we simulate the label skew by sampling data in a biased way (i.e. each client has a number of over-represented and a number of under-represented classes), and the concept drift by applying a different image augmentation pipeline on every client (e.g.
on some clients the images are blurred, on others, the images are transformed to black and white, etc.); _b)_ FEMNIST Caldas et al. (2018), which naturally contains the two considered non-IIDness types, and _c)_ MNIST with pathological non-IIDness McMahan et al. (2017), which is obtained by partitioning the MNIST dataset Deng (2012) in such a way that each of the artificially-generated clients contains images belonging to only two out of the ten available classes. Unless otherwise noted, the generated clients are randomly partitioned into two disjoint sets. The first set is the training clients \(C_{t}\), which participate in the training procedure and therefore have both a training and validation set. The second set of clients \(C_{h}\) is composed of the holdout clients, which only have a validation set. For the purpose of clustering, the training clients compute their client embeddings using their training dataset, while holdout clients compute theirs using their validation dataset. Throughout this experimental section, we are going to be comparing the client embeddings as produced by the REPA algorithm with the method proposed by Briggs, Fan, and Andras (2020), i.e. computing client embeddings as \(e_{i}=\theta^{T}-\theta_{i}^{T+1}\), where \(\theta_{i}^{T+1}\) is the model obtained after training the model \(\theta^{T}\) on client \(i\)'s dataset \(D_{i}\) for \(f\) training epochs - for conciseness, we refer to such an algorithm as WD (weight-difference). However, note that the WD method requires on-device training with a labeled dataset; thus, its use for computing client embeddings on holdout clients might not be possible in reality. Regarding the model architecture, as a basis we use a simple encoder-decoder-classification head architecture with convolutional, ReLU, max pooling, and Softmax layers detailed in the Appendix. The encoder-classification head architecture is used for calculating WD's embeddings, while the encoder output is used for the REPA-based clustering. In case of REPA, the encoder is connected to either the classification head only (CLF), the decoder, thus forming an autoencoder (AE), or to both the classification head and the decoder, forming a supervised autoencoder (SAE). Unless stated otherwise, we use SAE. ### Correlation between Client Embeddings and Data-Generation Processes To estimate whether the embeddings reflect the actual non-IIDness, we first need to measure the similarity between two clients, i.e. the similarity of their underlying DGPs. Since we do not observe the DGPs directly, we define the similarity between clients \(i,j\in\mathcal{C}\) as: \[\mathcal{S}(i,j)\stackrel{{\text{def}}}{{=}}\mathcal{S}(\gamma_{i },\gamma_{j})\approx\mathcal{S}(D_{i},D_{j}). \tag{6}\] How exactly \(\mathcal{S}(D_{i},D_{j})\) is calculated depends on the nature of the non-IIDness. For the proposed CIFAR10 setting, the similarity may be computed as follows: * _Label skew_: We first define the distribution vector \(d_{i}\) as a vector in which every dimension \(d_{i,k}\in\mathbb{N}_{0}\) states the number of data points in \(D_{i}\) that belong to class \(k\); then, the similarity between two datasets \(D_{i}\) and \(D_{j}\) can be estimated as the cosine similarity of the corresponding distribution vectors \(d_{i}\) and \(d_{j}\).
That is, \(\mathcal{S}(D_{i},D_{j})=\frac{d_{i}^{T}d_{j}}{\|d_{i}\|_{2}\cdot\|d_{j}\|_{2}}\); * _Concept drift_: in our setting, the similarity of the two datasets \(D_{i}\) and \(D_{j}\) is equal to the similarity of their clients' image augmentation pipelines. Such similarity can be estimated by applying the \(i\)'s and \(j\)'s augmentation pipelines to the same set of images, mapping the resulting images to an embedding space with a pre-trained network, and applying the softmax function to the so-obtained image embeddings in order to obtain valid probability distributions. As a final step, we compute the Jensen-Shannon divergence between every pair of probabilities that correspond to the same image, average them out, and then we convert the obtained JS divergence to a similarity with the formula \(similarity=1-divergence\). We set the number of training clients to \(80\) and observe how the correlation between the similarity of clients \(\mathcal{S}(i,j)\) and the cosine similarity of the client embeddings produced by the two algorithms under observation - REPA and WD - evolve through the training on the CIFAR10 dataset. The results in Figure 1 show that WD is particularly successful at capturing the label skew type of non-IIDness as the correlation is, in some cases, even higher than \(0.6\). This is unsurprising, as the algorithm relies on information stemming from local training, i.e. with local label (distributions) known. For REPA we report the correlation of \(\mathcal{S}\) with selected client embeddings statistics - the mean and the quantiles (we used \(0.25\), \(0.5\), and \(0.75\)) - that yield the best overall correlation. The figure demonstrates that, compared to WD, REPA achieves a lower, but nevertheless, non-negligible correlation with the client similarity. ### Cluster Uniformity The goal of clustered FL is to bundle clients into groups exhibiting relative uniformity of the within-cluster clients' datasets. Thus, we now evaluate the ability of the REPA clustering algorithm to generate uniform clusters. We focus on the MNIST dataset with pathological non-IIDness and define the uniformity of clusters \(\mathbb{C}=\{c_{k}\}_{k=1}^{|\mathbb{C}|}\), where \(c_{k}\subseteq C\) represents a set of clients in cluster \(k\), as the average cosine similarity of the clustered clients' data distributions \(d_{i}\)s, i.e.: \[U(C)=\frac{1}{|\mathbb{C}|}\sum_{c\in\mathbb{C}}\frac{1}{\binom{|c|}{2}}\sum_{ i,j\in c,i\neq j}\frac{d_{i}^{T}d_{j}}{\|d_{i}\|_{2}\|d_{j}\|_{2}}. \tag{7}\] In Figure 2 we report the evolution of uniformity with respect to the number of clusters. We compare the values with the ones we get by randomly assigning the clients to one of the available clusters (Random) and the ones we get by clustering the distribution vectors directly with the K-Means clustering (Distribution given). We observe that REPA clusters remain highly uniform, occasionally reaching the optimal uniformity. We also plot the performance of WD and conclude that training data-based clustering does not bring additional improvements. 
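For illustration, computing such a REPA embedding — the mean concatenated with the \(0.25/0.5/0.75\) quantiles of the embedded data — requires neither labels nor local training; a minimal PyTorch sketch with placeholder names:

```python
# A minimal sketch (assuming PyTorch; encoder/loader are placeholders) of the
# REPA client embedding: push the client's unlabeled inputs through the shared
# encoder phi and concatenate the mean with per-dimension quantiles.
import torch

@torch.no_grad()
def repa_embedding(encoder, unlabeled_loader, quantiles=(0.25, 0.5, 0.75)):
    z = torch.cat([encoder(x) for x in unlabeled_loader])   # (n, E) embeddings
    stats = [z.mean(dim=0)]
    stats += [torch.quantile(z, q, dim=0) for q in quantiles]
    return torch.cat(stats)   # e_i: computed without labels or local training
```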
### Cluster Robustness We argue that cluster robustness as defined in Definition 1 is a key metric of practical importance in clustered FL: **Definition 1** (Cluster robustness): _Cluster robustness is the probability that two clients with similar dataset properties belong to the same cluster._ If we assume that the cluster model \(\theta_{\alpha(i)}\) is, among all the trained cluster models, the model that yields the best performance on dataset \(D_{i}\), and that the performance of the cluster models \(\theta_{k}\) does not deviate significantly over similar datasets, then cluster robustness gives the probability that a previously unseen client is served the cluster model best suited for its local dataset. This probability cannot be measured directly; thus, we estimate the robustness of a single cluster \(r(c)\) empirically with Algorithm 1. The robustness of the whole clustering structure \(\mathbb{C}\) is then defined as \(r(\mathbb{C})=\frac{1}{|\mathbb{C}|}\sum_{c\in\mathbb{C}}r(c)\). Note that in line 4 of the algorithm, we generate a client with a DGP similar to the DGP of an existing client whose data distribution is described with vector \(d_{i}\). This is done by randomly sampling images so that the new client's data is distributed according to \((d_{i}+n)\cdot s\), where \(n\) is a noise vector and \(s\) is a scaling factor allowing the similar client to have a dataset of a different size. After sampling the images that compose the generated client's dataset, the reference client \(r\)'s image augmentation pipeline is applied to the generated dataset. The robustness of the clusters obtained by REPA and WD is reported in Table 1. REPA manages to achieve overall robustness above 90%, irrespective of the non-IIDness type. WD, on the other hand, struggles with the concept drift type of non-IIDness, where its robustness remains barely above the robustness that would have been achieved by randomly assigning clients to one of the \(10\) available clusters. The table also shows that the supervised autoencoder (SAE) we embrace for REPA outperforms alternative architectures. Figure 1: Correlation of dataset similarities and the corresponding client embeddings obtained with WD and REPA with client datasets sampled from CIFAR10. Figure 2: Cluster uniformity when partitioning pathologically non-IID MNIST data. ### Classification Accuracy We now assess the classification accuracy of the clustered FL models constructed by our REPA approach as well as the alternative WD algorithm. As the baseline, we present the inference accuracy results achieved by the standard FedAvg approach. Throughout this section, we indicate with _VAL_ the classification accuracy (CA) of the model on the validation set of the training clients and with _HO_ the CA of the model on the validation set of the holdout clients. We conduct the evaluation on FEMNIST and MNIST with pathological non-IIDness, as these datasets naturally contain data distribution heterogeneity. **FEMNIST dataset:** In Figure 3 we see that REPA and WD outperform the plain FedAvg approach on both VAL and HO, thus confirming the importance of client clustering when it comes to improving inference accuracy in FL over non-IID data. Note, however, that in reality the WD method cannot actually be used for computing the client embeddings of holdout clients, as it requires on-device training and a local labeled dataset, which is against our definition of the holdout client.
We nevertheless report WD results here in order to assess a potential loss of inference accuracy in case a clustering approach such as REPA is used on holdout clients that can actually perform the training. As evident from the figure, the loss is practically none.

\begin{table}
\begin{tabular}{l l|c c|c c|c c} \hline \hline
 & & \multicolumn{2}{c|}{Concept drift, label skew} & \multicolumn{2}{c|}{Label skew} & \multicolumn{2}{c}{Concept drift} \\
 & & Mean & SE & Mean & SE & Mean & SE \\ \hline
\multirow{3}{*}{REPA} & SAE & **0.91** & 0.02 & **0.90** & 0.02 & **0.92** & 0.02 \\
 & CLF & 0.87 & 0.03 & **0.90** & 0.02 & 0.89 & 0.03 \\
 & AE & 0.89 & 0.02 & 0.79 & 0.03 & 0.89 & 0.03 \\ \cline{2-8}
\multirow{2}{*}{WD} & \(f\)=1 & 0.75 & 0.08 & 0.68 & 0.05 & 0.25 & 0.10 \\
 & \(f\)=2 & 0.84 & 0.03 & 0.77 & 0.03 & 0.15 & 0.05 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Robustness of clusters generated by REPA and WD under various non-IIDness types. SAE stands for supervised autoencoder, AE for standard autoencoder, and CLF for classifier-based embeddings in REPA, and \(f\) is the number of fine-tuning epochs performed on the clients in case of WD. The number of clusters is in all cases set to 10.

As discussed in the related work, training loss regularization represents another means of improving FL performance in non-IID settings. Since the regularization remains orthogonal to the actual clustering scheme, we introduce the FedProx regularization term Li et al. (2020) during the individual client model training - the loss \(l(\theta_{i})\) clients optimize over their datasets \(D_{i}\) is updated to: \[l_{reg}(\theta_{i},\theta)=l(\theta_{i})+\frac{\mu}{2}\|\theta_{i}-\theta\| _{2}^{2}, \tag{8}\] where \(\theta\) is the model sent by the server and \(\mu\) the regularization strength. In Figure 3 we see that such a regularization scheme significantly improves the inference accuracy, and hence in the remaining experiments we use FedProx regularization for all methods, unless stated otherwise. Finally, in Figure 4 we examine the importance of setting an appropriate number of clusters. In particular, we see that the WD algorithm achieves its highest accuracy when clustering the clients into \(2\) clusters, while REPA benefits from a higher number of clusters, such as \(20\). This behavior can be in part explained by the results we reported for the robustness, as REPA achieves significantly higher robustness than the WD algorithm. Note, however, that when clustering the clients there is a trade-off, as using a higher number of clusters results in increasingly uniform clusters, yet leads to fewer training clients within a cluster. **MNIST with pathological non-IIDness:** Recall that in Figure 2 we confirmed that both REPA and WD manage to construct uniform clusters of clients. However, the increased uniformity alone does not guarantee higher inference accuracy, especially in highly non-IID settings. To demonstrate this, in Figure 5 we plot the evolution of accuracy when \(100\) clients (we assume no holdout clients) are partitioned into \(10\) and \(30\) clusters using the two algorithms under observation. We see that, irrespective of the clustering algorithm used, the accuracy of clustered FL often does not improve over the accuracy of vanilla FL with FedAvg. This is due to two main reasons. First, because of the peculiarities of the pathological non-IID setting, the average accuracy suffers if not all clients participate in every server training epoch. If
**MNIST with pathological non-IIDness:** Recall that in Figure 2 we confirmed that both REPA and WD manage to construct uniform clusters of clients. However, the increased uniformity alone does not guarantee higher inference accuracy, especially in highly non-IID settings. To demonstrate this, in Figure 5 we plot the evolution of accuracy when \(100\) clients (we assume no holdout clients) are partitioned into \(10\) and \(30\) clusters using the two algorithms under observation. We see that, irrespective of the clustering algorithm used, the accuracy of clustered FL often does not improve over the accuracy of vanilla FL with FedAvg. This is due to two main reasons. First, because of the peculiarities of the pathological non-IID setting, the average accuracy suffers if not all clients participate in every server training epoch. If only a fraction of clients participates (e.g., fraction fit set to 0.4 in the graph), in clusters with more than two classes it is highly probable that in a given training epoch the cluster model does not get exposed to all the potential target labels. Simultaneously, as we measure the accuracy of the cluster model on all the clients in the cluster, the model is evaluated on clients containing data belonging to classes that might not have been present in the training set of the previous epoch. Therefore, in a certain sense, the model "overfits" the training dataset, and hence the quality of the predictions on the clients which possess labels unseen in the previous training epoch decreases. Second, compared to vanilla FL, in highly non-IID settings the negative effects of the decreased amount of data when building clustered models outweigh the benefits of increased homogeneity of clients used for model training, if the number of clusters is not sufficiently high (in this case 30).

## Discussion and Future Work

While the issue of poor model convergence and reduced inference accuracy due to non-IIDness has received substantial attention in the FL research community, the problem of assigning the most appropriate flavor of the joint model to clients without sufficiently large local datasets, or to clients who merely want to use the joint model, has received, to the best of our knowledge, virtually no attention from the community. This is surprising, considering that in practice the majority of clients using FL-trained models do not participate in model training [1]. REPA tackles this issue and includes a novel approach to client clustering that focuses on profiling the underlying data-generating processes. The existing clustering methods almost exclusively rely on clustering local models and their properties, which, we believe, is not sustainable in the long run, as it restricts model personalization to clients that participate in the training. As the popularity of FL grows, the number of clients who merely wish to use the models is likely to grow as well.

In this paper, we focused on devising and demonstrating REPA. For practical applications, however, the clustering method would need to be optimized to the operating conditions. One of the main aspects of such optimization is the number of clusters into which the clients should be split. When determining this number we should aim to balance the homogeneity of individual clusters and the population of each cluster, as both the heterogeneity of the data within a cluster and an insufficient number of clients within a cluster may lead to poor model convergence. In future work, we plan to explore various heuristics for determining the number of clusters within REPA.

Finally, while alleviating the need for labeled data, REPA still requires that FL clients collect a certain amount of unlabeled data for DGP profiling. In certain practical situations, even unlabeled data collection might not be possible, or the previously collected data might not accurately reflect the current DGP. This is especially true in mobile computing, where a client might move to a different context (say, from a noisy outdoor area to a quiet indoor area), which might severely impact the underlying DGP.
Nevertheless, contemporary mobile devices, such as smartphones, often host a range of embedded sensors that can be used to recognize context change. In forthcoming research, we intend to tap into this latent contextual data to infer information about the DGPs. This information, we believe, might enable client clustering even prior to data collection and, thus, further lower the entry bar for using personalized FL models. ## Conclusion In this paper we proposed REPA, a client clustering scheme for federated learning that does not require labeled data, nor on-device training, yet enables clients to obtain model flavors tailored to the data distributions these clients observe. Extensive experimentation across different datasets demonstrates that REPA brings inference accuracy comparable to that of the state-of-the-art FL model personalization efforts over a much wider range of clients, including those who did not participate in model training at all. Consequently, we believe that REPA can facilitate a wider proliferation of federated learning, and can also serve as a basis for future efforts towards dynamic, context-aware FL adaptation.
2309.13081
Transitioning To The Digital Generation Case Studies (Previous Digital Point Studies In Japan Cases:1993-2023)
This paper was presented at The 8th International Workshop on Application of Big Data for Computational Social Science, October 26-29, 2023, Venice, Italy. To achieve the realization of the Global and Innovation Gateway for All (GIGA) initiative (2019), proposed in December 2019 by the Primary and Secondary Education Planning Division of the Elementary and Secondary Education Bureau of the Ministry of Education, Culture, Sports, Science and Technology, a movement has emerged to utilize information and communication technology (ICT) in the field of education. The history of ICT education in Japan dates back to the 100 Schools Project (1994), which aimed to provide network access environments, and the New 100 Schools Project (1997), which marked the beginning of full-scale ICT education in Japan. In this paper, we discuss the usage dynamics of smartphone-based learning applications among young people (analyzing data from January to September 2020) and their current status. Further, the results are summarized and future research topics and issues are discussed. The results show that there are situations in which ICT learning environments can be effectively utilized and others in which they cannot, depending on the differences between digital students and analog students who utilize ICT in their studies; this indicates that we are currently in a transition to a generation of digital natives. ICT education has both advantages and disadvantages, and it is expected that it will be used in combination with conventional educational methods while assessing the characteristics of ICT education in the future. Of course, many challenges remain; we plan to discuss how to address them at the Workshop.
Yasuko Kawahata
2023-09-21T16:23:40Z
http://arxiv.org/abs/2309.13081v1
Transitioning To The Digital Generation Case Studies (Previous Digital Point Studies In Japan Cases:1993-2023)

###### Abstract

This paper is part of the 22nd IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'23; WI = Artificial Intelligence in the Connected World), The 8th International Workshop on Application of Big Data for Computational Social Science, October 26-29, 2023, Venice, Italy. To achieve the realization of the Global and Innovation Gateway for All (GIGA) initiative (2019), proposed in December 2019 by the Primary and Secondary Education Planning Division of the Elementary and Secondary Education Bureau of the Ministry of Education, Culture, Sports, Science and Technology, a movement has emerged to utilize information and communication technology (ICT) in the field of education. The history of ICT education in Japan dates back to the 100 Schools Project (1994), which aimed to provide network access environments, and the New 100 Schools Project (1997), which marked the beginning of full-scale ICT education in Japan. However, Japan continues to face a serious social issue, namely the shortage of digital human resources; simultaneously, the new curriculum guidelines have started to make programming compulsory in elementary and junior high schools in order to enhance digital literacy. Accordingly, the GIGA initiative (2019), a policy to distribute one tablet terminal to every elementary and junior high school student in Japan, and online classes have been rapidly promoted following the outbreak of the COVID-19 pandemic in 2020. In this paper, we discuss the usage dynamics of smartphone-based learning applications among young people (analyzing data from January to September 2020) and their current status. Further, the results are summarized and future research topics and issues are discussed. The results show that there are situations in which ICT learning environments can be effectively utilized and others in which they cannot, depending on the differences between digital students and analog students who utilize ICT in their studies; this indicates that we are currently in a transition to a generation of digital natives. ICT education has both advantages and disadvantages, and it is expected that it will be used in combination with conventional educational methods while assessing the characteristics of ICT education in the future.

GIGA Initiative, ICT Education, Smartphone Applications, Extracurricular Learning, Digital Natives

## I Introduction

In Japan, the term "information and communication technology (ICT) education" refers to the use of ICTs as an educational method. Accordingly, the concept of "everywhere computing" was formulated [1-8]. In response, the Advanced Information Technology Program was formulated in May 1994. This program included various principles as part of the informatization of education. Key among such principles is the realization of education and learning that transcend the limitations of classroom teaching. This goal is made possible by the use of computers and networks. Therefore, the Educational Software Development and Utilization Promotion Project was adopted as one of the Advanced Utilization of Specific Programs projects in the third supplementary budget for FY1993.
One of the first major experiments involving ICT and elementary education in Japan was the 100 Schools Project, which began as part of the activities of the Network Operation Center, aiming to provide network access environments. This project focused on the procurement and installation of computer systems, software, modems, routers, and other equipment in target schools and the Network Operation Centers of the regional networks to which the target schools were connected; additionally, the telecommunication lines to be rented were all contracted by the IPA. From the end of FY1994 to early FY1995, the equipment was delivered and set up at the selected locations. In 1994, when the 100 Schools Project was launched, the above-mentioned academic and non-academic/non-profit regional networks were selected to connect approximately 100 target schools, which were distributed across all prefectures, to the Internet [33-36]. Subsequently, these regional networks were identified as connection points. Currently, ICT education can be broadly classified into the following five categories. As mentioned above, the 100 Schools Project (launched in 1994) was the beginning of full-fledged ICT education in Japan. Subsequently, the New 100 Schools Project was launched in 1997, covering 108 schools; this project emphasized diversity and also solicited research on children in various school settings, joint presentations, and other voluntary projects. Early on, initiatives such as the Educational Rating System Operation Experiment, the Utilization of Fixed Point Observation Data, and the Utilization of Existing Databases were undertaken as advanced projects, while themes such as international exchange, regional activities, and collaborative and integrated learning were also discussed, marking the beginning of intense activity. The movement to utilize ICT in the education field has been expanding toward the realization of the Global and Innovation Gateway for All (GIGA) initiative, proposed in December 2019 by the Primary and Secondary Education Planning Division of the Primary and Secondary Education Bureau of the Ministry of Education, Culture, Sports, Science and Technology (MEXT). However, in Japan, the shortage of digital human resources is becoming a serious social issue, while new curricula are starting to make programming compulsory in elementary and junior high schools in order to improve digital literacy. Moreover, the GIGA initiative, which aims to distribute one tablet terminal to every elementary and junior high school student in Japan, and online classes have been rapidly promoted following the outbreak of the COVID-19 pandemic in 2020 [9-20]. While these circumstances have made it possible to diversify learning and have been a step toward eliminating educational disparities, some scholars have questioned the effectiveness of the program in terms of its vision and impact on learning, leading to the emergence of two sides divided on this issue. As the Japanese educational system is rapidly changing, it is necessary to examine the effects and disadvantages of each approach and address them. Further, there is an urgent need to reduce the burden on teachers, who are at the frontline of the educational process. The purpose of this study is to summarize the past implementation status of ICT education in Japan, aiming to understand its future prospects and issues, as well as the related efforts in the industry. 
It also aims to conduct a questionnaire survey [2, 3] to examine the usage dynamics of smartphone-based educational applications (based on the Log Data Analysis of Smartphone Use (LDASU) project's data set, obtained from Fuller Corporation) as an example of effective ICT utilization in education. The results are summarized as a case study. Based on the discussion of the case study, we summarize future research issues regarding the role of ICTs in education in solving social problems in Japan, such as educational disparities and the lack of digital human resources, which can be complemented by the digital environment. In particular, a major research question in this study concerns the usage and search dynamics of education-related applications listed on the Google Play store (analyzed based on the dataset), which play a part in ICT education. Subsequently, we summarize the current status of the usage dynamics (January-September 2020) of smartphone-based learning applications (Google Play) among young people engaged in learning.

## II The history of ICT education and cases of ICTs' use in private education before COVID-19

### _The 100 Schools Project and the New 100 Schools Project_

ICT education in Japan has developed for several decades, facilitated by the widespread use of personal computers and the subsequent spread of wi-fi and the miniaturization of ICT devices [1-3]. This section summarizes the history of ICT education in Japan. The 100 Schools Project (which aimed to establish network access environments across Japanese schools) and the New 100 Schools Project marked the beginning of full-scale ICT education in Japan. In order to develop and promote computer education in Japan, a plan was announced to introduce the Internet to all junior high schools, high schools, and schools for the disabled by 2001, as well as all elementary schools by 2003. The call for applications was distributed to the 47 prefectural boards of education nationwide in August 1994, under the guidance of the Ministry of Education [33-36]. Applications were divided into the following two groups: Group A consisted of approximately 20 to 30 schools that were particularly advanced and whose teachers had strong technical skills and practical experience; Group B consisted of approximately 70 to 80 schools that were able to actively plan and participate in network use and planning. Consequently, a total of 708 schools applied for Group A and 835 schools applied for Group B. Selection of eligible schools was based on a document-screening process in accordance with the Application Guidelines for the Network Usage Environment Provision Project. At least one eligible school was assigned to each prefecture; eligible schools included elementary schools, junior high schools, high schools, schools for the blind, schools for the deaf, and schools for the disabled, as well as international schools, in-hospital classes for children undergoing long-term treatment in hospitals, and three audio-visual centers where advanced efforts were expected. In June 1995, the connection to the Internet was completed and the servers became operational. The main examples of the utilization of the network include the dissemination of information using the web, presentation of learning results, collection and exchange of information and opinions, joint observation of various events in collaboration with specific partner schools, and international exchange with schools abroad.
With the completion of the 100 Schools Project, the New 100 Schools Project was launched in 1997, covering 108 schools. Initially, a plan was announced to introduce the Internet into all junior high schools, high schools, and schools for the disabled by 2001, as well as all elementary schools by 2003, in order to develop computer education in Japan. The results of the 100 Schools Project and the New 100 Schools Project have encouraged the introduction of the Internet into the educational field. ## III Initiatives in the private sector The first case of ICT education in Japan in general was the "Satellite Seminar," in which Yoyogi Seminar started real-time delivery of lectures nationwide in 1989. After that, the cram school industry pioneered ICT education, as seen in the video classes offered by Tojin High School and Kawajuku Manabis. This was very effective as a means of eliminating one of the educational disparities: the gap in the availability of education due to differences in residential areas. ICT education has various advantages in terms of cost, as it enables students to take classes at any time of the day, and reduces the total number of instructors. This educational method has attracted attention as a new form of learning, especially since many people have succeeded in passing the entrance examinations of the schools of their choice, and many have been accepted into good universities, including those that are considered difficult to enter. Further, opportunities for ICT education's utilization have increased. Correspondence courses are one of the private educational programs offered through online media. Benesse Corporation's Shinkenzemi (Challenge Touch), Z-kai of Z-kai Holdings, Inc. and Smile Zemi of Just System Inc. are particularly famous correspondence courses. Smile Zemi started its service in December 2012, and is characterized by the fact that it offers only one course that is completed entirely on a tablet device, without paper-based materials. Challenge Touch was launched in April 2014 and had more than 1 million users as of 2017. Compared with the Shinkenzemi Elementary School Course, the subscription fee is similar, but students can choose either Challenge Touch, which is taught on tablet devices, or Challenge, which is taught using paper materials. The fees vary depending on the course and grade. However, details on the number of users of Shinkenzemi (Challenge Touch), Zukai, and Smile Zemi are not disclosed. The Classroom of the Future demonstration project being promoted by the Ministry of Economy, Trade and Industry since FY2018 aims to share examples of learning and services utilizing Edtech. In addition to initiatives at schools across Japan, many educational services provided by private companies have been adopted. One such service is See-be, by Sanaru Corporation, a company in the cram school industry, which has been installed in all of Sanaru Prep School's buildings and classrooms. See-be is a tool that enables accurate and easy-to-understand animations and historical materials based on a vast amount of data, as well as simulations of experiments operated by the teacher, to be projected on the whiteboard. Moreover, Kumon's ICT education is being gradually introduced into other fields besides tutoring schools that offer video lessons. Uchida Yoko, which was originally an office-equipment trading company, is now involved in designing learning spaces and developing and selling products related to ICT education, in addition to its existing business. 
In addition, Uchida Yoko holds an annual event called the New Education Expo as a place to share information on the education field.

### _Recent ICT education initiatives for primary education in Japan_

The Five-Year Plan for Environmental Improvement for ICT in Education (FY 2018-2022), formulated in April 2018, compiled policies for improving the ICT environment based on the contents of the new Courses of Study (i.e., curricula) to be revised in FY 2020-2022 for elementary through high schools. Specifically, these new study guidelines for elementary schools include mandatory programming education. The Policy for the Improvement of the ICT Environment in Schools from FY 2018 Onward set the following seven goals for the improvement of the ICT environment in schools, requiring that 180.5 billion yen be allocated per fiscal year from FY 2018 to FY 2022. (1) Computers for learners: provide one computer for every three classes. (2) Computers for instructors: one for each teacher in charge of a class. (3) Large-screen presentation devices and actual projectors: 100%; these will be installed in elementary schools and special-needs schools based on the actual maintenance status. (4) 100%; additionally, various servers and security software are also covered. According to the 2018 PISA ICT utilization survey, Japan ranked last among Organisation for Economic Co-operation and Development member countries, measured by the time spent using digital devices in class. The Five-Year Environmental Improvement Plan for ICT in Education (FY 2018-2022) is the first step in a policy to begin focusing on ICT education within this context, where ICT education is not widespread.

### _Revision of the law on digital textbooks for learners_

There are two types of digital textbooks: "instructor textbooks," which are used by teachers in class, and "learner textbooks," which are used by students on their own computers, tablets, etc., in the same way as paper textbooks. While digital textbooks for instructors have been introduced and are being used in the classroom, the introduction of digital textbooks for learners has lagged behind. This is due to the fact that digital textbooks for learners were not recognized as textbooks, while copyright fees, which are typically waived for paper textbooks, had not been waived for digital textbooks. Therefore, on June 1, 2018, the MEXT promulgated the Law Partially Amending the School Education Law, which allows learner digital textbooks containing the same content as paper textbooks to be used in place of and in conjunction with paper textbooks, starting on April 1, 2019. Thus, since digital textbooks for learners are now treated as textbooks under this law, copyright fees for digital textbooks for learners are now handled in the same way as for paper textbooks, in accordance with Article 33 of the Copyright Law. However, restrictions remained on the use of digital textbooks due to the lack of computers and tablet terminals for use by students and the fact that digital textbooks could only be used for less than one-half of the number of class hours for each subject under Article 34, Paragraph 2 of the School Education Law. Additionally, while paper textbooks were provided free of charge to students at government expense, digital textbooks were not, and the cost of 200 to 2,000 yen per subject was borne by the board of education. Therefore, the introduction of digital textbooks has not progressed adequately.
According to a survey by the MEXT, digital textbooks for elementary school students were available in 20% of schools. The Law Partially Amending the School Education Law, which went into effect in FY 2019, restricted digital textbooks' usage time out of consideration for children's health, such as eye fatigue; however, this was criticized by the expert committee members as having no basis, especially considering the spread of COVID-19 and the GIGA initiative. On December 22, 2020, a proposal to eliminate the standard limiting the usage time for digital textbooks to "less than one-half the number of class hours for each subject" was approved, and digital textbooks for learners became available for use in place of paper textbooks, starting on April 1, 2021. Regarding health concerns, the MEXT pointed out that "regardless of whether it is paper or digital, prolonged and continuous close gazing should be avoided from the perspective of vision deterioration," and requested that when using a terminal in class, students (1) take their eyes off the screen for about 20 seconds every 30 minutes and rest, and (2) maintain a good posture and keep a distance of 30 to 50 cm between their eyes and the terminal, among other precautions. The MEXT is currently working on the GSE and considering the full-scale introduction of digital textbooks for learners to be used with the terminals developed under the GIGA initiative in FY2024, when the next revision of elementary school textbooks is scheduled. The government plans to raise the rate to 100%.

### _The GIGA initiative and private sector initiatives in the COVID-19 vortex_

The GIGA initiative, launched by the MEXT in December 2019, is a policy to promote ICT education in elementary and junior high schools. According to the MEXT website, "By integrating one terminal per student and a high-speed, large-capacity communication network, we will realize an educational environment where diverse children, including those with special needs, receive optimized education fairly and individually without leaving anyone behind, and where their qualities and abilities can be further cultivated without fail. The best mix of Japan's existing educational practices and state-of-the-art technology will be used to maximize the abilities of teachers and students." The program mainly includes subsidies for the cost of tablet terminals to be distributed to each student, as well as subsidies for the cost of developing a wi-fi environment. In the case of private elementary and junior high schools, the subsidy is half the cost of the device, with an upper limit of 45,000 yen per device. Originally, the GIGA initiative aimed to develop this hardware environment over a five-year period, starting in FY2019; however, the need for online classes increased due to school closures during the COVID-19 pandemic, which led to its accelerated implementation. According to the Nikkei Shimbun, in distributing one tablet terminal per student, only one terminal per 6.1 students in elementary schools and one per 5.2 students in junior high schools had been deployed by 2019, before the GIGA initiative went into full swing. In July 2020, a total of 74 municipalities in the 23 wards of Tokyo, prefectural cities, and government ordinance cities were examined.
When asked about the status of securing terminals for public elementary and junior high schools, only Shibuya Ward, in Tokyo, had already deployed them; two municipalities, Nara City and Toshima Ward, in Tokyo, were able to complete deployment by September; and nine municipalities, including Sakai City, indicated they would be able to complete deployment from October to December. The remaining 62 municipalities (83%) had not yet completed deployment at the time of the survey.

At the New Education Expo 2020, which took place in October 2020, a wide range of events were held, including a hands-on experience with Future Classroom, a product of Uchida Yoko, along with a seminar on the pros and cons of digital textbooks for learners, as well as a seminar on the creation of universities that continue to be chosen by the public. The results of ICT utilization in private education, which has been promoted ahead of public education, may have had some influence on the efforts and policies to realize ICT utilization in public education. Furthermore, as an example of original efforts in public education, some schools had already begun to use electronic blackboards and tablet terminals for education before the GIGA initiative was launched. At Dai-Ichi Gakuin High School, a nationwide correspondence high school, all students had tablets and were using them by 2015. Further, N High School, a correspondence high school that has been attracting attention in recent years for its rapidly growing enrollment rates, has been providing education as an "online high school," with all students owning tablet devices since its opening in 2016. In addition to using the chat tool Slack for online homerooms and communicating class schedules to homeroom teachers, VR equipment is provided free of charge to students in the regular course at N High School, allowing them to use VR learning software to view original class materials (as of March 2022, at the time of writing) [37-40].

### _The national and international contexts and other initiatives_

Despite these ongoing efforts, Japan's usage time for digital devices in the classroom ranked last among Organisation for Economic Co-operation and Development member countries in the 2018 PISA survey on the use of ICTs. Countries ranked higher include Sweden, Denmark, Australia, and New Zealand. Recently, it has been said that Japan's ICT use is also inferior to that of South Korea and Singapore. For example, in Denmark, some students have been allowed to bring tablet devices into examinations to foster the ability to efficiently and accurately gather information on the Internet, while the government of Queensland, Australia, has created and released digital textbooks that can be used free of charge. Science, technology, engineering, and mathematics (STEM) education began in the U.S. in the 2000s and was made famous by former US President Barack Obama. The school, a pioneer in problem-based learning, focuses especially on programming and non-cognitive skills development, and actively adopts e-learning. With no tuition, no textbooks, and no grade book, students can develop the ability to think for themselves and realize their own goals while working in teams on projects. Additionally, the potential of ICT for utilization in special-needs education has been recognized for some time, as shown by the selection of schools for the blind, the deaf, and schools for the disabled in the aforementioned 100 Schools Project since 1994. The concept of universal design (UD) has permeated the design of electronic devices.
For example, the iPhone's accessibility features include voice reading, magnification, black and white inversion, grayscale, UD fonts, and contrast enhancement, which can be customized according to the user's needs. In 2016, the UD Digital Textbook Style was released, featuring a simple shape with a constant thickness, developed with low-vision and visually-sensitive children in mind. For example, in the case of a junior high school student with dyslexia, the read-aloud function of a digital textbook is used. The printouts that have not been digitized can also be read out loud by using an application that can digitize them. For writing, the students look up the dictionary application for writing practice, write while looking at the shapes of the characters, and use an application that judges the neatness of the written characters. In the case of an elementary school student with high-functioning autism and LD, tablet devices are used for notetaking via keyboard input, sharing worksheets created by the teacher using PowerPoint, and remote management of learning status. The Act on the Elimination of Discrimination against Persons with Disabilities, enacted in 2016, has made the concept of reasonable accommodation more prevalent, and an increasing number of students are being allowed to take regular examinations and take examinations by computer or by voice substitution. However, there are also challenges in terms of the conflict owed to individual differences and the higher level required. Particularly, children with developmental disabilities tend to be more dependent on the Internet and video games, and there are physical challenges such as crimes and charges through social networking services (SNSs), back and eye problems, and sleep issues. LITALICO, a private company that provides rehabilitation and education for children with developmental disabilities, is following up on cases that are beyond the reach of the government, while the LITALICO Research Institute is working to improve the company's services to guarantee information, and other mechanisms to achieve accessibility and reasonable accommodation [41-43]. ## IV Prospects of this analysis This paper examines the diffusion of ICT education in Japan and how it can be effectively used to solve social problems. In order to investigate policy and private education initiatives, as well as to clarify familiar personal use of ICT education, we analyzed the use of educational apps among junior and senior high school students. The analysis of LDASU data showed that the frequency of educational app use tended to increase among second- and third-year high school students during the college entrance exam period and during school commuting hours. Additionally, the analysis of the usage dynamics of other apps among users who frequently used educational apps showed that SNSs and news apps were used frequently, whereas game apps were not. This indicates that we are currently witnessing a transition to a digital native generation, and that there are situations in which ICTs can (or cannot) be effectively utilized in the learning of elementary, junior high, and high school students, with some students making use of ICT and others being analogists. It is assumed that ICT education will be used in combination with traditional educational methods, while carefully considering the characteristics of ICT education. 
While this paper found that students themselves tend to actively use learning applications in educational settings, it also highlighted the need to utilize ICTs actively and safely while using both ICT education and conventional educational methods as necessary. ICT education is expected to be a new form of education; however, at this stage, the main question is whether or not it can be used safely by children. It is clear that independent learning, which is emphasized in the new Courses of Study, can be realized only when flexible use of ICTs is allowed, and only then can the benefit of ICT education's ability to provide individualized education be fully utilized. However, as in the case of NETS in the US, even if rules for use are established at each school, they may not function without legally binding force. It can be said that the system and legislation for the safe use of ICTs in education, including mobile (e.g., smartphone) and security aspects, must be rigorous. In terms of future prospects, it is desirable to verify what kind of results will be achieved when the current elementary and junior high school students, for whom programming has become compulsory, enter the workforce, and to use this as a reference for revising policies and curriculum guidelines in order to improve logical thinking skills and solve the shortage of digital human resources, which are expected to become focal points of discussion when ICT education becomes widespread. As a point of focus for future analysis, it was also confirmed that middle and high school users who frequently used educational applications also frequently used communication applications such as SNSs (text- or image-centered platforms such as Twitter and Instagram) and blogs (Ameba). In terms of learning effectiveness in the digital environment, it can be assumed that the student generation also views the opinions of faculty and staff in the discourse space of SNSs, which can be easily accessed from their smartphones. In light of this, it will be necessary to analyze data from 2020 onward, especially within the context of the COVID-19 pandemic and the GIGA initiative, to see how elementary, middle, and high school teachers are increasingly intervening in the discourse space of SNSs.

## V Acknowledgment

In conducting this research, we would like to express our sincere gratitude to all members of the Kawahata Seminar, Department of Sociology, Rikkyo University, for their discussion and research participation in this project from 2020 onward. We would like to express our sincere gratitude to T.M., a graduate of the Kawahata Seminar in the class of 2021, who worked with the author on the research, survey, and writing activities that form the basis of this paper. We would also like to express our sincere gratitude to Professor Shinichiro Wada and Professor Tadamasa Kimura of the Department of Media and Society, Faculty of Sociology, for providing the data set, as well as to UserLocal, Inc. and everyone who willingly cooperated in the interview and questionnaire surveys. We would also like to thank Fuller, Inc. for the LDASU2020 dataset and for their active research input and sharing of challenges during this study. We would also like to thank the JSPS Grants-in-Aid for Scientific Research for the fiscal years 2018-2021, Grant-in-Aid for Scientific Research Project Research Subject/Area No.
19K04881, "Construction of a New Theory of Opinion Dynamics that Can Describe the Real Image of Society by Introducing Trust and Distrust". Finally, we would like to express our sincere gratitude to Ishii Laboratory, Graduate School of Technology, Tottori University (until FY2022), who was our joint research team in the above Grant-in-Aid research project.
2309.03292
Scalable Learning of Intrusion Responses through Recursive Decomposition
We study automated intrusion response for an IT infrastructure and formulate the interaction between an attacker and a defender as a partially observed stochastic game. To solve the game we follow an approach where attack and defense strategies co-evolve through reinforcement learning and self-play toward an equilibrium. Solutions proposed in previous work prove the feasibility of this approach for small infrastructures but do not scale to realistic scenarios due to the exponential growth in computational complexity with the infrastructure size. We address this problem by introducing a method that recursively decomposes the game into subgames which can be solved in parallel. Applying optimal stopping theory we show that the best response strategies in these subgames exhibit threshold structures, which allows us to compute them efficiently. To solve the decomposed game we introduce an algorithm called Decompositional Fictitious Self-Play (DFSP), which learns Nash equilibria through stochastic approximation. We evaluate the learned strategies in an emulation environment where real intrusions and response actions can be executed. The results show that the learned strategies approximate an equilibrium and that DFSP significantly outperforms a state-of-the-art algorithm for a realistic infrastructure configuration.
Kim Hammar, Rolf Stadler
2023-09-06T18:12:07Z
http://arxiv.org/abs/2309.03292v2
# Scalable Learning of Intrusion Responses through Recursive Decomposition

###### Abstract

We study automated intrusion response for an IT infrastructure and formulate the interaction between an attacker and a defender as a partially observed stochastic game. To solve the game we follow an approach where attack and defense strategies co-evolve through reinforcement learning and self-play toward an equilibrium. Solutions proposed in previous work prove the feasibility of this approach for small infrastructures but do not scale to realistic scenarios due to the exponential growth in computational complexity with the infrastructure size. We address this problem by introducing a method that recursively decomposes the game into subgames with low computational complexity, which can be solved in parallel. Applying optimal stopping theory, we show that the best response strategies in these subgames exhibit threshold structures, which allows us to compute them efficiently. To solve the decomposed game we introduce an algorithm called Decompositional Fictitious Self-Play (DFSP), which learns Nash equilibria through stochastic approximation. We evaluate the learned strategies in an emulation environment where real intrusions and response actions can be executed. The results show that the learned strategies approximate an equilibrium and that DFSP significantly outperforms a state-of-the-art algorithm for a realistic infrastructure configuration.

Cybersecurity, network security, intrusion response, reinforcement learning, game theory, game decomposition, Markov decision process, optimal control, digital twin, MDP.

## I Introduction

A promising direction of recent research is to automatically find security strategies for an IT infrastructure through reinforcement learning methods, whereby the problem is formulated as a Markov decision problem and strategies are learned through simulation (see survey [1]). While encouraging results have been obtained following this approach (see e.g., [2] and [3]), key challenges remain. Most of the prior work, for example, follows a decision-theoretic formulation and aims at learning effective defender strategies against a static attacker with a fixed strategy [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. Only recently has the problem of learning effective security strategies against dynamic attackers been studied. This approach includes a game-theoretic framing, and the problem becomes one of learning Nash equilibria [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37].

Chief among the remaining challenges is the complexity of the formal model, resulting from the need to describe the target infrastructure with sufficient detail and at a realistic scale. Learning effective strategies with currently known methods is infeasible for most realistic use cases. In this paper, we address the complexity challenge and present a scalable approach to automatically learn near-optimal defender strategies against dynamic attackers. We apply our approach to an _intrusion response_ use case that involves the IT infrastructure of an organization (see Fig. 1). We formalize the use case as a partially observed stochastic game between two players - the operator of the infrastructure, which we call the defender, and an attacker, which seeks to intrude on the infrastructure. To manage the complexity when formalizing the use case, we recursively decompose the game into simpler subgames, which allows detailed modeling of the infrastructure while keeping computational complexity low.
The decomposition involves three steps. First, we partition the infrastructure according to workflows that are isolated from each other. This allows us to decompose the game into _independent subgames_ (one per workflow) that can be solved in parallel. Second, the graph structure of a workflow allows us to decompose the workflow games into node subgames. We prove that these subgames have _optimal substructure_ [38, Ch. 15], which means that a best response of the original game can be obtained from best responses of the node subgames. Third, we show that the problem of selecting _which_ response action to apply on a node can be separated from that of deciding _when_ to apply the action, which enables efficient learning of best responses through the application of _optimal stopping_ theory [39]. We use this insight to design an efficient reinforcement learning algorithm, called Decompositional Fictitious Self-Play (DFSP), which allows scalable approximation of Nash equilibrium strategies.

Fig. 1: The target infrastructure and the actors involved in the intrusion response use case.

Our method for learning the equilibrium strategies and evaluating them is based on a _digital twin_ of the target infrastructure, which we use to run attack scenarios and defender responses (see Fig. 2) [3, 6, 31]. Such runs produce system measurements and logs, from which we estimate infrastructure statistics. We then use these statistics to instantiate simulations of the infrastructure's dynamics and learn strategies through DFSP. (Documentation of the software framework that implements the digital twin and the simulations is available at [6, 20]; the source code is available at [40]; and a video demonstration is available at [41].)

We summarize the contributions in this paper as follows.

1. We formulate the intrusion response problem as a partially observed stochastic game and prove that, under assumptions often met in practice, the game decomposes into subgames whose best responses can be computed efficiently and in parallel.
2. We design DFSP, an efficient reinforcement learning algorithm for approximating Nash equilibria of the decomposed game.
3. For a realistic use case, we evaluate the learned response strategies against network intrusions on a digital twin.

## II Related Work

Networked systems found in engineering and science often exhibit a modular topological structure that can be exploited for designing control algorithms [42]. System decomposition for the purpose of automatic control was first suggested by Siljak in 1978 [43], and approaches based on decomposition, such as divide and conquer, layering, and hierarchical structuring, are well established in the design of large-scale systems, a notable example being the Internet [44]. Similar decomposition methods are frequently used in robotics and multi-agent systems, as exemplified by the subsumption architecture [45]. Within the fields of decision- and game-theory, decomposition is studied in the context of factored decision processes [46, 47, 48, 49], distributed decision processes [50], factored games [51, 52], and graph-structured games [53].

Decomposition as a means to automate intrusion response was first studied in [54, 51, 55, 56]. The work in [51] formulates the interaction between a defender and an attacker on a cyber-physical infrastructure as a factored Markov game and introduces a decomposition based on linear programming.
Following a similar approach, the work in [55] studies a Markov game formulation and shows that a multi-stage game can be decomposed into a sequence of one-stage games. In a separate line of work, [54] models intrusion response as a minimax control problem and develops a heuristic decomposition based on clustering and influence graphs. This approach resembles the work in [56], which studies a factored decision process and proposes a hierarchical decomposition. In all of the above works, decomposition is key to obtaining effective strategies for large-scale systems. Compared to our work, some of them propose decomposition methods without optimal substructure [54], others do not consider partial observability [51, 55] or dynamic attackers [56]. Most importantly, all of the above works evaluate the obtained strategies in a simulation environment. They do not perform evaluation in an emulation environment as we report in this paper, which gives higher confidence that the strategies are effective on the target infrastructure. For a comprehensive review of prior research on automated intrusion response (beyond work that uses decomposition), see [31, §VII].

## III The Intrusion Response Use Case

We consider an intrusion response use case that involves the IT infrastructure of an organization. The operator of this infrastructure, which we call the defender, takes measures to protect it against an attacker while providing services to a client population (see Fig. 1). The infrastructure is segmented into _zones_ with virtual _nodes_ that run network services. Services are realized by _workflows_ that are accessed by clients through a gateway, which also is open to the attacker. The attacker's goal is to intrude on the infrastructure, compromise nodes, and disrupt workflows. It can take three types of actions to achieve this goal: (_i_) reconnaissance; (_ii_) brute-force attacks; and (_iii_) exploits (see Fig. 3). The defender continuously monitors the infrastructure through accessing and analyzing intrusion detection alerts and other statistics. It can take four types of defensive actions to respond to possible intrusions: (_i_) migrate nodes between zones; (_ii_) redirect or block network flows; (_iii_) shut down nodes; and (_iv_) revoke access to nodes (see Fig. 4). When deciding between these actions, the defender must balance two conflicting objectives: maximize workflow utility towards its clients and minimize the cost of intrusion.

## IV Formalizing the Intrusion Response Problem

We formalize the above use case as an optimization problem where the goal is to select an optimal sequence of defender actions in response to a sequence of attacker actions. We assume a dynamic attacker, which leads to a game-theoretic formulation of the intrusion response problem. The game is played on the IT infrastructure, which we model as a discrete-time dynamical system whose evolution depends on the actions by the attacker and the defender. Both actors have partial observability of the system state, and their observations depend on the traffic generated by clients requesting service, which we assume can be described by a stationary process.

Fig. 2: Our framework for finding and evaluating intrusion response strategies [3, 6, 31].

**Notations.** Boldface lower case letters (e.g., \(\mathbf{x}\)) denote row vectors and upper case calligraphic letters (e.g., \(\mathcal{V}\)) represent sets. The set of probability distributions over \(\mathcal{V}\) is written as \(\Delta(\mathcal{V})\).
A random variable is denoted by upper case (e.g., \(X\)) and a random vector is denoted by boldface (e.g., \(\mathbf{X}=(X_{1},\ldots,X_{n})\)). \(\mathbb{P}\) is the probability measure and the expectation of \(f\) with respect to \(X\) is expressed as \(\mathbb{E}_{X}[f]\). When \(f\) includes many random variables that depend on \(\pi\) we simplify the notation to \(\mathbb{E}_{\pi}[f]\). We use \(x\sim f\) to mean that \(x\) is sampled from \(f\) and sometimes write \(\mathbb{P}[x|z,y]\) instead of \(\mathbb{P}[X=x|Z=z,Y=y]\) when \(X,Z,Y\) are clear from the context. Symbols used throughout the paper are listed in Table 1. ### _Modeling the Infrastructure and Services_ Following the description in SSIII, we consider an IT infrastructure with application servers connected by a communication network that is segmented into zones (see Fig. 1). Overlaid on this physical infrastructure is a virtual infrastructure with tree-structure that includes _nodes_, which collectively offer services to clients. A service is modeled as a _workflow_, which comprises a set of interdependent nodes. A dependency between two nodes reflects information exchange through service invocations. We assume that each node belongs to exactly one workflow. As an example of a virtual infrastructure, we can think of a microservice architecture where a workflow is defined as a tree of microservices (see Fig. 5). **Infrastructure.** We model the virtual infrastructure as a (finite) directed graph \(\mathcal{G}\triangleq\langle\{\mathrm{gw}\}\cup\mathcal{V},\mathcal{E}\rangle\). The graph has a tree structure and is rooted at the gateway \(\mathrm{gw}\). Each node \(i\in\mathcal{V}\) has three state variables. \(v_{i,t}^{(\mathrm{R})}\) represents the reconnaissance state and realizes the binary random variable \(V_{i,t}^{(\mathrm{R})}\). \(v_{i,t}^{(\mathrm{R})}=1\) if the attacker has discovered the node, \(0\) otherwise. \(v_{i,t}^{(\mathrm{I})}\) represents the intrusion state and realizes the binary random variable \(V_{i,t}^{(\mathrm{I})}\). \(v_{i,t}^{(\mathrm{I})}=1\) if the attacker has compromised the node, \(0\) otherwise. Lastly, \(v_{i,t}^{(\mathrm{Z})}\) indicates the zone in which the node resides and realizes the random variable \(V_{i,t}^{(\mathrm{Z})}\). We call a node _active_ if it is functional as part of a workflow (denoted \(\alpha_{i,t}=1\)). Due to a defender action (e.g., a shut down) a node \(i\in\mathcal{V}\) may become inactive (i.e., \(\alpha_{i,t}=0\)). **Workflows.** We model a workflow \(\mathbf{w}\in\mathcal{W}\) as a subtree \(\mathcal{G}_{\mathbf{w}}\triangleq\langle\{\mathrm{gw}\}\cup\mathcal{V}_{ \mathbf{w}},\mathcal{E}_{\mathbf{w}}\rangle\) of the infrastructure graph. Workflows do not overlap except for the gateway which belongs to all workflows. ### _Modeling Actors_ The intrusion response use case involves three types of actors: an attacker, a defender, and clients (see Fig. 1). **Attacker.** At each time \(t\), the attacker takes an action \(\mathbf{a}_{t}^{(\mathrm{A})}\), which is defined as the composition of the local actions on all nodes \(\mathbf{a}_{t}^{(\mathrm{A})}\triangleq(\mathbf{a}_{1,t}^{(\mathrm{A})}, \ldots,\mathbf{a}_{|\mathcal{V}|,t}^{(\mathrm{A})})\in\mathcal{A}_{\mathrm{A}}\), where \(\mathcal{A}_{\mathrm{A}}\) is finite. A local action is either the null action (denoted with \(\bot\)) or an offensive action (see examples in Fig. 3). 
An offensive action on a node \(i\) may change the reconnaissance state \(v_{i,t}^{(\mathrm{R})}\) or the intrusion state \(v_{i,t}^{(\mathrm{I})}\). A node \(i\) can only be compromised if it is discovered, i.e., if \(v_{i,t}^{(\mathrm{R})}=1\). We express this constraint as \(\mathbf{a}_{t}^{(\mathrm{A})}\in\mathcal{A}_{\mathrm{A}}(\mathbf{s}_{t}^{(\mathrm{A})})\). The attacker state \(\mathbf{S}_{t}^{(\mathrm{A})}\triangleq\langle V_{i,t}^{(\mathrm{I})},V_{i,t}^{(\mathrm{R})}\rangle_{i\in\mathcal{V}}\in\mathcal{S}_{\mathrm{A}}\) evolves as

\[\mathbf{s}_{t+1}^{(\mathrm{A})}\sim f_{\mathrm{A}}\big(\cdot\mid\mathbf{s}_{t}^{(\mathrm{A})},\mathbf{a}_{t}^{(\mathrm{A})},\mathbf{a}_{t}^{(\mathrm{D})}\big) \tag{1}\]

where \(\mathbf{S}_{t}^{(\mathrm{A})}\), \(\mathbf{A}_{t}^{(\mathrm{A})}\), and \(\mathbf{A}_{t}^{(\mathrm{D})}\) are random vectors with realizations \(\mathbf{s}_{t}^{(\mathrm{A})}\), \(\mathbf{a}_{t}^{(\mathrm{A})}\), and \(\mathbf{a}_{t}^{(\mathrm{D})}\). (\(\mathbf{A}_{t}^{(\mathrm{D})}\) represents the defender action at time \(t\).)

**Defender.** At each time \(t\), the defender takes an action \(\mathbf{a}_{t}^{(\mathrm{D})}\), which is defined as the composition of the local actions on all nodes \(\mathbf{a}_{t}^{(\mathrm{D})}\triangleq(\mathbf{a}_{1,t}^{(\mathrm{D})},\ldots,\mathbf{a}_{|\mathcal{V}|,t}^{(\mathrm{D})})\in\mathcal{A}_{\mathrm{D}}\), where \(\mathcal{A}_{\mathrm{D}}\) is finite. A local action is either a defensive action or the null action \(\bot\) (see examples in Fig. 4). Each defensive action \(\mathbf{a}_{i,t}^{(\mathrm{D})}\neq\bot\) leads to \(\mathbf{S}_{i,t+1}^{(\mathrm{A})}=(0,0)\) and may affect \(V_{i,t+1}^{(\mathrm{Z})}\). The defender state \(\mathbf{S}_{t}^{(\mathrm{D})}\triangleq\big(V_{i,t}^{(\mathrm{Z})}\big)_{i\in\mathcal{V}}\in\mathcal{S}_{\mathrm{D}}\) evolves according to

\[\mathbf{s}_{t+1}^{(\mathrm{D})}\sim f_{\mathrm{D}}\big(\cdot\mid\mathbf{s}_{t}^{(\mathrm{D})},\mathbf{a}_{t}^{(\mathrm{D})}\big) \tag{2}\]

where \(\mathbf{s}_{t}^{(\mathrm{D})}\) and \(\mathbf{a}_{t}^{(\mathrm{D})}\) realize the random vectors \(\mathbf{S}_{t}^{(\mathrm{D})}\) and \(\mathbf{A}_{t}^{(\mathrm{D})}\).

Fig. 3: Attacker actions: (_i_) reconnaissance actions; (_ii_) brute-force attacks; and (_iii_) code execution attacks.

Fig. 4: Defender actions: (_i_) migrate a node between two zones; (_ii_) redirect or block traffic flows to a node; (_iii_) shut down a node; and (_iv_) revoke access to a node.

**Clients.** Clients consume services of the infrastructure by accessing workflows. We model client behavior through stationary stochastic processes, which affect the observations available to the attacker and the defender.

### _Observability and Strategies_

At each time \(t\), the defender and the attacker both observe \(\mathbf{o}_{t}\triangleq\big(\mathbf{o}_{1,t},\ldots,\mathbf{o}_{|\mathcal{V}|,t}\big)\in\mathcal{O}\), where \(\mathcal{O}\) is finite. (In our use case \(\mathbf{o}_{t}\) relates to the number of IDPS alerts per node.) \(\mathbf{o}_{t}\) is drawn from the random vector \(\mathbf{O}_{t}\triangleq(\mathbf{O}_{1,t},\ldots,\mathbf{O}_{|\mathcal{V}|,t})\) whose marginal distributions \(Z_{\mathbf{O}_{1}},\ldots,Z_{\mathbf{O}_{|\mathcal{V}|}}\) are stationary and conditionally independent given \(\mathbf{S}_{i,t}\triangleq(\mathbf{S}_{i,t}^{(\mathrm{D})},\mathbf{S}_{i,t}^{(\mathrm{A})})\). (Note that \(Z_{\mathbf{O}_{i}}\) depends on the traffic generated by clients.)
As a consequence, the joint conditional distribution \(Z\) is given by

\[Z\big(\mathbf{O}_{t}=\mathbf{o}\mid\mathbf{s}_{t}\big)=\prod_{i=1}^{|\mathcal{V}|}Z_{\mathbf{O}_{i}}\big(\mathbf{O}_{i,t}=\mathbf{o}_{i}\mid\mathbf{s}_{i,t}\big)\quad\forall\mathbf{o}\in\mathcal{O} \tag{3}\]

The sequence of observations and states at times \(1,\ldots,t\) forms the histories \(\mathbf{h}_{t}^{(\mathrm{D})}\in\mathcal{H}_{\mathrm{D}}\) and \(\mathbf{h}_{t}^{(\mathrm{A})}\in\mathcal{H}_{\mathrm{A}}\). These histories are realizations of the random vectors \(\mathbf{H}_{t}^{(\mathrm{D})}\triangleq(\mathbf{S}_{1}^{(\mathrm{D})},\mathbf{A}_{1}^{(\mathrm{D})},\mathbf{O}_{1},\ldots,\mathbf{A}_{t-1}^{(\mathrm{D})},\mathbf{S}_{t}^{(\mathrm{D})},\mathbf{O}_{t})\) and \(\mathbf{H}_{t}^{(\mathrm{A})}\triangleq(\mathbf{S}_{1}^{(\mathrm{A})},\mathbf{A}_{1}^{(\mathrm{A})},\mathbf{O}_{1},\ldots,\mathbf{A}_{t-1}^{(\mathrm{A})},\mathbf{S}_{t}^{(\mathrm{A})},\mathbf{O}_{t})\). Based on their respective histories, the defender and the attacker select actions, which define the defender strategy \(\pi_{\mathrm{D}}\in\Pi_{\mathrm{D}}:\mathcal{H}_{\mathrm{D}}\rightarrow\Delta(\mathcal{A}_{\mathrm{D}})\) and the attacker strategy \(\pi_{\mathrm{A}}\in\Pi_{\mathrm{A}}:\mathcal{H}_{\mathrm{A}}\rightarrow\Delta(\mathcal{A}_{\mathrm{A}})\).

### _The Intrusion Response Problem_

When selecting the strategy \(\pi_{\mathrm{D}}\) the defender must balance two conflicting objectives: maximize the workflow utility towards its clients and minimize the cost of intrusion. The weight \(\eta\geq 0\) controls the trade-off between these two objectives, which results in the bi-objective

\[J\triangleq\sum_{t=1}^{\infty}\gamma^{t-1}\Bigg(\sum_{\mathsf{w}\in\mathcal{W}}\sum_{i\in\mathcal{V}_{\mathsf{w}}}\underbrace{\eta u_{i,t}^{(\mathrm{W})}}_{\text{workflow utility}}-\underbrace{c_{i,t}^{(\mathrm{I})}}_{\text{intrusion cost}}\Bigg) \tag{4}\]

where \(\gamma\in[0,1)\) is a discount factor, \(c_{i,t}^{(\mathrm{I})}\) is the intrusion cost associated with node \(i\) at time \(t\), and \(u_{i,t}^{(\mathrm{W})}\) expresses the workflow utility associated with node \(i\) at time \(t\). For this paper, we assume that \(u_{i,t}^{(\mathrm{W})}\) is proportional to the number of active nodes in the subtree rooted at \(i\) and that \(c_{i,t}^{(\mathrm{I})}=v_{i,t}^{(\mathrm{I})}+c^{(\mathrm{A})}(\mathbf{a}_{i,t}^{(\mathrm{D})})\), where \(c^{(\mathrm{A})}\) is a non-negative function.
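To make the bi-objective (4) concrete, the following minimal Python sketch accumulates the discounted utility over a finite horizon. The per-step trace format (`stage[(w, i)] = (u, c)`) is a hypothetical data layout chosen for illustration, not something prescribed by the model.

```python
def discounted_objective(episode, eta, gamma):
    """Finite-horizon approximation of J (Eq. 4): the discounted sum over
    time of eta * workflow utility minus intrusion cost, summed over all
    nodes i of all workflows w."""
    total = 0.0
    for t, stage in enumerate(episode):  # stage maps (w, i) -> (u_W, c_I)
        total += gamma ** t * sum(eta * u - c for (u, c) in stage.values())
    return total
```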
Given (4) and an attacker strategy \(\pi_{\mathrm{A}}\), the intrusion response problem can be stated as

\[\mathop{\mathrm{maximize}}_{\pi_{\mathrm{D}}\in\Pi_{\mathrm{D}}}\quad\mathbb{E}_{(\pi_{\mathrm{D}},\pi_{\mathrm{A}})}\left[J\right] \tag{5a}\]
\[\text{subject to}\quad\mathbf{s}_{t+1}^{(\mathrm{D})}\sim f_{\mathrm{D}}\big(\cdot\mid\mathbf{s}_{t}^{(\mathrm{D})},\mathbf{a}_{t}^{(\mathrm{D})}\big)\qquad\forall t \tag{5b}\]
\[\mathbf{s}_{t+1}^{(\mathrm{A})}\sim f_{\mathrm{A}}\big(\cdot\mid\mathbf{s}_{t}^{(\mathrm{A})},\mathbf{a}_{t}^{(\mathrm{A})},\mathbf{a}_{t}^{(\mathrm{D})}\big)\qquad\forall t \tag{5c}\]
\[\mathbf{o}_{t+1}\sim Z\big(\cdot\mid\mathbf{s}_{t+1}^{(\mathrm{D})},\mathbf{s}_{t+1}^{(\mathrm{A})}\big)\qquad\forall t \tag{5d}\]
\[\mathbf{a}_{t}^{(\mathrm{A})}\sim\pi_{\mathrm{A}}\big(\cdot\mid\mathbf{h}_{t}^{(\mathrm{A})}\big)\qquad\forall t \tag{5e}\]
\[\mathbf{a}_{t}^{(\mathrm{D})}\sim\pi_{\mathrm{D}}\big(\cdot\mid\mathbf{h}_{t}^{(\mathrm{D})}\big)\qquad\forall t \tag{5f}\]
\[\mathbf{s}_{t}^{(\mathrm{D})}\in\mathcal{S}_{\mathrm{D}},\quad\mathbf{s}_{t}^{(\mathrm{A})}\in\mathcal{S}_{\mathrm{A}},\quad\mathbf{o}_{t}\in\mathcal{O}\qquad\forall t \tag{5g}\]
\[\mathbf{a}_{t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}},\quad\mathbf{a}_{t}^{(\mathrm{A})}\in\mathcal{A}_{\mathrm{A}}(\mathbf{s}_{t}^{(\mathrm{A})})\qquad\forall t \tag{5h}\]
\[\mathbf{s}_{1}^{(\mathrm{A})}\sim\mathbf{b}_{1}^{(\mathrm{A})} \tag{5i}\]
\[\mathbf{s}_{1}^{(\mathrm{D})}\sim\mathbf{b}_{1}^{(\mathrm{D})} \tag{5j}\]

where \(\mathbb{E}_{(\pi_{\mathrm{D}},\pi_{\mathrm{A}})}\) denotes the expectation over the random vectors \((\mathbf{H}_{t}^{(\mathrm{D})},\mathbf{H}_{t}^{(\mathrm{A})})_{t\in\{1,2,\ldots\}}\) when following the strategy profile \((\pi_{\mathrm{D}},\pi_{\mathrm{A}})\); (5b)-(5c) are the dynamics constraints; (5d) describes the observations; (5e)-(5f) capture the actions; (5g)-(5h) are the domain constraints; and (5i)-(5j) define the initial state distributions. (As a maximizer of (5) exists (see Thm. 1), we write \(\max\) instead of \(\sup\) throughout this paper.)

Solving (5) yields an optimal defender strategy against a _static_ attacker with a fixed strategy. Note that this defender strategy is generally not optimal against a different attacker strategy. For this reason, we aim to find a defender strategy that maximizes the minimum value of \(J\) (4) across all possible attacker strategies. This objective can be formally expressed as a maxmin problem:

\[\mathop{\mathrm{maximize}}_{\pi_{\mathrm{D}}\in\Pi_{\mathrm{D}}}\ \mathop{\mathrm{minimize}}_{\pi_{\mathrm{A}}\in\Pi_{\mathrm{A}}}\ \mathbb{E}_{(\pi_{\mathrm{D}},\pi_{\mathrm{A}})}\left[J\right]\quad\text{subject to (5b)-(5j)} \tag{6}\]

At each time \(t\), the two players select actions according to their respective strategies, i.e., \(\mathbf{a}_{t}^{(\mathrm{D})}\sim\pi_{\mathrm{D}}(\cdot\mid\mathbf{h}_{t}^{(\mathrm{D})})\) and \(\mathbf{a}_{t}^{(\mathrm{A})}\sim\pi_{\mathrm{A}}(\cdot\mid\mathbf{h}_{t}^{(\mathrm{A})})\). As a result of these actions, five events occur at time \(t+1\): (\(\mathsf{i}\)) \(\mathbf{o}_{t+1}\) is sampled from \(Z\); (\(\mathsf{ii}\)) \(\mathbf{s}_{t+1}^{(\mathrm{D})}\) is sampled from \(f_{\mathrm{D}}\); (\(\mathsf{iii}\)) \(\mathbf{s}_{t+1}^{(\mathrm{A})}\) is sampled from \(f_{\mathrm{A}}\); (\(\mathsf{iv}\)) the defender receives the utility \(u(\mathbf{s}_{t},\mathbf{a}_{t}^{(\mathrm{D})})\); and (\(\mathsf{v}\)) the attacker receives the utility \(-u(\mathbf{s}_{t},\mathbf{a}_{t}^{(\mathrm{D})})\).
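The stage dynamics above can be sketched compactly; the following minimal Python example mirrors (5b)-(5f) for finite state, action, and observation sets. The function interfaces (strategies and kernels returning probability vectors) are our assumptions for illustration, not part of the game definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(dist):
    """Draw an index from a finite distribution given as a probability vector."""
    return rng.choice(len(dist), p=dist)

def game_step(h_D, h_A, s_D, s_A, pi_D, pi_A, f_D, f_A, Z):
    """One stage of the game: both players act, then the sampled events occur."""
    a_D = sample(pi_D(h_D))                 # (5f)
    a_A = sample(pi_A(h_A))                 # (5e)
    s_D_next = sample(f_D(s_D, a_D))        # (5b)
    s_A_next = sample(f_A(s_A, a_A, a_D))   # (5c)
    o_next = sample(Z(s_D_next, s_A_next))  # (5d)
    return s_D_next, s_A_next, o_next, a_D, a_A
```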
**Belief states.** Based on their histories \(\mathbf{h}_{t}^{(\mathrm{D})}\) and \(\mathbf{h}_{t}^{(\mathrm{A})}\), both players form beliefs about the unobservable components of the state \(\mathbf{s}_{t}\), which are expressed through the belief states \(\mathbf{b}_{t}^{(\mathrm{D})}(\mathbf{s}_{t}^{(\mathrm{A})})\triangleq\mathbb{P}[\mathbf{s}_{t}^{(\mathrm{A})}\mid\mathbf{H}_{t}^{(\mathrm{D})}=\mathbf{h}_{t}^{(\mathrm{D})}]\) and \(\mathbf{b}_{t}^{(\mathrm{A})}(\mathbf{s}_{t}^{(\mathrm{D})})\triangleq\mathbb{P}[\mathbf{s}_{t}^{(\mathrm{D})}\mid\mathbf{H}_{t}^{(\mathrm{A})}=\mathbf{h}_{t}^{(\mathrm{A})}]\). The belief states are realizations of \(\mathbf{B}_{t}^{(\mathrm{D})}\) and \(\mathbf{B}_{t}^{(\mathrm{A})}\) and are updated at each time \(t>1\) via [58, Eq. 1]

\[\mathbf{b}_{t}^{(\mathrm{k})}(\mathbf{s}_{t}^{(-\mathrm{k})})=C_{\mathrm{k}}\sum_{\mathbf{s}_{t-1}^{(-\mathrm{k})}\in\mathcal{S}_{-\mathrm{k}}}\ \sum_{\mathbf{a}_{t-1}^{(-\mathrm{k})}\in\mathcal{A}_{-\mathrm{k}}(\mathbf{s}_{t-1})}\mathbf{b}_{t-1}^{(\mathrm{k})}(\mathbf{s}_{t-1}^{(-\mathrm{k})})\,\pi_{-\mathrm{k}}^{(\mathrm{k})}(\mathbf{a}_{t-1}^{(-\mathrm{k})}\mid\mathbf{s}_{t-1}^{(-\mathrm{k})})\,Z(\mathbf{o}_{t}\mid\mathbf{s}_{t}^{(\mathrm{D})},\mathbf{a}_{t-1}^{(\mathrm{A})})\,f_{-\mathrm{k}}(\mathbf{s}_{t}^{(-\mathrm{k})}\mid\mathbf{s}_{t-1}^{(-\mathrm{k})},\mathbf{a}_{t-1})\]

where \(\mathrm{k}\in\{\mathrm{D},\mathrm{A}\}\), \(C_{\mathrm{k}}=1/\mathbb{P}[\mathbf{o}_{t}\mid\mathbf{s}_{t}^{(\mathrm{k})},\mathbf{a}_{t-1}^{(\mathrm{k})},\pi_{-\mathrm{k}},\mathbf{b}_{t-1}^{(\mathrm{k})}]\) is a normalizing factor that makes the components of \(\mathbf{b}_{t}^{(\mathrm{k})}\) sum to \(1\), and \(\pi_{-\mathrm{k}}^{(\mathrm{k})}:\mathcal{S}_{-\mathrm{k}}\rightarrow\Delta(\mathcal{A}_{-\mathrm{k}})\) is the stage strategy for the opponent, i.e., the strategy of the opponent in the current stage only, which is assumed known to both players at each stage [58]. The initial beliefs at \(t=1\) are the degenerate distributions \(\mathbf{b}_{1}^{(\mathrm{D})}(\mathbf{0}_{2|\mathcal{V}|})=1\) and \(\mathbf{b}_{1}^{(\mathrm{A})}(\mathbf{s}_{1}^{(\mathrm{D})})=1\), where \(\mathbf{0}_{n}\) is the \(n\)-dimensional zero-vector and \(\mathbf{s}_{1}^{(\mathrm{D})}\) is given by the infrastructure configuration (see SSIV).

**Best response strategies.** A defender strategy \(\tilde{\pi}_{\mathrm{D}}\in\Pi_{\mathrm{D}}\) is called a _best response_ against \(\pi_{\mathrm{A}}\in\Pi_{\mathrm{A}}\) if it _maximizes_ \(J\) (4). Similarly, an attacker strategy \(\tilde{\pi}_{\mathrm{A}}\) is called a best response against \(\pi_{\mathrm{D}}\) if it _minimizes_ \(J\) (4). Hence, the best response correspondences are

\[B_{\mathrm{D}}(\pi_{\mathrm{A}})\triangleq\operatorname*{arg\,max}_{\pi_{\mathrm{D}}\in\Pi_{\mathrm{D}}}\mathbb{E}_{(\pi_{\mathrm{D}},\pi_{\mathrm{A}})}[J] \tag{9}\]
\[B_{\mathrm{A}}(\pi_{\mathrm{D}})\triangleq\operatorname*{arg\,min}_{\pi_{\mathrm{A}}\in\Pi_{\mathrm{A}}}\mathbb{E}_{(\pi_{\mathrm{D}},\pi_{\mathrm{A}})}[J] \tag{10}\]

**Optimal strategies.** An optimal defender strategy \(\pi_{\mathrm{D}}^{*}\) is a best response strategy against any attacker strategy that _minimizes_ \(J\). Similarly, an optimal attacker strategy \(\pi_{\mathrm{A}}^{*}\) is a best response against any defender strategy that _maximizes_ \(J\).
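As a concrete illustration of the belief recursion above, the following minimal sketch implements the filter for finite sets. The array encodings (integer states and actions, a fixed stage strategy) are ours and purely illustrative.

```python
import numpy as np

def belief_update(b_prev, o, pi_stage, f_opp, Z_obs):
    """Bayes filter for player k's belief over the opponent's hidden state.

    b_prev[s]        -- previous belief b_{t-1}^(k)
    pi_stage[s, a]   -- assumed opponent stage strategy
    f_opp[s, a, s2]  -- opponent state transition kernel f_{-k}
    Z_obs[s2, o]     -- observation likelihood given the new hidden state
    """
    n_s, n_a = pi_stage.shape
    b_new = np.zeros(n_s)
    for s2 in range(n_s):
        acc = 0.0
        for s in range(n_s):
            for a in range(n_a):
                acc += b_prev[s] * pi_stage[s, a] * f_opp[s, a, s2]
        b_new[s2] = Z_obs[s2, o] * acc
    return b_new / b_new.sum()   # the factor C_k normalizes the posterior
```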
Hence, when both players follow optimal strategies, they play best response strategies against each other:

\[(\pi_{\mathrm{D}}^{*},\pi_{\mathrm{A}}^{*})\in B_{\mathrm{D}}(\pi_{\mathrm{A}}^{*})\times B_{\mathrm{A}}(\pi_{\mathrm{D}}^{*}) \tag{11}\]

Since no player has an incentive to change its strategy, \((\pi_{\mathrm{D}}^{*},\pi_{\mathrm{A}}^{*})\) is a Nash equilibrium [57, Eq. 1]. We know from game theory that \(\Gamma\) has a mixed Nash equilibrium [58, 60, 61] and we know from Markov decision theory that \(B_{\mathrm{D}}(\pi_{\mathrm{A}})\) and \(B_{\mathrm{A}}(\pi_{\mathrm{D}})\) are non-empty [39, 62]. Based on these standard results, we state the following theorem.

**Theorem 1**.:

1. _A game_ \(\Gamma\) _with the instantiation described in SSIV has a mixed Nash equilibrium._
2. _The best response correspondences (9)-(10) in_ \(\Gamma\) _with the instantiation described in SSIV satisfy_ \(|B_{\mathrm{D}}(\pi_{\mathrm{A}})|>0\) _and_ \(|B_{\mathrm{A}}(\pi_{\mathrm{D}})|>0\) \(\forall(\pi_{\mathrm{A}},\pi_{\mathrm{D}})\in\Pi_{\mathrm{A}}\times\Pi_{\mathrm{D}}\)_._

Proof.: The statement in (A) follows from the following sufficient conditions: (\(i\)) \(\Gamma\) is stationary, finite, and zero-sum; (\(ii\)) \(\Gamma\) has public observations; and (\(iii\)) \(\gamma\in[0,1)\). Due to these conditions, the existence proofs in [60, 83], [61, Thm. 2.3], and [58, Thm. 1] apply, which show that \(\Gamma\) can be modeled as a finite strategic game, for which Nash's theorem applies [57, Thm. 1]. In the interest of space we do not restate the proof.

To prove (B), we note that obtaining a pair of best response strategies \((\tilde{\pi}_{\mathrm{D}},\tilde{\pi}_{\mathrm{A}})\in B_{\mathrm{D}}(\pi_{\mathrm{A}})\times B_{\mathrm{A}}(\pi_{\mathrm{D}})\) for a given strategy pair \((\pi_{\mathrm{A}},\pi_{\mathrm{D}})\in\Pi_{\mathrm{A}}\times\Pi_{\mathrm{D}}\) amounts to solving two finite and stationary POMDPs (Partially Observed Markov Decision Processes) with discounted utilities. It then follows from Markov decision theory that a pair of pure best response strategies \((\tilde{\pi}_{\mathrm{D}},\tilde{\pi}_{\mathrm{A}})\) exists [62, Thm. 6.2.7], [39, Thms. 7.6.1-7.6.2]. For the sake of brevity we do not restate the proof, which is based on Banach's fixed-point theorem [63, Thm. 6, p. 160].

\begin{table}
\begin{tabular}{l l}
\hline
_Notation(s)_ & _Description_ \\
\hline
\(\mathcal{G},\mathcal{G}_{\mathsf{w}}\) & Infrastructure tree, subtree of workflow \(\mathsf{w}\) \\
\(\mathcal{V},\mathcal{E}\) & Sets of nodes and edges in \(\mathcal{G}\) \\
\(\mathcal{V}_{\mathsf{w}},\mathcal{E}_{\mathsf{w}}\) & Sets of nodes and edges in \(\mathcal{G}_{\mathsf{w}}\) \\
\(\mathcal{Z},\mathcal{W}\) & Sets of network zones and workflows \\
\(\mathcal{A}_{\mathrm{D}},\mathcal{A}_{\mathrm{A}}(\mathbf{s}_{t})\) & Defender and attacker action spaces at time \(t\) \\
\(\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})},\mathcal{A}_{\mathrm{A}}^{(\mathrm{V})}(\mathbf{s}_{t})\) & Action spaces per node at time \(t\), \(\mathcal{A}_{\mathrm{k}}=(\mathcal{A}_{\mathrm{k}}^{(\mathrm{V})})^{|\mathcal{V}|}\) \\
\hline
\end{tabular}
\end{table}
TABLE I: Notation.

## VI Decomposing the Intrusion Response Game

In this section we present the main contribution of the paper. We show how the game \(\Gamma\) (7) with the instantiation described in SSIV can be recursively decomposed into subgames with
We further show that best responses of the subgames can be computed in parallel and that the space complexity of a subgame is independent of the number of nodes \(|\mathcal{V}|\). Note that the space complexity of the original game increases exponentially with \(|\mathcal{V}|\) (see Fig. 6). **Theorem 2** (Decomposition theorem).: 1. _A game_ \(\Gamma\) _(_7_) with the instantiation described in SSIV can be decomposed into independent workflow subgames_ \(\Gamma^{(\mathbf{w}_{1})},\ldots,\Gamma^{(\mathbf{w}_{|\mathcal{W}|\mathcal{W} |})}\)_. Due to their independence, the subgames have optimal substructure._ 2. _Each subgame_ \(\Gamma^{(\mathbf{w})}\) _can be further decomposed into node subgames_ \((\Gamma^{(i)})_{i\in\mathcal{V}_{\mathbf{w}}}\) _with optimal substructure and space complexities independent of_ \(|\mathcal{V}|\)_._ 3. _For each subgame_ \(\Gamma^{(i)}\)_, a best response strategy for the defender can be characterized by switching curves, under the assumption that the observation distributions_ \(Z_{\mathbf{O}_{|\mathbf{s}^{(\Lambda)}}},\ldots,Z_{\mathbf{O}_{|\mathcal{V}| }|\mathbf{s}^{(\Lambda)}}\) _(_3_) are totally positive of order 2 (i.e.,_ \(\mathtt{TP-2}\) _[_39_, Def._ 10.2.1_]__)._ Statements A and B express that \(\Gamma\) decomposes into simpler subgames, which consequently can be solved in parallel (see Fig. 7). This decomposition implies that the largest game that is tractable on a given compute platform scales linearly with the number of processors. Further, statement C says that a best response strategy for the defender in each subgame can be characterized by switching curves, which can be estimated efficiently. In the following sections we provide proofs of Thm. 2.A-C. The requisite notations are given in Table 1. ### _Proof of Theorem 2.A_ Following the instantiation of \(\Gamma\) described in SSIV, the state, observation, and action spaces factorize as \[\mathcal{S}=(\mathcal{Z}\times\{0,1\}^{2})^{|\mathcal{V}|},\mathcal{O}=( \mathcal{O}^{(\mathrm{V})})^{|\mathcal{V}|},\mathcal{A}_{\mathrm{k}}=( \mathcal{A}_{\mathrm{k}}^{(\mathrm{V})})^{|\mathcal{V}|} \tag{12}\] for player \(\mathrm{k}\in\{\mathrm{D},\mathrm{A}\}\), where \(\mathcal{O}^{(\mathrm{V})}\), \(\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}\), and \(\mathcal{A}_{\mathrm{A}}^{(\mathrm{V})}\) denote the local observation and action spaces for each node. Since each node belongs to exactly one workflow, (12) implies that \(\Gamma\) can be decomposed into subgames \(\Gamma^{(\mathbf{w}_{1})},\ldots,\Gamma^{(\mathbf{w}_{|\mathcal{W}|})}\). To show that the subgames are independent, it suffices to show that they are observation-independent, transition-independent, and utility-independent [46, Defs. 32,33,35]. From (3) we have \[Z\big{(}\mathbf{o}_{i,t+1}\mid\mathbf{s}_{t+1}^{(\mathrm{D})},\mathbf{s}_{t+ 1}^{(\mathrm{A})}\big{)}=Z\big{(}\mathbf{o}_{i,t+1}\mid\mathbf{s}_{i,t+1}^{( \mathrm{D})},\mathbf{s}_{i,t+1}^{(\mathrm{A})}\big{)} \tag{13}\] for all \(\mathbf{o}_{i,t+1}\in\mathcal{O}\), \(\mathbf{s}_{t+1}\in\mathcal{S}\), and \(t\geq 1\), which implies observation independence across nodes \(i\in\mathcal{V}\) and therefore across workflows [46, Def. 33]. 
From the definitions in SSIV and (1)-(2) we have

\[f_{\mathrm{D}}(\mathbf{s}_{i,t+1}^{(\mathrm{D})}\mid\mathbf{s}_{t}^{(\mathrm{D})},\mathbf{a}_{t}^{(\mathrm{D})})=f_{\mathrm{D}}(\mathbf{s}_{i,t+1}^{(\mathrm{D})}\mid\mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D})})\]
\[f_{\mathrm{A}}(\mathbf{s}_{i,t+1}^{(\mathrm{A})}\mid\mathbf{s}_{t}^{(\mathrm{A})},\mathbf{a}_{t}^{(\mathrm{A})},\mathbf{a}_{t}^{(\mathrm{D})})=f_{\mathrm{A}}(\mathbf{s}_{i,t+1}^{(\mathrm{A})}\mid\mathbf{s}_{i,t}^{(\mathrm{A})},\mathbf{a}_{i,t}^{(\mathrm{A})},\mathbf{a}_{i,t}^{(\mathrm{D})})\]

for all \(\mathbf{s}_{i,t}\in\mathcal{S}\), \(\mathbf{a}_{i,t}\in\mathcal{A}\), \(i\in\mathcal{V}\), and \(t\geq 1\), which implies transition independence across nodes \(i\in\mathcal{V}\) and therefore across workflows [46, Def. 32]. Following (4) and the definition of \(u_{i,t}^{(\mathrm{W})}\) (see SSIV-D) we can rewrite \(u(\mathbf{s}_{t},\mathbf{a}_{t}^{(\mathrm{D})})\) as

\[u(\mathbf{s}_{t},\mathbf{a}_{t}^{(\mathrm{D})})=\sum_{\mathbf{w}\in\mathcal{W}}\overbrace{\sum_{i\in\mathcal{V}_{\mathbf{w}}}\eta u_{i,t}^{(\mathrm{W})}-c_{i,t}^{(\mathrm{I})}(\mathbf{a}_{i,t}^{(\mathrm{D})},v_{i,t}^{(\mathrm{I})})}^{\triangleq\,u_{\mathbf{w}}}=\sum_{\mathbf{w}\in\mathcal{W}}u_{\mathbf{w}}\Big((\mathbf{s}_{i,t},\mathbf{a}_{i,t}^{(\mathrm{D})})_{i\in\mathcal{V}_{\mathbf{w}}}\Big) \tag{14}\]

The final expression in (14) is a sum of workflow utility functions, each of which depends only on the states and actions of one workflow. Hence, \(\Gamma^{(\mathbf{w}_{1})},\ldots,\Gamma^{(\mathbf{w}_{|\mathcal{W}|})}\) are utility-independent [46, Def. 35].

### _Proof of Theorem 2.B_

Our goal is to show that a workflow subgame \(\Gamma^{(\mathbf{w})}\) decomposes into node-level subgames with optimal substructure. That is, we aim to show that a best response in \(\Gamma^{(\mathbf{w})}\) can be constructed from best responses of the subgames. Following the description in SSIV, we know that the nodes in a workflow are connected in a tree and that the utility generated by a node \(i\) depends on the number of active nodes in the subtree rooted at \(i\). Taking into account this tree structure and the definition of the utility function, we decompose \(\Gamma^{(\mathbf{w})}\) into node subgames \((\Gamma^{(i)})_{i\in\mathcal{V}_{\mathbf{w}}}\) where each subgame depends only on the local state and action of a single node. It follows from (12) that this decomposition is feasible and that the space complexity of a subgame is independent of \(|\mathcal{V}|\). Further, we know from Thm. 2.A that the subgames are transition-independent and observation-independent but utility-dependent. To prove optimal substructure it therefore suffices to show that it is possible to redefine the utility functions for the subgames such that at each time \(t\), the best response action in \(\Gamma^{(\mathbf{w})}\) for any node \(i\) is also a best response in \(\Gamma^{(i)}\) and vice versa. For the sake of brevity we give the proof for the defender only. The proof for the attacker is analogous.
In this proof, for better readability, we omit the constants \(\gamma,\eta\) and use the shorthand notations \(\mathbf{s}_{\mathbf{w},t}^{(\mathrm{D})}\triangleq(\mathbf{s}_{j,t}^{(\mathrm{D})})_{j\in\mathcal{V}_{\mathbf{w}}}\), \(\mathbf{b}_{\mathbf{w},t}^{(\mathrm{D})}\triangleq(\mathbf{b}_{j,t}^{(\mathrm{D})})_{j\in\mathcal{V}_{\mathbf{w}}}\), \(\mathscr{V}\triangleq\mathscr{V}_{\mathrm{D},\pi_{\mathrm{A}}}^{*}\), and \(\tau\triangleq\min\{k>t:\mathbf{a}_{i,k}^{(\mathrm{D})}\neq\bot\}\), where \(\mathscr{V}\) is the value function [39, Thm. 7.4.1]. Further, we use \(\mathrm{an}(i)\) to denote the set of node \(i\) and its ancestors in the infrastructure graph \(\mathcal{G}\). From Bellman's optimality equation [64, Eq. 1], a best response action for node \(i\) at time \(t\) in \(\Gamma^{(\mathbf{w})}\) against an attacker strategy \(\pi_{\mathrm{A}}\) is given by

\[\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[\mathbf{U}_{t}+\mathscr{V}(\mathbf{S}_{t+1}^{(\mathrm{D})},\mathbf{B}_{t+1}^{(\mathrm{D})})\,\Big|\,\mathbf{s}_{t}^{(\mathrm{D})},\mathbf{b}_{t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D})}\bigg]\Bigg]\]
\[\overset{(a)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[-c_{i,t}^{(\mathrm{I})}+\mathscr{V}(\mathbf{S}_{t+1}^{(\mathrm{D})},\mathbf{B}_{t+1}^{(\mathrm{D})})\,\Big|\,\mathbf{s}_{t}^{(\mathrm{D})},\mathbf{b}_{t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D})}\bigg]\Bigg]\]
\[\overset{(b)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[-c_{i,t}^{(\mathrm{I})}+\sum_{k=t+1}^{\infty}\sum_{j\in\mathcal{V}_{\mathbf{w}}}\mathbf{U}_{j,k}\,\Big|\,\underbrace{\mathbf{s}_{\mathbf{w},t}^{(\mathrm{D})},\mathbf{b}_{\mathbf{w},t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D})}}_{\triangleq\,\kappa}\bigg]\Bigg]\]
\[\overset{(c)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[-c_{i,t}^{(\mathrm{I})}+\sum_{k=t+1}^{\tau}\sum_{j\in\mathcal{V}_{\mathbf{w}}}\mathbf{U}_{j,k}\,\Big|\,\kappa\bigg]\Bigg]\]
\[\overset{(d)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[-c_{i,t}^{(\mathrm{I})}+\sum_{k=t+1}^{\tau}\sum_{j\in\mathrm{an}(i)}\mathbf{U}_{j,k}\,\Big|\,\kappa\bigg]\Bigg]\]
\[\overset{(e)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[-c_{i,t}^{(\mathrm{I})}+\sum_{k=t+1}^{\tau}\sum_{j\in\mathrm{an}(i)}u_{j,k}^{(\mathrm{W})}-c_{j,k}^{(\mathrm{I})}\,\Big|\,\kappa\bigg]\Bigg]\]
\[\overset{(f)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[-c_{i,t}^{(\mathrm{I})}+\sum_{k=t+1}^{\tau}|\mathrm{an}(i)|\,\alpha_{i,t+1}-c_{i,k}^{(\mathrm{I})}\,\Big|\,\kappa\bigg]\Bigg]\]
\[\overset{(g)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\Bigg[\mathbb{E}_{\pi_{\mathrm{A}}}\bigg[\omega\ \Big|\ \mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{b}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D})}\bigg]\Bigg] \tag{15}\]

where \(\mathbf{U}_{t}\) denotes the vector of utilities for all nodes at time \(t\). (a) holds because \((\mathbf{U}_{j,t})_{j\in\mathcal{V}\setminus\{i\}}\) and \(u_{i,t}^{(\mathrm{W})}\) are independent of \(\mathbf{a}_{i,t}^{(\mathrm{D})}\) and therefore do not affect the maximization; (b) follows from the utility independence across workflows (Thm. 2.A) and the definition of the value function \(\mathscr{V}\) [39, Thm. 7.4.1];
(c) holds because any \(\mathbf{a}_{i,t}^{(\mathrm{D})}\) except \(\bot\) leads to \(\mathbf{s}_{i,t+1}^{(\mathrm{A})}=(0,0)\), which means that all state variables at time \(k>\tau\) are independent of \(\mathbf{a}_{i,t}^{(\mathrm{D})}\) and can therefore be moved outside the \(\arg\max\) operator; (d) follows because \((\mathbf{U}_{j,t})_{j\in\mathcal{V}\setminus\mathrm{an}(i)}\) is independent of \(\mathbf{a}_{i,t}^{(\mathrm{D})}\); (e) is an expansion of \((\mathbf{U}_{j,k})_{j\in\mathrm{an}(i),k\in\{t+1,\dots,\tau\}}\) based on (4); and (f)-(g) follow because the terms in \((u_{j,k}^{(\mathrm{W})})_{j\in\mathrm{an}(i),k\in\{t+1,\dots,\tau\}}\) that depend on \(\mathbf{a}_{i,t}^{(\mathrm{D})}\) equal \(k|\mathrm{an}(i)|\alpha_{i,t+1}\), where \(k\) is the constant of proportionality (see SSIV). (Recall that \(\alpha_{i,t}=1\) if node \(i\) is active at time \(t\) and \(\alpha_{i,t}=0\) otherwise.)

The final expression in (15) depends only on local information related to node \(i\). This means that we can use it to define utility functions of the subgames \((\Gamma^{(i)})_{i\in\mathcal{V}_{\mathbf{w}}}\) such that they become utility-independent. Further, since the maximizer of the final expression in (15) is also a maximizer of the first expression, it follows that a best response in \(\Gamma^{(i)}\) is also a best response for node \(i\) in \(\Gamma^{(\mathbf{w})}\) and thus in \(\Gamma\) (Thm. 2.A). Hence the subgames \((\Gamma^{(i)})_{i\in\mathcal{V}_{\mathbf{w}}}\) have optimal substructure.

### _Proof of Theorem 2.C_

The idea behind this proof is that the problem of selecting _which_ defensive action to apply in a subgame \(\Gamma^{(i)}\) (Thm. 2.B) against a given attacker strategy can be separated from the problem of deciding _when_ to apply it. Through this separation, we can analyze the latter problem using optimal stopping theory. Applying a recent result by Krishnamurthy [39, Thm. 12.3.4], the optimal stopping strategy in \(\Gamma^{(i)}\) can be characterized by switching curves.

We perform the above separation by decomposing \(\mathbf{a}_{i,t}^{(\mathrm{D})}\) into two subactions: \(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\) and \(\mathbf{a}_{i,t}^{(\mathrm{D},2)}\), which realize \(\mathbf{A}_{i,t}^{(\mathrm{D},1)}\) and \(\mathbf{A}_{i,t}^{(\mathrm{D},2)}\). The first subaction \(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\neq\bot\) determines the defensive action and the second subaction \(\mathbf{a}_{i,t}^{(\mathrm{D},2)}\in\{\mathrm{S},\mathrm{C}\}\) determines when to take it. Specifically, if \(\mathbf{a}_{i,t}^{(\mathrm{D},2)}=\mathrm{C}\), then \(\mathbf{a}_{i,t}^{(\mathrm{D})}=\bot\), otherwise \(\mathbf{a}_{i,t}^{(\mathrm{D})}=\mathbf{a}_{i,t}^{(\mathrm{D},1)}\).
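In code, the composition rule reads as follows; a one-line sketch in which the string encodings are placeholders and `None` encodes the null action \(\bot\).

```python
def compose_action(a1, a2):
    """Compose the two subactions: a1 is a defensive action (never the null
    action); a2 in {'S', 'C'} decides whether to take it now ('S', stop) or
    wait ('C', continue)."""
    return a1 if a2 == 'S' else None


# e.g. compose_action('migrate', 'C') -> None ; compose_action('migrate', 'S') -> 'migrate'
```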
Using this action decomposition, at each time \(t\), a strategy \(\pi_{\mathrm{D}}^{(i)}\) in \(\Gamma^{(i)}\) is a joint distribution over \(\mathbf{A}_{i,t}^{(\mathrm{D},1)}\) and \(\mathbf{A}_{i,t}^{(\mathrm{D},2)}\), which means that it can be represented in an auto-regressive manner as

\[\pi_{\mathrm{D}}^{(i)}(\mathbf{a}_{i,t}^{(\mathrm{D},1)},\mathbf{a}_{i,t}^{(\mathrm{D},2)}\mid\mathbf{h}_{i,t}^{(\mathrm{D})})\overset{(a)}{=}\pi_{\mathrm{D}}^{(i)}(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\mid\mathbf{h}_{i,t}^{(\mathrm{D})})\,\pi_{\mathrm{D}}^{(i)}(\mathbf{a}_{i,t}^{(\mathrm{D},2)}\mid\mathbf{h}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D},1)})\]
\[\overset{(b)}{=}\pi_{\mathrm{D}}^{(i)}(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\mid\mathbf{b}_{i,t}^{(\mathrm{D})},\mathbf{s}_{i,t}^{(\mathrm{D})})\,\pi_{\mathrm{D}}^{(i)}(\mathbf{a}_{i,t}^{(\mathrm{D},2)}\mid\mathbf{b}_{i,t}^{(\mathrm{D})},\mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D},1)})\]
\[\overset{(c)}{=}\pi_{\mathrm{D}}^{(i,1)}(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\mid\mathbf{s}_{i,t}^{(\mathrm{D})})\,\pi_{\mathrm{D}}^{(i,2)}(\mathbf{a}_{i,t}^{(\mathrm{D},2)}\mid\mathbf{b}_{i,t}^{(\mathrm{D})},\mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D},1)}) \tag{16}\]

where (a) follows from the chain rule of probability; (b) holds because \((\mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{b}_{i,t}^{(\mathrm{D})})\) is a sufficient statistic for \(\mathbf{H}_{i,t}^{(\mathrm{D})}\) [39, Thm. 7.2.1]; and (c) follows because

\[\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D},1)}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}\setminus\bot}\Bigg[\eta|\mathrm{an}(i)|\alpha_{i,t+1}-c_{i,t}^{(\mathrm{I})}(v_{i,t}^{(\mathrm{I})},\mathbf{a}_{i,t}^{(\mathrm{D},1)})+\gamma\mathbb{E}\bigg[\mathscr{V}(\mathbf{S}_{i,t+1}^{(\mathrm{D})},\mathbf{B}_{i,t+1}^{(\mathrm{D})})\,\Big|\,\mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{b}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D},1)}\bigg]\Bigg] \tag{17}\]
\[\overset{(a)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D},1)}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}\setminus\bot}\Bigg[\eta|\mathrm{an}(i)|\alpha_{i,t+1}-c^{(\mathrm{A})}(\mathbf{a}_{i,t}^{(\mathrm{D},1)})+\gamma\mathbb{E}\bigg[\mathscr{V}(\mathbf{S}_{i,t+1}^{(\mathrm{D})},\mathbf{B}_{i,t+1}^{(\mathrm{D})})\,\Big|\,\mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{b}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D},1)}\bigg]\Bigg]\]
\[\overset{(b)}{=}\operatorname*{arg\,max}_{\mathbf{a}_{i,t}^{(\mathrm{D},1)}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}\setminus\bot}\Bigg[\eta|\mathrm{an}(i)|\alpha_{i,t+1}-c^{(\mathrm{A})}(\mathbf{a}_{i,t}^{(\mathrm{D},1)})+\gamma\mathbb{E}\bigg[\mathscr{V}(\mathbf{S}_{i,t+1}^{(\mathrm{D})},\mathbf{e}_{1})\,\Big|\,\mathbf{s}_{i,t}^{(\mathrm{D})},\mathbf{a}_{i,t}^{(\mathrm{D},1)}\bigg]\Bigg]\]

which means that the choice of \(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\neq\bot\) is independent of \(\mathbf{B}_{i,t}^{(\mathrm{D})}\). The first statement in (17) is the Bellman equation [64, Eq. 1]; (a) holds because \(V_{i,t}^{(\mathrm{I})}\) is independent of \(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\); and (b) is true because any \(\mathbf{a}_{i,t}^{(\mathrm{D},1)}\neq\bot\) leads to \(\mathbf{S}_{i,t+1}^{(\mathrm{A})}=(0,0)\) and thus to \(\mathbf{B}_{i,t+1}^{(\mathrm{D})}=\mathbf{e}_{1}=(1,0,0)\). (Recall that the belief space \(\mathcal{B}_{\mathrm{D}}^{(i)}\) is the two-dimensional unit simplex.)

The strategy decomposition in (16) means that we can obtain a best response strategy in \(\Gamma^{(i)}\) by jointly optimizing two substrategies: \(\pi_{\mathrm{D}}^{(i,1)}\) and \(\pi_{\mathrm{D}}^{(i,2)}\).
The former corresponds to solving an MDP \(\mathscr{M}^{(\mathrm{D},1)}\) with state space \(\mathbf{s}_{i}^{(\mathrm{D})}\in\mathcal{Z}\) and the latter corresponds to solving a set of optimal stopping POMDPs \(\big(\mathscr{M}^{(\mathrm{D},2)}_{i,\mathbf{s}^{(\mathrm{D})},\mathbf{a}^{(\mathrm{D})}}\big)_{\mathbf{s}^{(\mathrm{D})}\in\mathcal{Z},\,\mathbf{a}^{(\mathrm{D})}\in\mathcal{A}_{\mathrm{D}}^{(\mathrm{V})}}\) with state space \(\mathbf{s}_{i}^{(\mathrm{A})}\in\{(0,0),(1,0),(1,1)\}=\{0,1,2\}\). Each stopping problem can be defined with a _single_ stop action rather than multiple stop actions [3, SSIII.C] because

\[\operatorname*{arg\,max}_{\pi_{\mathrm{D}}\in\Pi_{\mathrm{D}}^{(i,2)}}\Bigg[\mathbb{E}_{\pi_{\mathrm{D}}}\bigg[\sum_{t=1}^{\infty}\gamma^{t-1}\mathbf{U}_{i,2,t}\,\Big|\,\mathbf{B}_{i,1}^{(\mathrm{D})}=\mathbf{e}_{1}\bigg]\Bigg]\]
\[=\operatorname*{arg\,max}_{\pi_{\mathrm{D}}\in\Pi_{\mathrm{D}}^{(i,2)}}\Bigg[\mathbb{E}_{\pi_{\mathrm{D}}}\bigg[\sum_{t=1}^{\tau_{1}}\gamma^{t-1}\mathbf{U}_{i,2,t}\,\Big|\,\mathbf{B}_{i,1}^{(\mathrm{D})}=\mathbf{e}_{1}\bigg]+\mathbb{E}_{\pi_{\mathrm{D}}}\bigg[\sum_{t=\tau_{1}+1}^{\infty}\gamma^{t-1}\mathbf{U}_{i,2,t}\,\Big|\,\mathbf{B}_{i,\tau_{1}+1}^{(\mathrm{D})}=\mathbf{e}_{1}\bigg]\Bigg]\]
\[=\operatorname*{arg\,max}_{\pi_{\mathrm{D}}\in\Pi_{\mathrm{D}}^{(i,2)}}\Bigg[\mathbb{E}_{\pi_{\mathrm{D}}}\bigg[\sum_{t=1}^{\tau_{1}}\gamma^{t-1}\mathbf{U}_{i,2,t}\,\Big|\,\mathbf{B}_{i,1}^{(\mathrm{D})}=\mathbf{e}_{1}\bigg]\Bigg] \tag{18}\]

where \(\Pi_{\mathrm{D}}^{(i,2)}\), \(\mathbf{U}_{i,2,t}\), and \(\tau_{1},\tau_{2},\ldots\) denote the strategy space, utility, and stopping times in \(\mathscr{M}^{(\mathrm{D},2)}_{i,\mathbf{s}^{(\mathrm{D})},\mathbf{a}^{(\mathrm{D})}}\); the final equality holds because the term after the first stopping time is the value of an identical stopping problem restarted from \(\mathbf{e}_{1}\) and is therefore independent of the decisions up to time \(\tau_{1}\). Note that the belief space \(\mathcal{B}_{\mathrm{D}}^{(i)}\) for each stopping problem is the \(2\)-dimensional unit simplex and that \(\mathbf{B}_{i,\tau_{j}+1}^{(\mathrm{D})}=\mathbf{e}_{1}=(1,0,0)\) for each stopping time \(\tau_{j}\) since \(\mathbf{a}_{i,\tau_{j}}^{(\mathrm{D},2)}=\mathrm{S}\implies\mathbf{s}_{i,\tau_{j}+1}^{(\mathrm{A})}=(0,0)\).

The transition matrices for each stopping problem are of the form:

\[\begin{bmatrix}1-p&p&0\\ 0&1-q&q\\ 0&0&1\end{bmatrix}\quad\text{and}\quad\begin{bmatrix}1&0&0\\ 1&0&0\\ 1&0&0\end{bmatrix} \tag{19}\]

where \(p\) is the probability that the attacker performs reconnaissance and \(q\) is the probability that the attacker compromises the node. The left matrix in (19) relates to \(\mathbf{a}_{i,t}^{(\mathrm{D},2)}=\mathrm{C}\) and the right matrix relates to \(\mathbf{a}_{i,t}^{(\mathrm{D},2)}=\mathrm{S}\). The non-zero second-order minors of the matrices are \((1-p)(1-q)\), \(pq\), \(1-q\), \(1-p\), \(p\), and \((1-p)q\), which implies that the matrices are TP-2 [39, Def. 10.2.1]. Further, it follows from (4) that the utility function of the stopping problems satisfies

\[\mathbf{u}_{i,2}(0,a)\geq\mathbf{u}_{i,2}(1,a)\geq\mathbf{u}_{i,2}(2,a)\]

for \(a\in\{\mathrm{S},\mathrm{C}\}\) and that

\[\mathbf{u}_{i,2}(s,\mathrm{S})-\mathbf{u}_{i,2}(s,\mathrm{C})\geq\mathbf{u}_{i,2}(2,\mathrm{S})-\mathbf{u}_{i,2}(2,\mathrm{C})\]
\[\mathbf{u}_{i,2}(0,\mathrm{S})-\mathbf{u}_{i,2}(0,\mathrm{C})\geq\mathbf{u}_{i,2}(s,\mathrm{S})-\mathbf{u}_{i,2}(s,\mathrm{C})\]

for all \(s\in\mathcal{S}^{(i),2}\). Since the distributions \(Z_{\mathbf{O}_{1}\mid\mathbf{s}^{(\mathrm{A})}},\ldots,Z_{\mathbf{O}_{|\mathcal{V}|}\mid\mathbf{s}^{(\mathrm{A})}}\) are TP-2 by assumption, it then follows from
[39, Thm. 12.3.4] that there exists a switching curve \(\Upsilon\) that partitions \(\mathcal{B}_{\mathrm{D}}^{(i)}\) into two individually connected regions: a stopping set \(\mathscr{S}_{\mathrm{D}}^{(i)}\) where \(\mathbf{a}_{i,t}^{(\mathrm{D},2)}=\mathrm{S}\) is a best response and a continuation set \(\mathscr{C}_{\mathrm{D}}^{(i)}\) where \(\mathbf{a}_{i,t}^{(\mathrm{D},2)}=\mathrm{C}\) is a best response (see Fig. 6(c)). The argument behind the existence of a switching curve is as follows [39, Thm. 12.3.4]. On any line segment \(\mathcal{L}(\mathbf{e}_{1},\widetilde{\mathbf{b}}^{(\mathrm{D})})\) in \(\mathcal{B}_{\mathrm{D}}^{(i)}\) that starts at \(\mathbf{e}_{1}\) and ends at the subsimplex joining \(\mathbf{e}_{2}\) and \(\mathbf{e}_{3}\), the belief states are totally ordered with respect to the monotone likelihood ratio. Hence, on each such segment there is a threshold belief at which the best response switches from \(\mathrm{C}\) to \(\mathrm{S}\), and the switching curve \(\Upsilon\) is obtained by connecting the thresholds of all such segments.

To compute best responses, our Decompositional Fictitious Self-Play (DFSP) algorithm first solves the node subgames as constructed in the proof of Thm. 2.B (lines 14-18), and then it combines them using the method described in SSVI-B (lines 19-25). Finding best responses for node subgames amounts to solving POMDPs. The principal method for solving POMDPs is dynamic programming [39]. Dynamic programming is however intractable in our case, as demonstrated in Fig. 10b. To find the best responses we instead resort to approximation algorithms. More specifically, we use the Proximal Policy Optimization (PPO) algorithm [68, Alg. 1] to find best responses for the attacker, and we use a combination of dynamic programming and stochastic approximation to find best responses for the defender. In particular, to find best responses for the defender, we first solve the MDP defined in SSVI-C via the value iteration algorithm [39, Eq. 6.21], which can be done efficiently due to full observability. After solving the MDP, we approximate the optimal switching curves defined in the proof of Thm. 2.C (SSVI-C) with the following linear approximation [39, Eq. 12.18]:

\[\pi_{\mathrm{D}}(\mathbf{b}^{(\mathrm{D})})=\begin{cases}\mathrm{S}&\text{if }\begin{bmatrix}0&1&\boldsymbol{\theta}\end{bmatrix}\begin{bmatrix}(\mathbf{b}^{(\mathrm{D})})^{T}\\ -1\end{bmatrix}>0\\ \mathrm{C}&\text{otherwise}\end{cases} \tag{20}\]

The coefficients \(\boldsymbol{\theta}\) in (20) are estimated through the stochastic approximation algorithm in [39, Alg. 14] and [3, Alg. 1].

## VIII Digital Twin and System Identification

The DFSP algorithm described above approximates a Nash equilibrium of \(\Gamma\) (7) by simulating games and updating both players' strategies through reinforcement learning and dynamic programming. To identify the parameters required to instantiate these simulations and to evaluate the learned strategies, we use a digital twin of the target infrastructure (see Fig. 8). This section describes the digital twin (SSVIII-A) and the identification process (SSVIII-B).

### _Creating a Digital Twin of the Target Infrastructure_

We create a digital twin of the target infrastructure shown in Fig. 1 through an emulation system. Documentation of this emulation system is available in [6, 20]; the source code is available at [40]; and a video demonstration is available at [41]. The process of creating the digital twin involves two main tasks. The first task is to replicate relevant parts of the physical infrastructure that is emulated, such as physical resources, network interfaces, and network conditions. This task is described in SSVIII-A1. The second task is to emulate actors in the digital twin (e.g., the attacker, the defender, and the client population). We describe this task in SSVIII-A2.
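Before turning to the emulation details, the switching rule (20) can be made concrete. A minimal sketch follows; reading \(\boldsymbol{\theta}\) as two coefficients is our assumption, made so that the matrix product in (20) is well-defined for a 3-dimensional belief.

```python
import numpy as np

def switching_policy(b, theta):
    """Threshold strategy from the linear approximation (20).

    b     -- 3-dimensional belief state b^(D)
    theta -- two learned coefficients, so [0 1 theta] is 1x4 and [b^T; -1] is 4x1
    Returns 'S' (take the defensive action now) or 'C' (continue).
    """
    row = np.array([0.0, 1.0, theta[0], theta[1]])
    col = np.append(np.asarray(b, dtype=float), -1.0)
    return 'S' if row @ col > 0 else 'C'


# e.g. switching_policy([0.2, 0.5, 0.3], theta=[0.8, 0.4]) -> 'S'
```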
#### VIII-A1 Emulating physical resources

The physical resources of the target infrastructure are emulated through the following steps.

**Emulating physical hosts.** Physical hosts are emulated with Docker containers [69], i.e., lightweight executable packages that include runtime systems, code, system tools, system libraries, and configurations. Resource allocation to containers, e.g., CPU and memory, is enforced using cgroups. The software functions running inside the containers replicate important components of the target infrastructure, such as web servers, databases, the Snort IDPS [70], and the Ryu SDN controller [71].

**Emulating physical switches.** Physical switches are emulated with Docker containers that run Open vSwitch (OVS) [72] and may connect to a controller through the OpenFlow protocol [73]. (Since the switches are programmed through flow tables, they can act either as classical layer 2 switches or as routers, depending on the flow table configurations.)

Fig. 8: The digital twin is a virtual replica of the target infrastructure and is used in our framework for evaluation and data collection [6, 20].

**Emulating physical network links.** Network connectivity is emulated with virtual links implemented by Linux bridges. Network isolation between virtual containers on the same physical host is achieved through network namespaces, which create logical copies of the physical host's network stack. To connect containers on different physical hosts, the emulated traffic is tunneled over the physical network using VXLAN tunnels [74].

**Emulating network conditions.** Network conditions of virtual links are configured using the netem module in the Linux kernel [75]. This module allows fine-grained configuration of bit rates, packet delays, packet loss probabilities, jitter, and packet reordering probabilities. We emulate connections between nodes as full-duplex lossless connections of \(1\) Gbit/s capacity in both directions. We emulate connections between the gateway and the external client population as full-duplex connections of \(100\) Mbit/s capacity and \(0.1\%\) packet loss with random bursts of \(1\%\) packet loss. (These numbers are based on measurements on enterprise and wide-area networks [76, 77, 78].)

#### VIII-A2 Emulating Actors in the Digital Twin

In this section, we describe how the actors of the intrusion response use case described in SSIII are emulated in the digital twin.

**Emulating the client population.** The _client population_ is emulated by processes in Docker containers. Clients interact with application nodes through the gateway by consuming workflows. The workflow of a client is selected uniformly at random and its sequence of service invocations is decided uniformly at random. Client arrivals per time-step are emulated using a stationary Poisson process with rate \(\lambda=50\) and exponentially distributed service times with mean \(\mu=4\). The duration of a time-step in the emulation is \(30\) seconds.

**Emulating the attacker.** The attacker's actions are emulated by executing scripts that automate exploits (see Table II).

**Emulating the defender.** The four types of defender actions (see Fig. 4) are emulated as follows. To emulate the _node migration_ action, we remove all virtual network interfaces of the emulated node and add a new interface that connects it to the new zone.
To emulate the _flow migration/blocking_ action we add rules to the flow tables of the emulated switches that match all flows towards the node and redirect them to a given destination. To emulate the _node shut down_ action, we shut down the virtual container corresponding to the emulated node. Finally, to emulate the _access control_ action, we reset all user accounts and certificates on the emulated node.

### _Estimating the Observation Distributions_

Following the intrusion response use case described in SSIII, we define the observation \(\mathbf{O}_{i,t}\) to be the number of IDPS alerts associated with node \(i\) at time \(t\), weighted by priority. As our target infrastructure consists of \(64\) nodes (see App. C and Fig. 1), there are \(64\) alert distributions \(Z_{\mathbf{O}_{1}},\ldots,Z_{\mathbf{O}_{64}}\) (3). We estimate these distributions based on empirical data from the digital twin.

\begin{table}
\begin{tabular}{l l}
\hline
\hline
_Type_ & _Actions_ \\
\hline
Reconnaissance & TCP SYN scan, UDP port scan, TCP Xmas scan, \\
& vulscan vulnerability scanner, ping-scan \\
Brute-force & Telnet, SSH, FTP, Cassandra, \\
& IRC, MongoDB, MySQL, SMTP, Postgres \\
Exploit & CVE-2017-7494, CVE-2015-3306, \\
& CVE-2010-0426, CVE-2015-5602, CVE-2015-1427, \\
& CVE-2014-6271, CVE-2016-10033, \\
& CWE-89 weakness on the web app DVWA [79] \\
\hline
\hline
\end{tabular}
\end{table}
TABLE II: Attacker actions executed on the digital twin; actions that exploit vulnerabilities in specific software products are identified by the vulnerability identifiers in the Common Vulnerabilities and Exposures (CVE) database [80]; actions that exploit vulnerabilities that are not described in the CVE database are categorized according to the types of the vulnerabilities they exploit, based on the Common Weakness Enumeration (CWE) list [81]; shell commands and scripts for executing the actions are listed in [82]; further details about the actions can be found in Appendix D.

At the end of every time-step in the digital twin we collect the number of IDPS alerts that occurred during the time-step. These values are then used to compute the vector \(\mathbf{o}_{t}\), which contains the total number of IDPS alerts per node, weighted by priority. For the evaluation in this paper we collect measurements from \(10^{4}\) time-steps using the Snort IDPS [70]. (Each time-step in the digital twin is \(30\) seconds.) Based on these measurements, we compute the empirical distributions \(\widetilde{Z}_{\mathbf{O}_{1}},\ldots,\widetilde{Z}_{\mathbf{O}_{64}}\) as estimates of \(Z_{\mathbf{O}_{1}},\ldots,Z_{\mathbf{O}_{64}}\) (see Fig. 9).

We observe in Fig. 9 that the distributions differ between nodes, which can be explained by the different services provided by the nodes (see App. C). We further observe that both the distributions when no intrusion occurs and the distributions during intrusion have most of their probability mass within \([0,300]\). The distributions during intrusion also have substantial probability mass at larger values.

Remark: the stochastic matrices with the rows \(\widehat{Z}_{\mathbf{O}_{i}\mid\mathbf{s}_{i}^{(\mathrm{A})}=(0,0)}\) and \(\widehat{Z}_{\mathbf{O}_{i}\mid\mathbf{s}_{i}^{(\mathrm{A})}\neq(0,0)}\) have \(250\times 10^{9}\) second-order minors, which are almost all non-negative. This suggests that the TP-2 assumption in Thm. 2.C can be made.
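The TP-2 check used in the remark above (non-negativity of all second-order minors) is mechanical. The following minimal sketch verifies it for the analytical transition matrices in (19); the values of \(p\) and \(q\) are placeholders.

```python
import numpy as np
from itertools import combinations

def second_order_minors(M):
    """All 2x2 minors (determinants of 2x2 submatrices) of a square matrix."""
    idx = range(M.shape[0])
    return [np.linalg.det(M[np.ix_(r, c)])
            for r in combinations(idx, 2) for c in combinations(idx, 2)]

p, q = 0.4, 0.3   # placeholder attack probabilities
continue_matrix = np.array([[1 - p, p, 0.0], [0.0, 1 - q, q], [0.0, 0.0, 1.0]])
stop_matrix     = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])

for M in (continue_matrix, stop_matrix):
    # TP-2 holds iff every second-order minor is non-negative
    assert all(m >= -1e-12 for m in second_order_minors(M))
```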
## IX Experimental Evaluation

Our approach to finding near-optimal defender strategies includes learning Nash equilibrium strategies via the DFSP algorithm and evaluating these strategies on the digital twin (see Fig. 2). This section describes the evaluation results.

**Experiment setup.** The instantiation of \(\Gamma\) (7) and the hyperparameters are listed in App. A. We evaluate DFSP both on a digital twin of the target infrastructure and in simulations of synthetic infrastructures. The topology of the target infrastructure is depicted in Fig. 1 and its configuration is available in App. C. The digital twin is deployed on a server with a \(24\)-core Intel Xeon Gold \(2.10\) GHz CPU and \(768\) GB RAM. Simulations of \(\Gamma\) and executions of DFSP run on a cluster with 2x Tesla P100 GPUs, 4x RTX 8000 GPUs, and 3x 16-core Intel Xeon \(3.50\) GHz CPUs. Code for replicating the experiments is available in [6, 40].

**Convergence metric.** To estimate the convergence of the sequence of strategy pairs generated by DFSP, we use the _approximate exploitability_ metric \(\widehat{\delta}\) [84]:

\[\widehat{\delta}=\mathbb{E}_{\widehat{\pi}_{\mathrm{D}},\pi_{\mathrm{A}}}\left[J\right]-\mathbb{E}_{\pi_{\mathrm{D}},\widehat{\pi}_{\mathrm{A}}}\left[J\right] \tag{21}\]

where \(J\) is defined in (4) and \(\widehat{\pi}_{\mathrm{k}}\) denotes an approximate best response strategy for player \(\mathrm{k}\). The closer \(\widehat{\delta}\) becomes to \(0\), the closer \((\pi_{\mathrm{D}},\pi_{\mathrm{A}})\) is to a Nash equilibrium.

**Baseline algorithms.** We compare the performance of our approach (\(\pi^{\text{decomposition}}\)) with two baselines: \(\pi^{\text{full}}\) and \(\pi^{\text{workflow}}\). Baseline \(\pi^{\text{full}}\) solves the full game without decomposition and \(\pi^{\text{workflow}}\) decomposes the game on the workflow level only. We compare the performance of DFSP with that of Neural Fictitious Self-Play (NFSP) [85, Alg. 1] and PPO [68, Alg. 1], which are the most popular algorithms among related work (see [31, SSVII] for a review of algorithms used in related work).

**Baseline strategies.** We compare the defender strategies learned through DFSP with three baselines. The first baseline selects actions uniformly at random. The second baseline assumes prior knowledge of the opponent's actions and acts optimally based on this information. The last baseline acts according to the following heuristic: shut down a node \(i\in\mathcal{V}\) when an IDPS alert occurs, i.e., when \(\mathbf{o}_{i,t}>0\).

Fig. 10: Best response learning via decomposition; (a) shows learning curves in simulation for synthetic infrastructures and the target infrastructure; the curves show the mean and 95% confidence interval for five random seeds; (b) shows execution times of computing best responses via dynamic programming and Sondik's value iteration algorithm [83]; (c) shows the speedup of our approach when computing best responses with different numbers of parallel processes; the speedup is calculated as \(S_{n}=\frac{T_{1}}{T_{n}}\), where \(T_{n}\) is the completion time with \(n\) processes; and (d) shows an estimated switching curve (Thm. 2.C).

### _Learning Best Responses Against Static Opponents_

We first examine whether our method can discover effective strategies against a _static_ opponent strategy, which in game-theoretic terms is the problem of finding best responses (9)-(10). The static strategies are defined in App. B.
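In practice both expectations in (21) are estimated from simulated episodes. A minimal sketch follows; the `simulate` interface (returning one sampled value of \(J\) for a strategy pair) is hypothetical.

```python
import numpy as np

def approximate_exploitability(simulate, pi_D, pi_A, br_D, br_A, episodes=500):
    """Monte-Carlo estimate of delta^ in (21).

    simulate(pi_D, pi_A) -- returns one sampled value of J (4)
    br_D, br_A           -- approximate best responses against pi_A and pi_D
    """
    J_hat = lambda d, a: np.mean([simulate(d, a) for _ in range(episodes)])
    return J_hat(br_D, pi_A) - J_hat(pi_D, br_A)
```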
To measure the scalability of \(\pi^{\text{decomposition}}\) we compare its performance with \(\pi^{\text{workflow}}\) and \(\pi^{\text{full}}\) on synthetic infrastructures with varying numbers of nodes \(|\mathcal{V}|\) and workflows \(|\mathcal{W}|\). To evaluate the optimal stopping approach described in SSVII we compare its rate of convergence with that of PPO.

Figure 10a shows the learning curves. The red, purple, and pink curves represent the results obtained with \(\pi^{\text{decomposition}}\); the blue and beige curves represent the results obtained with \(\pi^{\text{workflow}}\); the orange and green curves represent the results obtained with \(\pi^{\text{full}}\); and the dashed black lines relate to the baseline strategy that assumes prior knowledge of the opponent's strategy. We note in Fig. 10a that all the learning curves of \(\pi^{\text{decomposition}}\) converge near the dashed black lines, which suggests that the learned strategies are close to best responses. In contrast, the learning curves of \(\pi^{\text{workflow}}\) and \(\pi^{\text{full}}\) do not converge near the dashed black lines within the measured time. This is expected, as \(\pi^{\text{workflow}}\) and \(\pi^{\text{full}}\) cannot be parallelized like \(\pi^{\text{decomposition}}\). (The speedup of parallelization is shown in Fig. 10c.) Lastly, we note in the rightmost plot of Fig. 10a that the optimal stopping approach, which exploits the statement in Thm. 2.C, converges significantly faster than PPO. An example of a learned optimal stopping strategy based on the linear approximation in (20) is shown in Fig. 10d.

### _Learning Equilibrium Strategies through Self-Play_

Figures 11a-11b show the learning curves of the strategies obtained during the DFSP self-play process and the baselines introduced above. The red curves represent the results from the simulator; the blue curves show the results from the digital twin; the green curve gives the performance of the random baseline; the orange curve relates to the \(\mathbf{o}_{i,t}>0\) baseline; and the dashed black line gives the performance of the baseline strategy that assumes prior knowledge of the attacker actions.

We note that all learning curves in Fig. 11a converge, which suggests that the learned strategies converge as well. Specifically, we observe that the approximate exploitability (21) of the learned strategies converges to small values (left plot), which indicates that the learned strategies approximate a Nash equilibrium both in the simulator and in the digital twin. Further, we see from the middle plot that both baseline strategies show decreasing performance as the attacker updates its strategy. In contrast, the defender strategy learned through DFSP improves its performance over time. This shows the benefit of a game-theoretic approach where the defender strategy is optimized against a dynamic attacker.

Figure 11b compares DFSP with NFSP on the simulator. NFSP implements fictitious self-play and can thus be compared with DFSP with respect to approximate exploitability (21). We observe that DFSP converges significantly faster than NFSP. The fast convergence of DFSP in comparison with NFSP is expected, as DFSP is parallelizable while NFSP is not.

### _Discussion of the Evaluation Results_

In this work, we propose a formal framework based on recursive decomposition for solving the intrusion response use case, which we evaluate experimentally on a digital twin. The key findings can be summarized as follows.
1. Our framework approximates optimal defender strategies for a practical IT infrastructure (see Fig. 11a). While we have not evaluated the learned strategies on the target infrastructure due to safety reasons, the fact that they achieve almost the same performance on the digital twin as on the simulator gives us confidence in the strategies' expected performance on the target infrastructure.
2. Decomposition provides a scalable approach to automate intrusion response for IT infrastructures (see Fig. 10a and Fig. 11b). The intuition behind this finding is that decomposition allows the design of efficient divide-and-conquer algorithms that can be parallelized (see Thm. 2.A-B and Alg. 1).
3. The theory of optimal stopping provides insight about optimal defender strategies, which enables efficient computation of best responses (see the rightmost plot in Fig. 10a). This property can be explained by the threshold structures of the best response strategies, which drastically reduce the search space of possible strategies (Thm. 2.C).
4. Static defender strategies' performance deteriorates against a dynamic attacker, whereas defender strategies learned through DFSP improve over time (see the right plot in Fig. 11a). This finding is consistent with previous studies that use game-theoretic approaches (e.g., [51] and [31]) and suggests fundamental limitations of static intrusion response systems, such as the Snort IDPS.

## X Conclusions and Future Work

We include elements of game theory and reinforcement learning in a framework to address the problem of automated intrusion response for a realistic use case. We formalize the use case as a partially observed stochastic game. We prove a decomposition theorem stating that the game decomposes recursively into subgames that can be solved efficiently and in parallel, and that the best response defender strategies exhibit threshold structures. This decomposition provides us with a scalable approach to learn near-optimal defender strategies. We develop Decompositional Fictitious Self-Play (DFSP) - a fictitious self-play algorithm for finding Nash equilibria. To assess the learned strategies for a target infrastructure, we evaluate them on a digital twin. The results demonstrate that DFSP converges in reasonable time to near-optimal strategies, both in simulation and on the digital twin, while a state-of-the-art algorithm makes little progress toward an optimal strategy within the same time frame.

## XI Acknowledgments

The authors would like to thank Pontus Johnson and Quanyan Zhu for useful inputs to this research. The authors are also grateful to Forough Shahab Samani and Xiaoxuan Wang for their constructive comments on a draft of this paper.

## Appendix A Hyperparameters and Game Instantiation

We instantiate \(\Gamma\) (7) for the experimental evaluation as follows. Client arrivals are sampled from a stationary Poisson process \(Po(\lambda=50)\) and service times are exponentially distributed with mean \(\mu=4\). In addition to migrating a node, the defender can shut it down or redirect its traffic to a honeynet, which we model with the zones \(\mathfrak{S},\mathfrak{R}\in\mathcal{Z}\). A node \(i\in\mathcal{V}\) is shut down if \(v_{i,t}^{(\mathrm{Z})}=\mathfrak{S}\) and has its traffic redirected if \(v_{i,t}^{(\mathrm{Z})}=\mathfrak{R}\). The set of local attacker actions is \(\mathcal{A}_{\mathrm{A}}^{(\mathrm{V})}=\{\bot,\mathrm{reconnaissance},\mathrm{brute\text{-}force},\mathrm{exploit}\}\), which we encode as \(\{0,1,2,3\}\).
These actions have the following effects on the state \(\mathbf{s}_{t}\): \(\mathbf{a}_{i,t}^{(\mathrm{A})}=1\implies v_{i,t}^{(\mathrm{R})}=1\), \(\mathbf{a}_{i,t}^{(\mathrm{A})}=2\implies v_{i,t}^{(\mathrm{I})}=1\) with probability \(0.3\), and \(\mathbf{a}_{i,t}^{(\mathrm{A})}=3\implies v_{i,t}^{(\mathrm{I})}=1\) with probability \(0.4\). We enforce a tree structure on the target infrastructure in Fig. 1 by disregarding the redundant edges in the R&D zone. The remaining parameters are listed in Table 3.

## Appendix B Static Defender and Attacker Strategies

The static defender and attacker strategies for the evaluation described in SSIX-A are defined in (22)-(23). (w.p. is short for "with probability".)

\[\pi_{\mathrm{D}}(\mathbf{h}_{t}^{(\mathrm{D})})_{i}=\begin{cases}\bot&\text{w.p. }0.95\\ j\in\mathcal{Z}&\text{w.p. }\frac{0.05}{|\mathcal{Z}|+1}\end{cases} \tag{22}\]

\[\pi_{\mathrm{A}}(\mathbf{h}_{t}^{(\mathrm{A})})_{i}=\begin{cases}\bot&\text{if }v_{i,t}^{(\mathrm{I})}=1\\ \bot&\text{w.p. }0.8\text{ if }v_{i,t}^{(\mathrm{R})}=0\\ \bot&\text{w.p. }0.7\text{ if }v_{i,t}^{(\mathrm{R})}=1,v_{i,t}^{(\mathrm{I})}=0\\ \text{recon}&\text{w.p. }0.2\text{ if }v_{i,t}^{(\mathrm{R})}=0\\ \text{brute}&\text{w.p. }0.15\text{ if }v_{i,t}^{(\mathrm{R})}=1,v_{i,t}^{(\mathrm{I})}=0\\ \text{exploit}&\text{w.p. }0.15\text{ if }v_{i,t}^{(\mathrm{R})}=1,v_{i,t}^{(\mathrm{I})}=0\end{cases} \tag{23}\]

## Appendix C Configuration of the Infrastructure in Fig. 1

The configuration of the target infrastructure (Fig. 1) is available in Tables 4 and 5.

## Appendix D Attacker Actions

The attacker actions and their descriptions are listed in Table 6.
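For concreteness, the static attacker strategy (23) and the local action effects from this appendix can be sketched as follows. The integer action encodings follow App. A; the precondition that a node must be discovered before it can be compromised follows SSV.

```python
import random

def static_attacker_action(v_R, v_I):
    """Sample a local action from the static attacker strategy (23)."""
    if v_I == 1:
        return 0                         # null action (bot)
    r = random.random()
    if v_R == 0:
        return 1 if r < 0.2 else 0       # reconnaissance w.p. 0.2
    if r < 0.15:
        return 2                         # brute-force w.p. 0.15
    if r < 0.30:
        return 3                         # exploit w.p. 0.15
    return 0                             # null action w.p. 0.7

def local_attacker_transition(v_R, v_I, a_A, defended):
    """Apply the action effects from App. A to one node's (v^R, v^I)."""
    if defended:                         # any defensive action resets the node
        return 0, 0
    if a_A == 1:
        v_R = 1                          # a=1 => v^R = 1
    elif a_A == 2 and v_R == 1 and random.random() < 0.3:
        v_I = 1                          # a=2 => v^I = 1 w.p. 0.3
    elif a_A == 3 and v_R == 1 and random.random() < 0.4:
        v_I = 1                          # a=3 => v^I = 1 w.p. 0.4
    return v_R, v_I
```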
2310.20528
Higher-order reductions of the Mikhalev system
We consider the 3D Mikhalev system, $$ u_t=w_x, \quad u_y= w_t-u w_x+w u_x, $$ which first appeared in the context of KdV-type hierarchies. Under the reduction $w=f(u)$, one obtains a pair of commuting first-order equations, $$ u_t=f'u_x, \quad u_y=(f'^2-uf'+f)u_x, $$ which govern simple wave solutions of the Mikhalev system. In this paper we study {\it higher-order} reductions of the form $$ w=f(u)+\epsilon a(u)u_x+\epsilon^2[b_1(u)u_{xx}+b_2(u)u_x^2]+..., $$ which turn the Mikhalev system into a pair of commuting higher-order equations. Here the terms at $\epsilon^n$ are assumed to be differential polynomials of degree $n$ in the $x$-derivatives of $u$. We will view $w$ as an (infinite) formal series in the deformation parameter $\epsilon$. It turns out that for such a reduction to be non-trivial, the function $f(u)$ must be quadratic, $f(u)=\lambda u^2$; furthermore, the value of the parameter $\lambda$ (which has a natural interpretation as an eigenvalue of a certain second-order operator acting on an infinite jet space) is quantised. There are only two positive allowed eigenvalues, $\lambda=1$ and $\lambda=3/2$, as well as infinitely many negative rational eigenvalues. Two-component reductions of the Mikhalev system are also discussed. We emphasise that the existence of higher-order reductions of this kind is a reflection of {\it linear degeneracy} of the Mikhalev system; in particular, such reductions do not exist for most of the known 3D dispersionless integrable systems such as the dispersionless KP and Toda equations.
E. V. Ferapontov, V. Novikov, I. Roustemoglou
2023-10-31T15:11:51Z
http://arxiv.org/abs/2310.20528v2
# Higher-order reductions of the Mikhalev system ###### Abstract We consider the 3D Mikhalev system, \[u_{t}=w_{x},\quad u_{y}=w_{t}-uw_{x}+wu_{x},\] which has been discussed in the context of KdV-type hierarchies. Under the reduction \(w=f(u)\), one obtains a pair of commuting first-order equations, \[u_{t}=f^{\prime}u_{x},\quad u_{y}=(f^{\prime 2}-uf^{\prime}+f)u_{x},\] which govern simple wave solutions of the Mikhalev system. In this paper we study _higher-order_ reductions of the form \[w=f(u)+\epsilon a(u)u_{x}+\epsilon^{2}[b_{1}(u)u_{xx}+b_{2}(u)u_{x}^{2}]+...,\] which turn the Mikhalev system into a pair of commuting higher-order equations. Here the terms at \(\epsilon^{n}\) are assumed to be differential polynomials of degree \(n\) in the \(x\)-derivatives of \(u\). We will view \(w\) as an (infinite) formal series in the deformation parameter \(\epsilon\). It turns out that for such a reduction to be non-trivial, the function \(f(u)\) must be quadratic, \(f(u)=\lambda u^{2}\), furthermore, the value of the parameter \(\lambda\) (which has a natural interpretation as an eigenvalue of a certain second-order operator acting on an infinite jet space), is quantised. There are only two positive allowed eigenvalues, \(\lambda=1\) and \(\lambda=3/2\), as well as infinitely many negative rational eigenvalues. Two-component reductions of the Mikhalev system are also discussed. We emphasise that the existence of higher-order reductions of this kind is a reflection of _linear degeneracy_ of the Mikhalev system, in particular, such reductions do not exist for most of the known 3D dispersionless integrable systems such as the dispersionless KP and Toda equations. MSC: 35B06, 35C05, 35L10, 35Q51, 37K10 Keywords: dispersionless integrable system, Mikhalev system, hydrodynamic reduction, higher-order reduction, Burgers, KdV and Camassa-Holm equations. ###### Contents * 1 Introduction * 2 Proof of Theorem 1 * 3 Eigenvalues \(\lambda\) * 4 Higher-order deformations of one-component reductions * 4.1 One-component dissipative reductions (\(\lambda=1\)) * 4.2 One-component dispersive reductions (\(\lambda=3/2\)) * 5 Higher-order deformations of two-component reductions * 5.1 Two-component dissipative reductions * 5.2 Two-component dispersive reductions * 6 Non-existence of higher-order reductions for the dKP system * 7 Concluding remarks ## 1 Introduction We consider the 3D Mikhalev system, \[u_{t}=w_{x},\qquad u_{y}=w_{t}-uw_{x}+wu_{x}, \tag{1}\] which was apparently first introduced by V. Mikhalev [14] in the context of Hamiltonian formalism of KdV type hierarchies, and subsequently investigated in the framework of multi-dimensional dispersionless integrability, see e.g. [18, 9, 8]. We will be interested in the classification of \((1+1)\)-dimensional reductions of (1) obtained by setting \[w=f(u)+\epsilon a(u)u_{x}+\epsilon^{2}[b_{1}(u)u_{xx}+b_{2}(u)u_{x}^{2}]+... \tag{2}\] where the terms at \(\epsilon^{n}\) are differential polynomials of degree \(n\) in the \(x\)-derivatives of \(u\) (whose coefficients are some functions of \(u\)). We will view (2) as an infinite formal series in the deformation parameter \(\epsilon\). A reduction will be called nontrivial if at least one of the \(\epsilon\)-dependent terms is nonzero. The substitution of (2) into (1) implies \[u_{t}=w^{*}u_{x},\qquad u_{y}=((w^{*})^{2}-uw^{*}+w)u_{x}, \tag{3}\] where \(w^{*}=w_{u}+w_{u_{x}}\frac{d}{dx}+w_{u_{xx}}\frac{d^{2}}{dx^{2}}+\dots\) is the operator of formal linearisation. 
Thus, we obtain two equations for the scalar variable \(u\). Requiring their compatibility, \(u_{yt}=u_{ty}\) (which should be satisfied at all orders of the deformation parameter \(\epsilon\)), one obtains strong constraints for the coefficients of expansion (2). There are only two cases of terminating expansions, leading to the well-known integrable hierarchies. **Example 1.** The ansatz \(w=u^{2}+\epsilon u_{x}\) reduces (1) to the first two flows of the Burgers' hierarchy, \[u_{t}=2uu_{x}+\epsilon u_{xx},\qquad u_{y}=3u^{2}u_{x}+3\epsilon(u_{x}^{2}+uu_{xx})+\epsilon^{2}u_{xxx}.\] **Example 2.** The ansatz \(w=\frac{3}{2}u^{2}+\epsilon^{2}u_{xx}\) reduces (1) to the first two flows of the KdV hierarchy, \[u_{t}=3uu_{x}+\epsilon^{2}u_{xxx},\qquad u_{y}=\frac{15}{2}u^{2}u_{x}+5\epsilon^{2}(uu_{xxx}+2u_{x}u_{xx})+\epsilon^{4}u_{xxxxx}.\] In the language of the 'universal hierarchy of hydrodynamic type' and 'integrable hydrodynamic chains', terminating reductions of this kind were studied in [11, 12, 13, 18]. Our first main result is as follows (see Section 2 for the proof): **Theorem 1**: _For a non-trivial reduction (2) of system (1), the function \(f(u)\) must be quadratic._ Thus, one can set \(f(u)=\alpha+\beta u+\lambda u^{2}\). Here the parameters \(\alpha,\beta\) are not essential and can be eliminated using the following symmetries of system (1): (a): \(\tilde{x}=x+\alpha y,\ \tilde{y}=y,\ \tilde{t}=t,\ \tilde{u}=u,\ \tilde{w}=w-\alpha\); (b): \(\tilde{x}=x+\beta^{2}y+\beta t,\ \tilde{y}=y,\ \tilde{t}=t+2\beta y,\ \tilde{u}=u,\ \tilde{w}=w-\beta u\). Therefore, without any loss of generality one can assume \(f(u)=\lambda u^{2}\). Further analysis of Section 3 shows that the allowed values of the parameter \(\lambda\) are 'quantised' and cannot be arbitrary. There are only two positive eigenvalues, \(\lambda=1\) and \(\lambda=\frac{3}{2}\). Computer experiments suggest that there exist infinitely many series of negative rational eigenvalues \(\lambda\), in particular, \[\begin{array}{lll}\lambda=\frac{2}{2(2-n)}\ (n\geq 3),&\lambda=\frac{3}{2(3-n)}\ (n\geq 4),&\lambda=\frac{4}{2(4-n)}\ (n\geq 5),\\ \lambda=\frac{5}{2(5-n)}\ (n\geq 6),&\lambda=\frac{6}{2(6-n)}\ (n\geq 8),&\lambda=\frac{7}{2(7-n)}\ (n\geq 10),\end{array}\] etc., although we were not able to find a general formula giving all of them. In Section 4 we study in more detail deformations (2) corresponding to the two positive values of \(\lambda\). Thus, the general expansion (2) for \(\lambda=1\) (dissipative case) has the form \[w=u^{2}+\epsilon au_{x}+\frac{\epsilon^{2}}{2}(a^{2})_{xx}+\frac{\epsilon^{3}}{2}\Big[(a^{2}a^{\prime})_{xx}-(aa^{\prime}a^{\prime\prime}+\frac{1}{6}a^{2}a^{\prime\prime\prime})u_{x}^{2}\Big]_{x}+\ldots,\] where \(a(u)\) is an arbitrary function. For \(a=1\) it reproduces the Burgers' reduction from Example 1. For \(a=u\) one obtains another well-known case, the so-called viscous Camassa-Holm reduction; see Section 4.1 for more details.
Similarly, the general expansion (2) for \(\lambda=\frac{3}{2}\) (dispersive case) has the form \[\begin{array}{l} w=\frac{3}{2}u^{2}+\epsilon^{2}(au_{xx}+\frac{5}{4}a^{ \prime}u_{x}^{2})+\frac{\epsilon^{4}}{128}\left(64aa^{\prime}u_{xxxx}+32(9{a^ {\prime}}^{2}+7aa^{\prime\prime})u_{x}u_{xxx}+\right.\\ \left.+16(14{a^{\prime}}^{2}+9aa^{\prime\prime})u_{xx}^{2}+96\left(13a^{ \prime}a^{\prime\prime}+3aa^{\prime\prime\prime}\right)u_{x}^{2}u_{xx}+5(52a^ {\prime\prime\prime}a^{\prime}+44{a^{\prime\prime}}^{2}+9aa^{(4)})u_{x}^{4} \right)+\ldots,\end{array}\] where \(a(u)\) is an arbitrary function. For \(a=1\) it reproduces the KdV reduction from Example 2. For \(a=u\) one obtains the Camassa-Holm reduction, see Section 4.2 for more details. In Section 5 we study higher-order (dissipative and dispersive) deformations of two-component hydrodynamic reductions of the Mikhalev system (1). Finally, in Section 6 we prove non-existence of reductions of type (2) for the dispersionless KP equation. ## 2 Proof of Theorem 1 Here we prove our first main result. **Theorem 1**_For a non-trivial reduction (2) of system (1), the function \(f(u)\) must be quadratic._ **Proof:** If \(f\) is not quadratic, direct calculations show that terms at \(\epsilon,\epsilon^{2},\epsilon^{3},\dots\) in the expansion (2) must vanish identically. Suppose that the first nonzero term occurs at \(\epsilon^{n}\), \[w=f(u)+\epsilon^{n}w_{n}+...,\] where \(w_{n}\) is a homogeneous differential polynomial of differential degree \(n\) in the \(x\)-derivatives of \(u\). Equations (1) take the form \[\begin{array}{l} u_{t}=f^{\prime}u_{x}+\epsilon^{n}w_{n,x}+\ldots, \\ u_{y}=(f^{\prime 2}-uf^{\prime}+f)u_{x}+\epsilon^{n}[w_{n}u_{x}+(f^{\prime}-u)w_{n,x} +w_{n,t}]+\ldots.\end{array} \tag{4}\] Calculating the compatibility condition of (4), \(u_{yt}=u_{ty}\), at \(\epsilon^{n}\) one obtains that \(w_{n}\) must satisfy the linear equation \[w_{n,tt}-w_{n,xy}-uw_{n,xt}+fw_{n,xx}+(1-f^{\prime\prime})u_{x}(w_{n,t}-f^{ \prime}w_{n,x})=0, \tag{5}\] where one has to use dispersionless limits of equations (4), \(u_{t}=f^{\prime}u_{x}\) and \(u_{y}=(f^{\prime 2}-uf^{\prime}+f)u_{x}\), to calculate \(t\)- and \(y\)-derivatives of \(w_{n}\). Introducing the corresponding commuting operators of total differentiation, \[X=u_{x}\frac{\partial}{\partial u}+u_{xx}\frac{\partial}{\partial u_{x}}+\dots,\] \[T=f^{\prime}u_{x}\frac{\partial}{\partial u}+(f^{\prime}u_{x})_{x}\frac{ \partial}{\partial u_{x}}+\dots,\] \[Y=(f^{\prime 2}-uf^{\prime}+f)u_{x}\frac{\partial}{\partial u}+((f^{\prime 2}-uf^ {\prime}+f)u_{x})_{x}\frac{\partial}{\partial u_{x}}+\dots,\] one can rewrite equation (5) in the form \(Lw_{n}=0\) where \(L\) is a second-order operator, \[L=T^{2}-YX-uTX+fX^{2}+(1-f^{\prime\prime})u_{x}(T-f^{\prime}X);\] note that \(L\) is independent of \(n\). It is convenient to introduce the operators \[\tilde{T}=T-f^{\prime}X,\quad\tilde{Y}=Y-(f^{\prime 2}-uf^{\prime}+f)X,\quad S =(2f^{\prime}-u)\tilde{T}-\tilde{Y},\] note that \(\tilde{T}\) and \(\tilde{Y}\) do not involve differentiation \(\partial_{u}\), furthermore, \(S\) does not involve differentiations \(\partial_{u}\) and \(\partial_{u_{x}}\). In this notation, \(L\) takes the form \[L=\tilde{T}^{2}+SX+(1-f^{\prime\prime})u_{x}\tilde{T}; \tag{6}\] We also note that \(L\) does not explicitly depend on \(f\) and \(f^{\prime}\). Let us introduce the standard notation for the jet variables: \((u,u_{x},u_{xx},\dots)=(u,u_{1},u_{2},\dots)\). 
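As an independent sanity check (again an illustrative SymPy sketch, not part of the paper, using the jet conventions just introduced), one can verify that the operator \(L\) annihilates the first corrections of the two terminating reductions of Section 1: \(w_{1}=u_{1}\) for \(f=u^{2}\) (Burgers, \(\lambda=1\)) and \(w_{2}=u_{2}\) for \(f=\frac{3}{2}u^{2}\) (KdV, \(\lambda=3/2\)).

```python
import sympy as sp

N = 10
U = sp.symbols(f'u0:{N}')
u0, u1 = U[0], U[1]

def Dx(e):
    """Total x-derivative: u_k -> u_{k+1}."""
    return sum(sp.diff(e, U[k]) * U[k + 1] for k in range(N - 1))

def flow_derivative(rhs):
    """Total derivative along u_t = rhs, i.e. u_k -> D_x^k(rhs)."""
    def D(e):
        out, d = 0, rhs
        for k in range(N - 1):
            out += sp.diff(e, U[k]) * d
            d = Dx(d)
        return out
    return D

def L_apply(f, wn):
    fp, fpp = sp.diff(f, u0), sp.diff(f, u0, 2)
    X = Dx
    T = flow_derivative(fp * u1)                    # u_t = f' u_x
    Y = flow_derivative((fp**2 - u0*fp + f) * u1)   # u_y = (f'^2 - u f' + f) u_x
    return sp.expand(T(T(wn)) - Y(X(wn)) - u0*T(X(wn)) + f*X(X(wn))
                     + (1 - fpp)*u1*(T(wn) - fp*X(wn)))

assert L_apply(u0**2, u1) == 0                      # Burgers: lambda = 1
assert L_apply(sp.Rational(3, 2)*u0**2, U[2]) == 0  # KdV: lambda = 3/2
```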
Each differential monomial has two naturally defined degrees: differential degree \(n\) and algebraic degree \(k\). Thus, the differential monomial \(u_{\sigma_{1}}\dots u_{\sigma_{k}}\) has differential degree \(n=\sigma_{1}+\dots+\sigma_{k}\) and algebraic degree k. Let \(P_{k}(n)\) be the space of differential polynomials of differential degree \(n\) and algebraic degree \(k\), spanned by the corresponding differential monomials. Then \(\dim P_{k}(n)=p_{k}(n)\) where \(p_{k}(n)\), the number of differential monomials of differential degree \(n\) and algebraic degree \(k\), equals the number of partitions of \(n\) into \(k\) parts. We will agree that the variable \(u\) has both differential and algebraic degrees equal to zero. We will also need the following properties of operators \(\tilde{T},S\) and \(X\) entering formula (6). First of all, each of these operators increases by 1 the differential degree of any differential monomial. It will be convenient to represent \(X\) in the form \(X=X_{0}+u_{1}\frac{\partial}{\partial u}\) where \(X_{0}=u_{2}\frac{\partial}{\partial u_{1}}+u_{3}\frac{\partial}{\partial u_{2 }}+\dots\) preserves algebraic degree of any differential monomial, while \(u_{1}\frac{\partial}{\partial u}\) increases it by 1. Similarly, we have \(\tilde{T}=\tilde{T}_{0}+\dots\) where \[\tilde{T}_{0}=f^{\prime\prime}\left(u_{1}^{2}\frac{\partial}{\partial u_{1}}+ 3u_{1}u_{2}\frac{\partial}{\partial u_{2}}+(4u_{1}u_{3}+3u_{2}^{2})\frac{ \partial}{\partial u_{3}}+\dots\right);\] note that \(\tilde{T}_{0}\) increases by 1 both algebraic and differential degrees of any differential monomial. Here dots in the formula \(\tilde{T}=\tilde{T}_{0}+\dots\) denote an operator which increases algebraic degree by more than 1. Finally, \(S=S_{0}+\dots\) where \[S_{0}=f^{\prime\prime}(1-2f^{\prime\prime})\left(u_{1}^{3}\frac{\partial}{ \partial u_{2}}+6u_{1}^{2}u_{2}\frac{\partial}{\partial u_{3}}+\dots\right);\] note that \(S_{0}\) increases by 2 the algebraic degree, and by 1 the differential degree of any differential monomial. Here dots in the formula \(S=S_{0}+\dots\) denote an operator which increases algebraic degree by more than 2. Thus, one can represent the second-order operator \(L\) given by (6) in the form \(L=L_{0}+\dots\) where \[L_{0}=\tilde{T}_{0}^{2}+S_{0}X_{0}+(1-f^{\prime\prime})u_{1}\tilde{T_{0}}; \tag{7}\] note that \(L_{0}\) increases by 2 both algebraic and differential degrees of any differential monomial. Here dots in the formula \(L=L_{0}+\dots\) denote a second-order operator which increases algebraic degree by more than 2. Finally, let us note that \(L_{0}\) depends on \(f\) only via the second derivative \(f^{\prime\prime}\). Our goal is to show that, if \(f^{\prime\prime}\neq const\), the kernel of \(L\) is trivial: \(Lw_{n}=0\implies w_{n}=0\). To prove this we represent \(w_{n}\), which is a differential polynomial of differential degree \(n\), in the form \[w_{n}=w_{n}^{(1)}+w_{n}^{(2)}+w_{n}^{(3)}+\dots, \tag{8}\] where \(w_{n}^{(i)}\) are differential polynomials of differential degree \(n\) and algebraic degree \(i\): \[w_{n}^{(1)}=\mbox{span}\langle u_{n}\rangle,\quad w_{n}^{(2)}=\mbox{span} \langle u_{n-s}u_{s}\rangle,\] etc. Here coefficients in the span are allowed to be arbitrary functions of \(u\). Suppose \(w_{n}^{(k)}\) is the first nonzero term in expansion (8). Then the term of lowest algebraic degree in the expression \(Lw_{n}\) will be \(L_{0}w_{n}^{(k)}\). 
As \(Lw_{n}=0\) implies \(L_{0}w_{n}^{(k)}=0\), it is sufficient to show that \(L_{0}w_{n}^{(k)}=0\implies w_{n}^{(k)}=0\). The condition \(L_{0}w_{n}^{(k)}=0\) gives a linear homogeneous system for the coefficients of the differential polynomial \(w_{n}^{(k)}\); the matrix of this system depends on \(f^{\prime\prime}\). The requirement of non-trivial solvability of this system leads to two cases: either there is an algebraic constraint for \(f^{\prime\prime}\) (in which case we are done, as \(f^{\prime\prime}\) will be constant), or the system is solvable for every value of \(f^{\prime\prime}\). The latter case leads to a contradiction: if the system is non-trivially solvable for every value of \(f^{\prime\prime}\), let us set \(f^{\prime\prime}=\frac{1}{2}\). In this case the operator \(S_{0}\) vanishes identically, and the condition \(L_{0}w_{n}^{(k)}=0\) simplifies to \((\tilde{T}_{0}^{2}+\frac{1}{2}u_{1}\tilde{T}_{0})w_{n}^{(k)}=0\). Note however that the lexicographically top monomial in the expression \((\tilde{T}_{0}^{2}+\frac{1}{2}u_{1}\tilde{T}_{0})w_{n}^{(k)}\) equals the lexicographically top monomial of \(w_{n}^{(k)}\) multiplied by \(cu_{1}^{2}\) where \(c\) is some positive constant (as all coefficients of the operator \(\tilde{T}_{0}^{2}+\frac{1}{2}u_{1}\tilde{T}_{0}\) are positive), and therefore cannot be zero. This contradiction finishes the proof. ## 3 Eigenvalues \(\lambda\) Upon setting \(f(u)=\lambda u^{2}\), it follows from the proof of Theorem 1 that the allowed values of the parameter \(\lambda\) are exactly those for which the operator \(L_{0}\) given by (7) possesses a nontrivial kernel. The corresponding \(\lambda\)'s can be interpreted as the eigenvalues of \(L_{0}\). As the operator \(L_{0}\) increases by 2 both the differential and algebraic degrees of every differential monomial, one needs to study a finite-dimensional restriction of the operator \(L_{0}\) to the space \(P_{k}(n)\): \[L_{0}:\,P_{k}(n)\to P_{k+2}(n+2). \tag{9}\] Our first observation is that a sufficient condition for the kernel of \(L_{0}\) to be non-trivial is the equality of dimensions: \[p_{k}(n)=p_{k+2}(n+2); \tag{10}\] note that there is an obvious general inequality \(p_{k}(n)\leq p_{k+2}(n+2)\); indeed, every differential monomial from \(P_{k}(n)\) produces a differential monomial from \(P_{k+2}(n+2)\) via multiplication by \(u_{1}^{2}\). One can easily show that the equality (10) holds if and only if \(k\geq n/2\). This follows from the well-known identity for partitions, \(p_{k}(n)=p_{k}(n-k)+p_{k-1}(n-1)\). Applying this identity twice gives \[p_{k+2}(n+2)=p_{k+2}(n-k)+p_{k+1}(n+1)=p_{k+2}(n-k)+p_{k+1}(n-k)+p_{k}(n),\] where the first two terms on the right-hand side vanish if and only if \(k\geq n/2\) (recall that \(p_{k}(n)=0\) if \(k>n\)). Our results are summarised in Table 1 below: the left column lists the first few allowed eigenvalues \(\lambda\); the right column contains the first two terms in the expansion of \(w(u)\). Here \(a,b\) are arbitrary functions of \(u\) and dots denote higher-order terms in \(\epsilon\). Note that the value \(\lambda=-1\) appears at the orders \(\epsilon^{3}\) and \(\epsilon^{6}\), the value \(\lambda=-\frac{1}{3}\) appears at the orders \(\epsilon^{4}\) and \(\epsilon^{5}\), etc. This means that the spectrum of the operator \(L_{0}\) is not simple. Furthermore, for \(\lambda=-\frac{1}{5}\) two arbitrary functions, \(a\) and \(b\), appear in the expression for \(w\) at the order \(\epsilon^{6}\).
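The partition-count criterion above is easy to verify computationally. The following sketch (our illustration, not code from the paper) implements the recurrence \(p_{k}(n)=p_{k}(n-k)+p_{k-1}(n-1)\) and checks that the dimension equality \(p_{k}(n)=p_{k+2}(n+2)\) holds precisely for \(k\geq n/2\).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(k, n):
    """Number of partitions of n into exactly k parts."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    # either the smallest part is 1, or all parts are at least 2
    return p(k - 1, n - 1) + p(k, n - k)

# p_k(n) == p_{k+2}(n+2) exactly when k >= n/2
for n in range(1, 15):
    for k in range(1, n + 1):
        assert (p(k, n) == p(k + 2, n + 2)) == (2 * k >= n)
```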
The appearance of two arbitrary functions at a single order means that the operator \(L_{0}\) does not have a simple spectrum even when restricted to the subspace of differential polynomials of a fixed degree \(n\). Based on Table 1, we have found the following deformations that work for arbitrary values of \(n\). Computer experiments suggest that there exist infinitely many series of negative rational eigenvalues \(\lambda\), in particular, \[\lambda=\frac{2}{2(2-n)}\ (n\geq 3),\qquad\lambda=\frac{3}{2(3-n)}\ (n\geq 4),\qquad\lambda=\frac{4}{2(4-n)}\ (n\geq 5),\] \[\lambda=\frac{5}{2(5-n)}\ (n\geq 6),\qquad\lambda=\frac{6}{2(6-n)}\ (n\geq 8),\qquad\lambda=\frac{7}{2(7-n)}\ (n\geq 10),\] etc., although we were not able to find a general formula that would give all of them. ## 4 Higher-order deformations of one-component reductions Here we study in more detail higher-order reductions corresponding to the two positive values of \(\lambda\). ### One-component dissipative reductions \((\lambda=1)\) In this case expansion (2) takes the form \[w=u^{2}+\epsilon au_{x}+\frac{\epsilon^{2}}{2}(a^{2})_{xx}+\frac{\epsilon^{3}}{2}\Big[(a^{2}a^{\prime})_{xx}-(aa^{\prime}a^{\prime\prime}+\frac{1}{6}a^{2}a^{\prime\prime\prime})u_{x}^{2}\Big]_{x}+\ldots,\] here \(a(u)\) is an arbitrary function, and the coefficients of all subsequent terms are certain explicit expressions in \(a(u)\) and its derivatives. The corresponding equation \(u_{t}=w_{x}\) belongs to the class of 'viscous conservation laws' studied in [1, 2]. The coefficient \(a(u)\) is referred to as the 'viscous central invariant' (note that our normalisation is different from that of [1, 2]). This expansion does not terminate unless \(a(u)=const\), which corresponds to the Burgers' hierarchy (Example 1 of Section 1). Another familiar case is \(a(u)=u\) which gives \[w=u^{2}+\frac{\epsilon}{2}(u^{2})_{x}+\frac{\epsilon^{2}}{2}(u^{2})_{xx}+\frac{\epsilon^{3}}{2}(u^{2})_{xxx}+\ldots.\] Although this expansion does not terminate, it can be represented in compact form as \[w=u^{2}+\frac{\epsilon}{2}\frac{\partial_{x}}{1-\epsilon\partial_{x}}(u^{2}). \tag{11}\] Introducing \(m=u-\epsilon u_{x}\), one can rewrite the corresponding equations (1) in the form \[m_{t}=(mu)_{x},\quad m_{y}=(mw)_{x}. \tag{12}\] Note that the first equation (12) can be written in the following second-order non-evolutionary form, \[u_{t}-\epsilon u_{xt}=2uu_{x}-\epsilon(uu_{xx}+u_{x}^{2}).\] This equation was discussed in [2] as a viscous analogue of the Camassa-Holm equation. It has first appeared in [16], see also [6]. To obtain the non-evolutionary form of the second equation (12), one can use the relation \(w-\epsilon w_{x}=um\), which is equivalent to (11), to eliminate \(w\). This results in the following third-order non-evolutionary form: \[\epsilon m_{xy}-\frac{\epsilon m_{xx}+m_{x}}{\epsilon m_{x}+m}(\epsilon m_{y}+um^{2})=m_{y}-3umm_{x}-m^{2}u_{x}.\] Commuting flows (12) belong to the hierarchy of the second-order linearisable PDE \[u_{\tau}=\left(\frac{1}{u-\epsilon u_{x}}\right)_{x},\quad\text{equivalently},\quad m_{\tau}=(\partial_{x}-\epsilon\partial_{x}^{2})\left(\frac{1}{m}\right),\] which has appeared in [3], see also [15], eq. (17). Unfortunately, already the case \(a(u)=u^{2}\) is not representable in any reasonably compact form; furthermore, our calculations suggest that the corresponding commuting flows (3) do not belong to the hierarchy of any finite-order integrable PDE.
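The equivalence of the compact form (11) and the relation \(w-\epsilon w_{x}=um\) can also be checked order by order in \(\epsilon\); the sketch below (our illustration) truncates the geometric series \((1-\epsilon\partial_{x})^{-1}=\sum_{k\geq 0}\epsilon^{k}\partial_{x}^{k}\) at order \(K\) and verifies the identity modulo \(\epsilon^{K+1}\).

```python
import sympy as sp

N, K = 12, 6                        # jet truncation / epsilon order
U = sp.symbols(f'u0:{N}')
eps = sp.Symbol('epsilon')

def Dx(e):
    return sum(sp.diff(e, U[k]) * U[k + 1] for k in range(N - 1))

# Eq. (11) expanded: w = u^2 + (1/2) * sum_{k=1..K} eps^k Dx^k(u^2)
w, d = U[0]**2, U[0]**2
for k in range(1, K + 1):
    d = Dx(d)
    w += sp.Rational(1, 2) * eps**k * d

m = U[0] - eps * U[1]               # m = u - eps u_x
residual = sp.expand(w - eps*Dx(w) - U[0]*m)

# w - eps w_x = u m holds up to (and including) order eps^K
for j in range(K + 1):
    assert sp.expand(residual.coeff(eps, j)) == 0
```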
### One-component dispersive reductions (\(\lambda=3/2\)) In this case expansion (2) takes the form \[\begin{array}{l} w=\frac{3}{2}u^{2}+\epsilon^{2}(au_{xx}+\frac{5}{4}a^{\prime}u_{x}^{2})+\frac{\epsilon^{4}}{128}\left(64aa^{\prime}u_{xxxx}+32(9{a^{\prime}}^{2}+7aa^{\prime\prime})u_{x}u_{xxx}+\right.\\ \left.+16(14{a^{\prime}}^{2}+9aa^{\prime\prime})u_{xx}^{2}+96\left(13a^{\prime}a^{\prime\prime}+3aa^{\prime\prime\prime}\right)u_{x}^{2}u_{xx}+5(52a^{\prime\prime\prime}a^{\prime}+44{a^{\prime\prime}}^{2}+9aa^{(4)})u_{x}^{4}\right)+\ldots,\end{array}\] where \(a(u)\) is an arbitrary function, and all subsequent terms are certain explicit expressions in \(a(u)\) and its derivatives. This expansion does not terminate unless \(a(u)=const\), which corresponds to the KdV hierarchy (Example 2 of Section 1). Another familiar case is \(a(u)=2u\) ([18], Sect. 5) which gives \[w=\frac{3}{2}u^{2}+\epsilon^{2}\left(2uu_{xx}+\frac{5}{2}u_{x}^{2}\right)+\ldots.\] Although this expansion does not terminate, it can be represented in compact form as \[w=\left(1-\epsilon^{2}\partial_{x}^{2}\right)^{-1}\left(-\epsilon^{2}uu_{xx}-\frac{\epsilon^{2}}{2}u_{x}^{2}+\frac{3}{2}u^{2}\right). \tag{13}\] Introducing \(n=u-\epsilon^{2}u_{xx}\), one can rewrite the corresponding equations (1) in the form \[n_{t}=(nu)_{x}+nu_{x},\quad n_{y}=(nw)_{x}+nw_{x}. \tag{14}\] Note that the first equation (14) can be written in the Camassa-Holm non-evolutionary form, \[u_{t}-\epsilon^{2}u_{xxt}=3uu_{x}-\epsilon^{2}(uu_{xxx}+2u_{x}u_{xx}).\] To obtain the non-evolutionary form of the second equation (14), one can differentiate it by \(x\) and use the relation \(w-\epsilon^{2}w_{xx}=un+\frac{1}{2}(u^{2}-\epsilon^{2}u_{x}^{2})\), which is equivalent to (13), to eliminate \(w\). Commuting flows (14) belong to the hierarchy of the third-order integrable PDE \[u_{\tau}=\left(\frac{1}{\sqrt{u-\epsilon^{2}u_{xx}}}\right)_{x},\quad\text{equivalently},\quad n_{\tau}=(\partial_{x}-\epsilon^{2}\partial_{x}^{3})\left(\frac{1}{\sqrt{n}}\right),\] which has appeared in [4]. ## 5 Higher-order deformations of two-component reductions Multi-component hydrodynamic reductions of system (1) can be obtained by setting \(u=u(R^{1},\ldots,R^{n})\), \(w=w(R^{1},\ldots,R^{n})\) where \(R^{i}\) satisfy a pair of commuting \(n\)-component hyperbolic diagonal systems of hydrodynamic type, \[R^{i}_{t}=\eta^{i}(R)R^{i}_{x},\quad R^{i}_{y}=\mu^{i}(R)R^{i}_{x}, \tag{15}\] whose characteristic speeds satisfy the commutativity conditions [19], \[\frac{\partial_{j}\eta^{i}}{\eta^{j}-\eta^{i}}=\frac{\partial_{j}\mu^{i}}{\mu^{j}-\mu^{i}},\quad i\neq j; \tag{16}\] here \(\partial_{i}=\partial_{R^{i}}\); see [7] for the general theory of hydrodynamic reductions of multi-dimensional dispersionless integrable systems. The requirement that the so defined \(u\) and \(w\) solve system (1) imposes the following constraints for the unknown functions: \(\partial_{i}w=\eta^{i}\partial_{i}u\) (no summation), as well as the dispersion relation \(\mu^{i}=(\eta^{i})^{2}-u\eta^{i}+w\). Upon substitution into commutativity conditions (16) these constraints imply \(\partial_{j}\eta^{i}=\partial_{j}u\), while the compatibility conditions \(\partial_{i}\partial_{j}w=\partial_{j}\partial_{i}w\) imply \(\partial_{i}\partial_{j}u=0\).
These equations are straightforward to solve: with a suitable choice of Riemann invariants \(R^{i}\), which are defined up to reparametrisations \(R^{i}\to\varphi^{i}(R^{i})\), one has \[\begin{array}{c}u=\sum R^{k},\quad w=\frac{1}{2}\left(\sum R^{k}\right)^{2} +\sum f_{k}(R^{k}),\\ \\ \eta^{i}=\sum R^{k}+f^{\prime}_{i}(R^{i}),\quad\mu^{i}=[f^{\prime}_{i}(R^{i}) ]^{2}+f^{\prime}_{i}(R^{i})\sum R^{k}+\frac{1}{2}\left(\sum R^{k}\right)^{2}+ \sum f_{k}(R^{k});\end{array} \tag{17}\] here \(f_{i}(R^{i})\) are \(n\) arbitrary functions of the indicated variables and all summations are from \(1\) to \(n\) (equivalent formulae were obtained in [17], Sect. 7). Below we discuss higher-order deformations of two-component hydrodynamic reductions. Thus, we seek solutions of system (1) in the form \[u=R^{1}+R^{2},\quad w=\frac{1}{2}(R^{1}+R^{2})^{2}+f_{1}(R^{1})+f_{2}(R^{2}),\] where \(R^{1},R^{2}\) satisfy equations of the form \[R^{i}_{t}=\eta^{i}(R)R^{i}_{x}+\epsilon(\ldots)+\epsilon^{2}(\ldots)+\ldots, \quad R^{i}_{y}=\mu^{i}(R)R^{i}_{x}+\epsilon(\ldots)+\epsilon^{2}(\ldots)+\ldots, \tag{18}\] where dots at \(\epsilon^{i}\) denote differential polynomials of degree \(i+1\) whose coefficients can be arbitrary functions of \(R^{1},R^{2}\) (\(u\) and \(w\) can be assumed undeformed due to the invariance of the construction under the Miura group, see, e.g., [5]). The characteristic speeds \(\eta^{i},\ \mu^{i}\) are as follows: \[\eta^{i}=R^{1}+R^{2}+f_{i}^{\prime}(R^{i}),\qquad\mu^{i}=[f_{i}^{\prime}(R^{i} )]^{2}+f_{i}^{\prime}(R^{i})(R^{1}+R^{2})+\tfrac{1}{2}\left(R^{1}+R^{2}\right) ^{2}+f_{1}(R^{1})+f_{2}(R^{2}),\] here \(i=1,2\). The substitution of this ansatz into (1) implies that equations (18) can be written as \[R_{t}^{1}=\eta^{1}(R)R_{x}^{1}+P,\quad R_{t}^{2}=\eta^{2}(R)R_{x}^{2}-P \tag{19}\] and \[R_{y}^{1}=\mu^{1}(R)R_{x}^{1}+f_{1}^{\prime}(R^{1})P+Q,\quad R_{y}^{2}=\mu^{2 }(R)R_{x}^{2}-f_{2}^{\prime}(R^{2})P-Q, \tag{20}\] respectively. Here \(P\) and \(Q\) are expansions of the form \[P=\epsilon(\dots)+\epsilon^{2}(\dots)+\dots,\quad Q=\epsilon(\dots)+\epsilon^ {2}(\dots)+\dots,\] where the terms at \(\epsilon^{i}\) are differential polynomials of degree \(i+1\) whose coefficients can be arbitrary functions of \(R^{1},R^{2}\). We will see that the requirement of compatibility of systems (19) and (20), \(R_{ty}^{i}=R_{yt}^{i}\), imposes strong constraints on the functions \(f_{i}(R^{i})\). ### Two-component dissipative reductions Here we assume that the expansions start with non-trivial \(\epsilon\)-terms, \[P=\epsilon[p_{1}R_{xx}^{1}+p_{2}R_{xx}^{2}+p_{3}(R_{x}^{1})^{2}+p_{4}R_{x}^{1 }R_{x}^{2}+p_{5}(R_{x}^{2})^{2}]+\dots,\] \[Q=\epsilon[q_{1}R_{xx}^{1}+q_{2}R_{xx}^{2}+q_{3}(R_{x}^{1})^{2}+q_{4}R_{x}^{1 }R_{x}^{2}+q_{5}(R_{x}^{2})^{2}]+\dots,\] where \(p_{i},q_{i}\) are some functions of \(R^{1},R^{2}\) (not all identically zero). At the order \(\epsilon\), compatibility conditions of systems (19) and (20) result in eighteen constraints for the functions \(f_{i}(R^{i})\) and \(p_{i},q_{i}\). Classification results - up to symmetries of system (1) and the interchange of indices \(1\) and \(2\) - are summarised below (five different cases). **Case 1:**\(f_{1}(R^{1})=\frac{1}{2}(R^{1})^{2}\) and \(f_{2}(R^{2})=\frac{1}{2}(R^{2})^{2}\). 
Then \[u=R^{1}+R^{2},\quad w=(R^{1})^{2}+R^{1}R^{2}+(R^{2})^{2},\] \[R_{t}^{1}=(2R^{1}+R^{2})R_{x}^{1}+P,\qquad R_{y}^{1}=(3(R^{1})^{2}+2R^{1}R^{2}+(R^{2})^{2})R_{x}^{1}+R^{1}P+Q,\] \[R_{t}^{2}=(R^{1}+2R^{2})R_{x}^{2}-P,\qquad R_{y}^{2}=(3(R^{2})^{2}+2R^{1}R^{2}+(R^{1})^{2})R_{x}^{2}-R^{2}P-Q,\] where \[P=\frac{\epsilon}{R^{1}-R^{2}}\left((R_{x}^{1})^{2}\alpha^{\prime}+\alpha R_{xx}^{1}-(R_{x}^{2})^{2}\beta^{\prime}-\beta R_{xx}^{2}\right)+\dots,\] \[Q=\frac{\epsilon}{R^{1}-R^{2}}\left((2\alpha+\alpha^{\prime}(2R^{1}+R^{2}))\,(R_{x}^{1})^{2}+(\alpha-\beta)R_{x}^{1}R_{x}^{2}+\alpha(2R^{1}+R^{2})R_{xx}^{1}-\right.\] \[\left.-\beta(R^{1}+2R^{2})R_{xx}^{2}-(2\beta+\beta^{\prime}(R^{1}+2R^{2}))(R_{x}^{2})^{2}\right)+\dots,\] and \(\alpha=\alpha(R^{1}),\ \beta=\beta(R^{2})\) are arbitrary functions. **Case 2:**\(f_{1}(R^{1})=\frac{1}{2}(R^{1})^{2}\) and \(f_{2}(R^{2})=-\frac{1}{2}(R^{2})^{2}\). Then \[u=R^{1}+R^{2},\quad w=(R^{1})^{2}+R^{1}R^{2},\] \[R_{t}^{1}=(2R^{1}+R^{2})R_{x}^{1}+P,\qquad R_{y}^{1}=(3(R^{1})^{2}+2R^{1}R^{2})R_{x}^{1}+R^{1}P+Q,\] \[R_{t}^{2}=R^{1}R_{x}^{2}-P,\qquad R_{y}^{2}=(R^{1})^{2}R_{x}^{2}+R^{2}P-Q,\] where \[P=\epsilon\left(\alpha(R^{1}+R^{2})R_{xx}^{1}+\left(2\alpha+\alpha^{\prime}(R^{1}+R^{2})\right)(R_{x}^{1})^{2}+2\alpha R_{x}^{1}R_{x}^{2}+\frac{\left(\beta^{\prime}(R^{1}+R^{2})-2\beta\right)(R_{x}^{2})^{2}+\beta(R^{1}+R^{2})R_{xx}^{2}}{(R^{1}+R^{2})^{2}}\right)+\ldots,\] \[Q=\frac{\epsilon}{(R^{1}-R^{2})^{2}}\left(\alpha(2R^{1}+R^{2})(R^{1}+R^{2})^{3}R_{xx}^{1}+\beta R^{1}(R^{1}+R^{2})R_{xx}^{2}+(R^{1}+R^{2})^{2}\left(\alpha(6R^{1}+4R^{2})+\right.\right.\] \[\left.\left.+\alpha^{\prime}(R^{1}+R^{2})(2R^{1}+R^{2})\right)(R_{x}^{1})^{2}+(R^{1}+R^{2})\left(\beta+\alpha(R^{1}+R^{2})(3R^{1}+R^{2})\right)R_{x}^{1}R_{x}^{2}+\left(\beta^{\prime}(R^{1}+R^{2})-2\beta\right)R^{1}(R_{x}^{2})^{2}\right)+\ldots,\] and \(\alpha=\alpha(R^{1}),\ \beta=\beta(R^{2})\) are arbitrary functions. Note that the second characteristic speed in the \(t\)-flow is linearly degenerate. **Case 3:**\(f_{1}(R^{1})=-\frac{1}{2}(R^{1})^{2}\) and \(f_{2}(R^{2})=-\frac{1}{2}(R^{2})^{2}\). Then \[u=R^{1}+R^{2},\quad w=R^{1}R^{2},\] \[R_{t}^{1}=R^{2}R_{x}^{1}+P,\qquad R_{y}^{1}=\epsilon(R^{1}-R^{2})((R^{1}-R^{2})\left(\alpha R_{xx}^{1}+\alpha^{\prime}(R_{x}^{1})^{2}\right)-(\alpha+\beta)R_{x}^{1}R_{x}^{2})+\ldots,\] \[R_{t}^{2}=R^{1}R_{x}^{2}-P,\] and \(p_{2}(R^{1},R^{2})\) solves the PDE \[(f_{1}^{\prime}+R^{2})\frac{\partial p_{2}}{\partial R^{1}}+p_{2}=0.\] Note that if \(f_{1}(R^{1})=\frac{1}{2}(R^{1})^{2}\) then \(p_{2}=\frac{\beta(R^{2})}{R^{1}+R^{2}}\) and we recover Case 2 with \(\alpha(R^{1})=0\). Here again the second characteristic speed in the \(t\)-flow is linearly degenerate. **Terminating deformations:** There is only one case of deformations terminating at the order \(\epsilon\) (where all terms at \(\epsilon^{k},\ k\geq 2\) are identically zero). It is coming from Case 1 with \(\alpha(R^{1})+\beta(R^{2})=0\). Hence, \(\alpha(R^{1})=-\beta(R^{2})=const\), where the constant can be absorbed in \(\epsilon\).
This gives \[u=R^{1}+R^{2},\ \ \ w=(R^{1})^{2}+R^{1}R^{2}+(R^{2})^{2},\] \[R^{1}_{t}=(2R^{1}+R^{2})R^{1}_{x}+\frac{\epsilon}{R^{1}-R^{2}}( R^{1}_{xx}+R^{2}_{xx}),\] \[R^{2}_{t}=(R^{1}+2R^{2})R^{2}_{x}+\frac{\epsilon}{R^{2}-R^{1}}( R^{1}_{xx}+R^{2}_{xx}),\] \[R^{1}_{y}=\left(3(R^{1})^{2}+2R^{1}R^{2}+(R^{2})^{2}\right)R^{1} _{x}+\frac{\epsilon}{R^{1}-R^{2}}(2(R^{1}_{x})^{2}+2R^{1}_{x}R^{2}_{x}+2(R^{2} _{x})^{2}+(3R^{1}+R^{2})R^{1}_{xx}+2(R^{1}+R^{2})R^{2}_{xx}),\] \[R^{2}_{y}=\left(3(R^{2})^{2}+2R^{1}R^{2}+(R^{1})^{2}\right)R^{2} _{x}+\frac{\epsilon}{R^{2}-R^{1}}(2(R^{1}_{x})^{2}+2R^{1}_{x}R^{2}_{x}+2(R^{2} _{x})^{2}+\left(3R^{2}+R^{1}\right)R^{2}_{xx}+2(R^{1}+R^{2})R^{1}_{xx}).\] Introducing the potential \(q\) such that \(u=q_{x},\ w=q_{t}\), one can rewrite the above \(t\)-flow as a linearisable equation \[q_{tt}=q_{t}q_{xx}+3q_{x}q_{xt}-3q^{2}_{x}q_{xx}+\epsilon q_{xxx},\] which has appeared in [12, 10]. The corresponding \(y\)-flow reduces to its higher symmetry, \[q_{y}=2q_{x}q_{t}-q^{3}_{x}+\epsilon q_{xx}.\] ### Two-component dispersive reductions Here we assume that the expansions start with non-trivial \(\epsilon^{2}\)-terms, \[P=\epsilon^{2}[p_{1}R^{1}_{xxx}+p_{2}R^{1}_{x}R^{1}_{xx}+p_{3}(R^{1}_{x})^{3}+ p_{4}R^{2}_{xxx}+p_{5}R^{2}_{x}R^{2}_{xx}+p_{6}(R^{2}_{x})^{3}+p_{7}R^{1}_{x}R^{2}_{ xx}+p_{8}R^{2}_{x}R^{1}_{xx}+p_{9}R^{1}_{x}(R^{2}_{x})^{2}+p_{10}R^{2}_{x}(R^{1}_{x})^{ 2}]+\ldots,\] \[Q=\epsilon^{2}[q_{1}R^{1}_{xxx}+q_{2}R^{1}_{x}R^{1}_{xx}+q_{3}(R^{1}_{x})^{3}+ q_{4}R^{2}_{xxx}+q_{5}R^{2}_{x}R^{2}_{xx}+q_{6}(R^{2}_{x})^{3}+q_{7}R^{1}_{x}R^{2}_{ xx}+q_{8}R^{2}_{x}R^{1}_{xx}+q_{9}R^{1}_{x}(R^{2}_{x})^{2}+q_{10}R^{2}_{x}(R^{1}_{x})^{ 2}]+\ldots,\] where \(p_{i},q_{i}\) are some functions of \(R^{1},R^{2}\) (not all identically zero). At the order \(\epsilon^{2}\), compatibility conditions of systems (19) and (20) result in constraints for the functions \(f_{i}(R^{i})\) and \(p_{i},q_{i}\). Classification results - up to symmetries of system (1) and the interchange of indices 1 and 2 - result in five different cases. These cases are listed below, omitting the explicit presentation of \(R^{i}_{t},R^{i}_{y}\) due to the length of the expressions. **Case 1:**\(f_{1}(R^{1})=(R^{1})^{2}\) and \(f_{2}(R^{2})=(R^{2})^{2}\). Then \[u=R^{1}+R^{2},\ \ w=\frac{1}{2}(R^{1}+R^{2})^{2}+(R^{1})^{2}+(R^{2})^{2}.\] **Case 2:**\(f_{1}(R^{1})=(R^{1})^{2}\) and \(f_{2}(R^{2})=-\frac{1}{2}(R^{2})^{2}\). Then \[u=R^{1}+R^{2},\ \ w=\frac{3}{2}(R^{1})^{2}+R^{1}R^{2}.\] **Case 3:**\(f_{1}(R^{1})=-\frac{1}{2}(R^{1})^{2}\) and \(f_{2}(R^{2})=-\frac{1}{2}(R^{2})^{2}\). Then \[u=R^{1}+R^{2},\ \ w=R^{1}R^{2}.\] **Case 4:**\(f_{1}(R^{1})\) arbitrary and \(f_{2}(R^{2})=(R^{2})^{2}\). Then \[u=R^{1}+R^{2},\ \ w=\frac{1}{2}(R^{1}+R^{2})^{2}+(R^{2})^{2}+f_{1}.\] **Case 5:**\(f_{1}(R^{1})\) arbitrary and \(f_{2}(R^{2})=-\frac{1}{2}(R^{2})^{2}\). Then \[u=R^{1}+R^{2},\ \ \ w=\frac{1}{2}R^{1}(R^{1}+2R^{2})+f_{1}.\] **Terminating deformations:** There is only one case of deformations terminating at the order \(\epsilon^{2}\) (where all terms at \(\epsilon^{k},\ k\geq 3\) are identically zero). It is coming from Case 1 with \(\alpha(R^{1})+\beta(R^{2})=0\). Hence, \(\alpha(R^{1})=-\beta(R^{2})=const\), where the constant can be absorbed in \(\epsilon\). 
This gives \[u=R^{1}+R^{2},\quad w=\frac{1}{2}(R^{1}+R^{2})^{2}+(R^{1})^{2}+(R^{2})^{2},\] \[R^{1}_{t}=(3R^{1}+R^{2})R^{1}_{x}+\frac{\epsilon^{2}}{R^{1}-R^{2}}(R^{1}_{xxx}+R^{2}_{xxx}),\] \[R^{2}_{t}=(R^{1}+3R^{2})R^{2}_{x}-\frac{\epsilon^{2}}{R^{1}-R^{2}}(R^{1}_{xxx}+R^{2}_{xxx}),\] \[R^{1}_{y}=\frac{3}{2}\left(5(R^{1})^{2}+2R^{1}R^{2}+(R^{2})^{2}\right)R^{1}_{x}+\ldots\] ## 6 Non-existence of higher-order reductions for the dKP system Substituting a reduction of type (2) into the dKP system (21) and repeating the analysis of Section 2, one finds that the first nonzero coefficient \(w_{n}\) must satisfy a second-order linear constraint, that is, \(Lw_{n}=0\). Here \(X,Y,T\) denote three commuting operators of total differentiation associated with the commuting systems \(u_{y}=f^{\prime}u_{x}\) and \(u_{t}=(f^{\prime 2}+u)u_{x}\), explicitly, \[X=u_{x}\frac{\partial}{\partial u}+u_{xx}\frac{\partial}{\partial u_{x}}+\dots,\] \[Y=f^{\prime}u_{x}\frac{\partial}{\partial u}+(f^{\prime}u_{x})_{x}\frac{\partial}{\partial u_{x}}+\dots,\] \[T=(f^{\prime 2}+u)u_{x}\frac{\partial}{\partial u}+((f^{\prime 2}+u)u_{x})_{x}\frac{\partial}{\partial u_{x}}+\dots.\] Introducing the operators \(\tilde{Y}=Y-f^{\prime}X\) and \(\tilde{T}=T-2f^{\prime}Y+(f^{\prime 2}-u)X\) one can rewrite the operator \(L\) in the form \[L=(\tilde{T}-u_{x})X-\tilde{Y}^{2}+f^{\prime\prime}u_{x}\tilde{Y}.\] Our goal is to show that, for every \(f\), the kernel of \(L\) is trivial: \(Lw_{n}=0\implies w_{n}=0\) (note that \(L\) is independent of \(n\)). We adopt the standard notation for the jet variables: \((u,u_{x},u_{xx},\dots)=(u,u_{1},u_{2},\dots)\). We will need the following properties of the operators \(\tilde{Y}\) and \(\tilde{T}\): \[\tilde{Y}=\sum_{s=1}^{\infty}a_{s}\frac{\partial}{\partial u_{s}},\] \[\tilde{T}=u_{1}\left(u_{1}\frac{\partial}{\partial u_{1}}+3u_{2}\frac{\partial}{\partial u_{2}}+4u_{3}\frac{\partial}{\partial u_{3}}+\dots\right)+\sum_{s=1}^{\infty}b_{s}\frac{\partial}{\partial u_{s}},\] where each coefficient \(a_{s}\) depends on the jet variables \(u,u_{1},\dots,u_{s}\), while each coefficient \(b_{s}\) depends on the jet variables \(u,u_{1},\dots,u_{s-1}\) only. Suppose, by contradiction, that \(w_{n}\neq 0\), and let \(u_{k}\) be the highest jet variable occurring in \(w_{n}\) (thus, \(\frac{\partial w_{n}}{\partial u_{k}}\neq 0\)). Then \(u_{k+1}\) will be the highest jet variable occurring in \(Lw_{n}\); indeed, \(Xw_{n}\) will contain the term \(\frac{\partial w_{n}}{\partial u_{k}}u_{k+1}\), while the expression \((-\tilde{Y}^{2}+f^{\prime\prime}u_{x}\tilde{Y})w_{n}\) will not contain \(u_{k+1}\) due to the above property of the operator \(\tilde{Y}\). Thus, the expression \(Lw_{n}\) will depend on \(u_{k+1}\) linearly, with the coefficient at \(u_{k+1}\) equal to \[(\tilde{T}-u_{1})\left(\frac{\partial w_{n}}{\partial u_{k}}\right)+(k+2)u_{1}\frac{\partial w_{n}}{\partial u_{k}}=\tilde{T}\left(\frac{\partial w_{n}}{\partial u_{k}}\right)+(k+1)u_{1}\frac{\partial w_{n}}{\partial u_{k}}. \tag{23}\] Let \(\sigma\) be the highest-order differential monomial in \(\frac{\partial w_{n}}{\partial u_{k}}\), in the standard lexicographic order. Then, due to the above form of \(\tilde{T}\), the highest-order monomial in the expression (23) will have the form \(cu_{1}\sigma\) with some positive constant \(c\), and therefore \(Lw_{n}\) does not vanish. This contradiction finishes the proof. \(\Box\) **Remark.** The main difference between the Mikhalev system (1) and the dKP system (21) is that the former is linearly degenerate, while the latter is not (we refer to [8, 9] for the general definition of linear degeneracy in higher dimensions).
One can show that the existence of nontrivial _higher-order_ reductions is a characteristic property of _linearly degenerate_ dispersionless integrable systems in dimensions \(2+1\) and higher. We hope to report the details elsewhere. ## 7 Concluding remarks The remaining challenges are as follows: (a) find a general formula for the eigenvalues \(\lambda\); (b) for a given eigenvalue \(\lambda\), prove the existence of higher-order deformations at all orders of the deformation parameter \(\epsilon\), and investigate their functional freedom. ## Acknowledgements We thank M. Pavlov for useful discussions. The research of EVF was supported by a grant from the Russian Science Foundation No. 21-11-00006, [https://rscf.ru/project/21-11-00006/](https://rscf.ru/project/21-11-00006/).
2309.11696
LLM-based Medical Assistant Personalization with Short- and Long-Term Memory Coordination
Large Language Models (LLMs), such as GPT3.5, have exhibited remarkable proficiency in comprehending and generating natural language. On the other hand, medical assistants hold the potential to offer substantial benefits for individuals. However, the exploration of LLM-based personalized medical assistants remains relatively scarce. Typically, patients converse differently based on their background and preferences, which necessitates the task of building user-oriented medical assistants. While one can fully train an LLM for this objective, the resource consumption is unaffordable. Prior research has explored memory-based methods to enhance responses to new queries by drawing on previous mistakes during a dialogue session. We contend that a mere memory module is inadequate and fully training an LLM can be excessively costly. In this study, we propose a novel computational bionic memory mechanism, equipped with a parameter-efficient fine-tuning (PEFT) schema, to personalize medical assistants.
Kai Zhang, Yangyang Kang, Fubang Zhao, Xiaozhong Liu
2023-09-21T00:34:33Z
http://arxiv.org/abs/2309.11696v3
# Memory-Augmented LLM Personalization with Short- and Long-Term Memory Coordination ###### Abstract Large Language Models (LLMs), such as GPT3.5, have exhibited remarkable proficiency in comprehending and generating natural language. However, their unpersonalized generation paradigm may result in suboptimal user-specific outcomes. Typically, users converse differently based on their knowledge and preferences. This necessitates the task of enhancing user-oriented LLMs, which remains unexplored. While one can fully train an LLM for this objective, the resource consumption is unaffordable. Prior research has explored memory-based methods to store and retrieve knowledge to enhance generation without retraining for new queries. However, we contend that a mere memory module is inadequate to comprehend a user's preference, and fully training an LLM can be excessively costly. In this study, we propose a novel computational bionic memory mechanism, equipped with a parameter-efficient fine-tuning schema, to personalize LLMs. Our extensive experimental results demonstrate the effectiveness and superiority of the proposed approach. To encourage further research into this area, we are releasing a new conversation dataset generated entirely by LLM based on an open-source medical corpus, as well as our implementation code1. Footnote 1: Github link will be placed here 1Worcester Polytechnic Institute 2Damo Academy kzhang8@wpi.edu, fubang.zfb@alibaba-inc.com, yangyang.kangyy@alibaba-inc.com, xliu14@wpi.edu ## Introduction The potential of large language models (LLMs) to understand and generate natural language is undeniable Brown et al. (2020); Chowdhery et al. (2022); Touvron et al. (2023), while there is an untapped opportunity to explore how LLMs could be customised to provide personalized interactions with users, allowing them to receive tailored responses that best suit their individual needs Bender et al. (2021). Efforts have been made to design proper prompts for steering LLMs to enhance output. For example, by memorizing previous mistakes and user feedback, given a new query, a similarity-based retriever can be leveraged to preemptively recognize and rectify LLM errors Dalvi et al. (2022); Madan et al. (2022); Lewis et al. (2020). However, this paradigm poses two challenges: **Firstly**, most existing memory designs are dictionary-based Madan et al. (2022); Lewis et al. (2020) (i.e., a key-value form where the key is the previous mistake and the value is the corresponding user feedback), which can be inflexible and rely heavily on the power of the retriever. For example, as the conversation goes on, the size of the memory in this setting would increase uncontrollably, which will further impact the effectiveness and accuracy of the retriever; **Secondly**, such a paradigm, without retraining, can barely provide users with a personalized and engaging experience. For instance, in the healthcare context, a diabetes patient who prefers concise and straightforward medical advice won't expect detailed glucose test explanations from a doctor, while others who prefer fully elaborated responses may want to know as much as possible about the disease (e.g., causes etc.). To this end, being aware of user preference and user-relevant information can be crucial for enhancing user experience and remains understudied. In this paper, we propose a novel framework that enhances personalized LLM learning through the augmentation of memory and a parameter-efficient fine-tuning schema.
Dictionary-based memory is not pliable due to its intricate structure, and thus efforts can only be made in strengthening retrievers. Despite the improvements made by retrievers such as semantic-similarity-based and distance-closest ones Madan et al. (2022), we argue that the memory structure should be ameliorated to accommodate diverse information. For example, as Figure 1 depicts, doctors would
Figure 1: Personalized responses for different users in terms of the same query.
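This excerpt does not pin down which parameter-efficient fine-tuning method the framework uses; a low-rank adapter in the LoRA style is one standard instance of a PEFT schema. The sketch below (our illustration only, in PyTorch, with all names and hyperparameters chosen by us) shows how such an adapter wraps a frozen linear layer so that only a small number of parameters would be updated per user.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adapter around a frozen linear layer (a generic PEFT sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pretrained weights
        # delta W = B @ A, trained from scratch; B starts at zero so the
        # adapted layer initially reproduces the frozen base layer exactly
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Only `A` and `B` (a few thousand parameters per layer) would then be stored per user, which is what makes per-user personalization far cheaper than full fine-tuning.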
2307.16455
Hybrid star structure from perturbative QCD
The stellar configurations of quark stars are studied using perturbative QCD (pQCD) for the equation of state (EoS). The neutron star structures with the equation of state obtained from the Brussels-Montreal extended Skyrme interaction are also explored. The deconfinement phase transition from the quark to the hadron phase in the stellar interior for matter under extreme pressure is accomplished by employing the Maxwell construction. The influence of the hybrid EoS on the jump in energy density has been investigated. To study hybrid stars, the BSk24 hadronic model and the pQCD EoS for the quark phase have been used. The properties of hybrid stars in view of the very recent astrophysical observations have been examined. We find that the gravitational mass might exceed 2.3 $M_\odot$ in some cases, comparable with the observed mass of the pulsar PSR J0952-0607 recently detected.
Joydev Lahiri, D. N. Basu
2023-07-31T07:27:12Z
http://arxiv.org/abs/2307.16455v1
# Hybrid star structure from perturbative QCD ###### Abstract The stellar configurations of quark stars are studied using perturbative QCD (pQCD) for the equation of state (EoS). The neutron star structures with the equation of state obtained from the Brussels-Montreal extended Skyrme interaction are also explored. The deconfinement phase transition from the quark to the hadron phase in the stellar interior for matter under extreme pressure is accomplished by employing the Maxwell construction. The influence of the hybrid EoS on the jump in energy density has been investigated. To study hybrid stars, the BSk24 hadronic model and the pQCD EoS for the quark phase have been used. The properties of hybrid stars in view of the very recent astrophysical observations have been examined. We find that the gravitational mass might exceed 2.3 \(M_{\odot}\) in some cases, comparable with the observed mass of the pulsar PSR J0952-0607 recently detected. _Keywords_: Quark stars; Hybrid stars; Equation of state; Phase transition; \(\beta\)-equilibrated matter. ## I Introduction In order to test our understanding of nuclear matter under extreme conditions, a natural laboratory is provided by the study of astrophysical compact objects, especially neutron stars and related pulsars, which are among the most dense objects in the universe. A precise knowledge of the nuclear matter equation of state (EoS), reliable under extreme conditions and over a wide range of densities, is an essential tool for the description of these objects. A succinct exposition of the underlying physics is well described in the textbooks [1; 2; 3], which cover our knowledge of the relevant physics till about the 1990s. In order to comprehend such a complex structure, one requires the EoS of compact stars in different density regions. For example, the regions of very low density, sub-nuclear density, and from the neutron drip density to about nuclear density can be well described by the Feynman-Metropolis-Teller (FMT) [4], Baym-Pethick-Sutherland (BPS) [5] and Baym-Bethe-Pethick (BBP) [6] EoSs, respectively. In the present work, NS structure is studied using a composite EoS, i.e. FMT, BPS, BBP and the EoS of the \(\beta\)-equilibrated dense neutron star matter with progressively increasing densities. The different density regions of a compact star are regulated by different EoSs. The density domain can be largely classified into two distinct regions: a crust, which is responsible for \(\sim\) 0.5% of the mass and \(\sim\) 10% of the radius of a star, and a core, which accounts for the rest of the mass and radius of the star. Except in the outer few meters, the outer layers consist of a solid crust about a km thick comprising a lattice of bare nuclei immersed in a degenerate electron gas. When one penetrates deeper into the crust, the nuclear species become progressively more neutron rich because of the rising electron Fermi energy. Meanwhile, very recent observations of the pulsars PSR J0952-0607 [7], PSR J0740+6620 [8; 9; 10] and PSR J0348+0432 [11] predict their masses \(\gtrsim 2M_{\odot}\) to a reasonable degree of accuracy. An analysis of the GW170817 event [12] also corroborates similar masses. Motivated by these observations, the structures of quark star (QS), neutron star (NS) and hybrid star (HS) have been studied in detail. For the quark matter, the perturbative QCD (pQCD) EoS [13; 14] has been used. For the hadronic matter, the Brussels-Montreal extended Skyrme (BSk) interaction [15; 16; 17; 18; 19; 20; 21] has been used.
Of the BSk family, the BSk24 interaction leads to the highest NS mass. Hence, in all subsequent studies, this interaction is preferred and, for the outer crust, FMT+BPS+BBP EoSs have been used. At very high density, the deconfinement phase transition from hadronic matter to quark matter can occur inside an NS, leading to an HS. This phenomenon is characterized by a jump in the energy density at the quark-hadron phase transition inside the core, which is explored using the Maxwell construction. The mass-radius relationship of HSs has been studied in the light of recent astrophysical observations. ## II Theoretical formalism In this section we introduce the quark and the hadronic matter models separately, which will be used in constructing the quark stars, neutron stars and hybrid stars. The equation of state for cold quark matter using perturbative QCD (pQCD) and the extended Skyrme model for the hadronic matter are presented in the following subsections. ### Equation of state for cold quark matter The pQCD EoS for cold quark matter [13] in beta equilibrium can be described fairly accurately by the compact fitting function [14] as \[P_{\rm QCD}(\mu_{B},X)=P_{\rm SB}(\mu_{B})\left(c_{1}-\frac{a(X)}{(\mu_{B}/{\rm GeV})-b(X)}\right),\] \[a(X)=d_{1}X^{-\nu_{1}},\quad b(X)=d_{2}X^{-\nu_{2}}, \tag{1}\] where \(P_{\rm QCD}(\mu_{B},X)\) is the pressure of the cold quark matter with \[P_{\rm SB}(\mu_{B})=\frac{3}{4\pi^{2}}(\mu_{B}/3)^{4} \tag{2}\] being the Stefan-Boltzmann limit of the pressure for three non-interacting quark flavors. The functions \(a(X)\) and \(b(X)\) encode the dependence of the result on the renormalization scale \(\bar{\Lambda}\). The dimensionless parameter \(X=3\bar{\Lambda}/\mu_{B}\) can take values from 1 to 4. The numerical constants \(\{c_{1},d_{1},d_{2},\nu_{1},\nu_{2}\}\) are given by the best-fit [14] values \[c_{1}=0.9008,\quad d_{1}=0.5034,\quad d_{2}=1.452,\quad\nu_{1}=0.3553,\quad\nu_{2}=0.9101. \tag{3}\] From Eq. 1 the expression for the energy density \(\varepsilon\) can be given by \[\varepsilon_{\rm QCD}=3P_{\rm QCD}+\frac{\mu_{B}}{{\rm GeV}}P_{\rm SB}(\mu_{B})\frac{a(X)}{\left[(\mu_{B}/{\rm GeV})-b(X)\right]^{2}}. \tag{4}\] It is interesting to note that in the MIT bag model, \(\varepsilon_{\rm QCD}-3P_{\rm QCD}=4B\), where \(B\) is the bag constant. The baryon number density \(\rho_{B}\) can be obtained using the well-known thermodynamic relation \(\rho_{B}\mu_{B}=\varepsilon_{\rm QCD}+P_{\rm QCD}\).
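Since all fitting constants in Eq. (3) are explicit, Eqs. (1)-(4) can be evaluated directly. The following sketch (our illustration, not code from the paper) returns the pressure, energy density and baryon number density in natural units (powers of GeV) for \(\mu_{B}\) given in GeV, so that the factor \(\mu_{B}/{\rm GeV}\) is numerically just `mu_B`.

```python
import numpy as np

# fitted constants of Eq. (3)
c1, d1, d2 = 0.9008, 0.5034, 1.452
nu1, nu2 = 0.3553, 0.9101

def pqcd_eos(mu_B, X):
    """Cold quark-matter EoS of Eqs. (1)-(4).
    mu_B: baryon chemical potential in GeV; X in [1, 4].
    Returns (P, eps, rho_B) in GeV^4, GeV^4 and GeV^3, respectively."""
    a = d1 * X**(-nu1)
    b = d2 * X**(-nu2)
    P_SB = 3.0 / (4.0 * np.pi**2) * (mu_B / 3.0)**4    # Stefan-Boltzmann limit
    P = P_SB * (c1 - a / (mu_B - b))                   # Eq. (1)
    eps = 3.0 * P + mu_B * P_SB * a / (mu_B - b)**2    # Eq. (4)
    rho_B = (eps + P) / mu_B                           # rho_B mu_B = eps + P
    return P, eps, rho_B
```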
### The \(\beta\)-equilibrated cold dense \(npe\mu\) matter The generalized Skyrme interaction for the hadronic matter can be obtained by extending the standard form of the Skyrme interaction [22; 23] with additional zero-range density- and momentum-dependent terms [15; 16; 17; 18; 19; 20; 21] as \[V(\mathbf{r}_{1},\mathbf{r}_{2})=t_{0}(1+x_{0}P_{\sigma})\delta(\mathbf{r}) \tag{5}\] \[+\frac{1}{2}t_{1}(1+x_{1}P_{\sigma})[\mathbf{k}^{\prime 2}\delta(\mathbf{r})+\mathrm{c.c.}]\] \[+t_{2}(1+x_{2}P_{\sigma})\mathbf{k}^{\prime}\cdot\delta(\mathbf{r})\mathbf{k}\] \[+\frac{1}{6}t_{3}(1+x_{3}P_{\sigma})\rho^{\alpha}\left(\mathbf{R}\right)\delta(\mathbf{r})\] \[+iW_{0}(\mathbf{\sigma}_{1}+\mathbf{\sigma}_{2})\cdot[\mathbf{k}^{\prime}\times\delta(\mathbf{r})\mathbf{k}]\] \[+\frac{1}{2}t_{4}(1+x_{4}P_{\sigma})\left[\mathbf{k}^{\prime 2}\rho^{\beta}\left(\mathbf{R}\right)\delta(\mathbf{r})+\mathrm{c.c.}\right]\] \[+t_{5}(1+x_{5}P_{\sigma})\mathbf{k}^{\prime}\cdot\rho^{\gamma}\left(\mathbf{R}\right)\delta(\mathbf{r})\mathbf{k}\] where \(\mathbf{r}=\mathbf{r}_{1}-\mathbf{r}_{2}\), \(\mathbf{R}=(\mathbf{r}_{1}+\mathbf{r}_{2})/2\), \(\mathbf{\sigma}_{i}\) is the Pauli spin operator, \(P_{\sigma}\) is the spin-exchange operator, \(\mathbf{k}=-i(\mathbf{\nabla}_{1}-\mathbf{\nabla}_{2})/2\) is the relative momentum operator, and \(\mathbf{k}^{\prime}\) is the conjugate operator of \(\mathbf{k}\) acting on the left. In the above equation, the first term represents the central term, the second and third terms the non-local terms, the fourth term the density-dependent term, and the \(W_{0}\) term the spin-orbit term. The last two terms are the density-dependent generalization of the \(t_{1}\) and \(t_{2}\) terms in the standard Skyrme model. Infinite nuclear matter being spatially homogeneous, no preferred axis can be defined, which leads to the absence of spin-orbit coupling. This implies that the \(W_{0}\) term does not contribute in nuclear matter calculations. Thus the generalized Skyrme interaction collectively requires 15 parameters, _viz._\(t_{0}\), \(t_{1}\), \(t_{2}\), \(t_{3}\), \(t_{4}\), \(t_{5}\), \(x_{0}\), \(x_{1}\), \(x_{2}\), \(x_{3}\), \(x_{4}\), \(x_{5}\), \(\alpha\), \(\beta\) and \(\gamma\). The numerical values of these parameters used in the present work are listed in Table 1. Using density functional theory, the energy per particle of an asymmetric infinite nuclear matter (ANM) can be expressed as [24] \[\epsilon(\rho,\eta)=\frac{3\hbar^{2}}{20}\left[\frac{1}{M_{\rm n}}(1+\eta)^{5/3}+\frac{1}{M_{\rm p}}(1-\eta)^{5/3}\right]nk_{\rm F}^{2}\] \[+\frac{3}{40}t_{1}\Bigg{[}(2+x_{1})F_{5/3}(\eta)-\left(\frac{1}{2}+x_{1}\right)F_{8/3}(\eta)\Bigg{]}n^{2}k_{\rm F}^{2}\] \[+\frac{3}{40}t_{2}\Bigg{[}(2+x_{2})F_{5/3}(\eta)+\left(\frac{1}{2}+x_{2}\right)F_{8/3}(\eta)\Bigg{]}n^{2}k_{\rm F}^{2}\] \[+\frac{3}{40}t_{4}\Bigg{[}(2+x_{4})F_{5/3}(\eta)-\left(\frac{1}{2}+x_{4}\right)F_{8/3}(\eta)\Bigg{]}n^{\beta+2}k_{\rm F}^{2}\] \[+\frac{3}{40}t_{5}\Bigg{[}(2+x_{5})F_{5/3}(\eta)+\left(\frac{1}{2}+x_{5}\right)F_{8/3}(\eta)\Bigg{]}n^{\gamma+2}\,k_{\rm F}^{2}\quad, \tag{6}\] where \[k_{\rm F}=\left(\frac{3\pi^{2}n}{2}\right)^{1/3}\quad, \tag{7}\] \[\eta=\frac{\rho_{n}-\rho_{p}}{\rho}=1-2Y_{p} \tag{8}\] and \[F_{m}(\eta)=\frac{1}{2}\Bigg{[}(1+\eta)^{m}+(1-\eta)^{m}\Bigg{]}\quad. \tag{9}\] Here \(Y_{p}=Z/A=\rho_{p}/\rho\) is the proton fraction, and \(\rho_{p}\) and \(\rho\) (\(\equiv n\)) are the proton and total baryonic number densities, respectively.
The pressure \(P(\rho,\eta)\) in ANM may then be expressed as \[P(\rho,\eta)=\rho^{2}\frac{\partial\epsilon(\rho,\eta)}{\partial\rho}. \tag{10}\] ## III Results and discussion In general relativity, the structure of a spherically symmetric body of isotropic material which is in static gravitational equilibrium is given by the Tolman-Oppenheimer-Volkoff (TOV) equation [25; 26] \[\frac{dP(r)}{dr}=-\frac{G}{c^{4}}\frac{[\varepsilon(r)+P(r)][m(r)c^{2}+4\pi r^{3}P(r)]}{r^{2}[1-\frac{2Gm(r)}{rc^{2}}]},\] \[\varepsilon(r)=(\epsilon+m_{b}c^{2})\rho(r),\ m(r)c^{2}=\int_{0}^{r}\varepsilon(r^{\prime})d^{3}r^{\prime}, \tag{11}\] where \(\varepsilon(r)\) and \(P(r)\) are, respectively, the energy density and the pressure at a radial distance \(r\) from the center of the star and \(m(r)\) is the stellar mass contained within radius \(r\). The numerical solution of the TOV equation for masses and radii has been obtained using the Runge-Kutta method. The EoS provides \(\varepsilon(\rho)\) and \(P(\rho)\) as the inputs for the calculations. The boundary condition \(P(r)=0\) at the surface \(R\) of the star determines its size, and integration up to \(R\), with \(M=m(R)\) [27], provides its total mass \(M\). The numerical solution of the TOV equation, being an initial value problem, requires a single integration constant, _viz_. the pressure \(P_{c}\) at the center \(r=0\) of the star calculated at a given central density \(\rho_{c}\). It is important to mention here that the solution of the TOV equation determines masses of static stars, which are very close to those of slowly rotating NSs [28; 29; 30].
\begin{table} \begin{tabular}{l c c c} \hline Parameter & BSk22 & BSk24 & BSk26 \\ \hline \(t_{0}\) [MeV fm\({}^{3}\)] & -3978.97 & -3970.29 & -4072.53 \\ \(t_{1}\) [MeV fm\({}^{5}\)] & 404.461 & 395.766 & 439.536 \\ \(t_{3}\) [MeV fm\({}^{3+3\alpha}\)] & 22704.7 & 22648.6 & 23369.1 \\ \(t_{4}\) [MeV fm\({}^{5+3\beta}\)] & -100.000 & -100.000 & -100.0 \\ \(t_{5}\) [MeV fm\({}^{5+3\gamma}\)] & -150.000 & -150.000 & -120.0 \\ \(x_{0}\) & 0.16154 & 0.894371 & 0.577367 \\ \(x_{1}\) & -0.047986 & 0.0563535 & -0.404961 \\ \(t_{2}x_{2}\) [MeV fm\({}^{5}\)] & -1396.13 & -1389.61 & -1147.70 \\ \(x_{3}\) & 0.514386 & 1.05119 & 0.624831 \\ \(x_{4}\) & 2.00000 & 2.00000 & -3.00000 \\ \(x_{5}\) & -11.0000 & -11.0000 & -11.00000 \\ \(\alpha\) & 1/12 & 1/12 & 1/12 \\ \(\beta\) & 1/2 & 1/2 & 1/6 \\ \(\gamma\) & 1/12 & 1/12 & 1/12 \\ \hline \end{tabular} \end{table} Table 1: Values of the parameters of nuclear matter for the Skyrme interactions corresponding to BSk22, BSk24 and BSk26 [20]. Since the parameters \(t_{2}\) and \(x_{2}\) always appear in the form of \(t_{2}x_{2}\), the value of the latter is provided.
Figure 1: Plots of pressure versus baryonic chemical potential \(\mu_{B}\) of quark matter for different values of parameter \(X\). The corresponding plots for hadronic matter are also shown.
Figure 2: Plots of pressure versus energy density for EoS of hybrid quark–neutron stars. The phase transition for different values of \(X\) can be observed by the jumps in energy density except in case of \(X\) = 2.75+BSk24 which is associated with vanishing latent heat.
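The TOV integration described above is straightforward to reproduce. The sketch below is our illustration, not the authors' code: the interpolant `eps_of_P`, standing for any tabulated \(\varepsilon(P)\) in CGS units, is a placeholder, and all numerical settings are our choices. It integrates Eq. (11) outward from a chosen central pressure until the pressure vanishes at the surface.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-8, 2.998e10                   # CGS units

def tov_rhs(r, y, eps_of_P):
    """Right-hand side of Eq. (11); y = (P, m), P in erg/cm^3, m in g."""
    P, m = y
    eps = eps_of_P(P)                       # energy density from the EoS table
    dPdr = -(G / c**4) * (eps + P) * (m * c**2 + 4.0 * np.pi * r**3 * P) \
           / (r**2 * (1.0 - 2.0 * G * m / (r * c**2)))
    dmdr = 4.0 * np.pi * r**2 * eps / c**2
    return [dPdr, dmdr]

def stellar_structure(P_c, eps_of_P, r_max=3.0e6):
    """Integrate outward until P(r) = 0; returns radius [cm] and mass [g]."""
    def surface(r, y, *args):               # event: pressure drops to zero
        return y[0]
    surface.terminal, surface.direction = True, -1
    r0 = 1.0                                # small seed radius avoids r = 0
    m0 = 4.0 / 3.0 * np.pi * r0**3 * eps_of_P(P_c) / c**2
    sol = solve_ivp(tov_rhs, (r0, r_max), [P_c, m0], args=(eps_of_P,),
                    events=surface, rtol=1e-8)
    return sol.t[-1], sol.y[1, -1]
```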
### Deconfinement phase transition: from Hadronic to Quark matter In the present work, the quark-hadron deconfinement phase transition is obtained using the Maxwell construction [31], following which the transition from the hadronic to the quark phase occurs with a sharp jump in energy density at a point where the pressure and baryonic chemical potential of the individual charge-neutral phases are equal [32; 33; 34; 35; 36; 37], implying \[P_{\rm QCD}(\mu_{B}^{Q},X)=P(\rho,\eta),\qquad\mu_{B}^{Q}=\mu_{B}^{H}. \tag{12}\] In Fig. 1 the pressure versus baryonic chemical potential \(\mu_{B}\) of quark matter has been plotted for different values of the parameter \(X\). The corresponding plots for hadronic matter are also shown. The red dots satisfy the conditions of Eq. 12 for the hadronic EoS BSk24 and the quark EoSs corresponding to \(X=\)1.8 and 2.8. In Fig. 2 plots of pressure versus energy density for the EoS of hybrid quark-neutron stars are shown. The phase transitions for \(X=\) 1.8+BSk24 and \(X=\) 2.5+BSk24 can be observed by the jumps in energy density. In the case of \(X=\) 2.75+BSk24 this scenario is not observed, implying that it is associated with continuous pressure and vanishing latent heat. In Fig. 3 the pressure resulting from the matching of the pQCD EoS for \(X=\) 2.75 with that of the BSk24 EoS for hadronic matter is shown. The red dot represents the matching point of the two EoSs. It is interesting to note that the behaviors of the BSk24 EoS and the quark matter (\(X=\) 2.75) EoS are markedly similar in the region near the matching point. ### The mass-radius relationship for quark stars, neutron stars and hybrid stars The mass-radius (M-R) relationship has been obtained for quark stars by solving the Tolman-Oppenheimer-Volkoff (TOV) equation using the quark matter EoS for different values of the dimensionless parameter \(X\). For the outer crust region in the case of NS and HS, the EoSs FMT [4], BPS [5] and BBP [6] up to the number density of 0.0582 fm\({}^{-3}\) have been used. For the inner crust and beyond, the \(\beta\)-equilibrated NS matter (hybrid EoS) has been used to study the structural properties of the NS (HS).
Figure 3: Pressure resulting from the matching of pQCD EoS for \(X=2.75\) with that of BSk24 EoS for hadronic matter is shown. The red dot represents the matching point of the two EoSs.
Figure 4: The M-R diagrams for pure quark, hadronic and hybrid quark–neutron stars are shown. The red dot represents the special point. Horizontal bands correspond to mass constraints of PSR J0952-0607 and PSR J0740+6620.
The hybrid EoS used for HS has been computed for a few different values of \(X\). In Fig. 4, the M-R relationship for quark stars has been plotted for \(X=\)1.0, 1.8, 2.75 and 4.0. The maximum masses and corresponding radii are 0.41 \(M_{\odot}\) and 2.21 km, 0.98 \(M_{\odot}\) and 5.53 km, 1.81 \(M_{\odot}\) and 10.40 km, 3.05 \(M_{\odot}\) and 17.78 km, respectively. As may be seen in the figure, the M-R curves for quark stars are distinctly different from those of NSs and HSs. The M-R relationship for slowly rotating NSs for EoSs obtained with BSk24 is also plotted in Fig. 4 for different values of the parameter \(X\). The maximum NS mass for the EoS obtained using the BSk24 Skyrme set is 2.28 \(M_{\odot}\) with a corresponding radius of 11.05 km, where \(M_{\odot}\) is the solar mass. The horizontal bands in this figure correspond to mass constraints of PSR J0740+6620 and PSR J0952-0607 [7; 8; 9; 10]. The M-R plots for HSs show an almost similar pattern to those of NSs, with the maximum masses being smaller.
The maximum masses and corresponding radii for BSk24 with quark cores for \(X=\)2.75 and \(X=\)1.8 are 1.84 \(M_{\odot}\) and 11.48 km, 2.24 \(M_{\odot}\) and 11.66 km, respectively. It is important to mention here that the observations of the two-solar-mass binary millisecond pulsar J1614-2230 by Demorest et al. [38] suggest a mass of 1.97 \(\pm\) 0.04 \(M_{\odot}\), effectively excluding most of the hyperon or boson condensate equations of state. The radio timing measurements of the pulsar PSR J0348+0432 and its white dwarf companion have confirmed the mass of the pulsar to be in the range 2.01 \(\pm\) 0.04 \(M_{\odot}\) [11]. Very recently, the studies for PSR J0740+6620 [8] and for PSR J0952-0607 [7] find masses of 2.08 \(\pm\) 0.07 \(M_{\odot}\) and 2.35 \(\pm\) 0.17 \(M_{\odot}\), respectively. Some recent works [9; 10] constrain the equatorial radius and mass of PSR J0740+6620 to be \(12.39^{+1.30}_{-0.98}\) km and \(2.072^{+0.067}_{-0.066}\) \(M_{\odot}\), respectively. From the recent observation of the gravitational wave event GW170817, the limits for the maximum mass have been estimated as \(2.01^{+0.04}_{-0.04}\leqslant M_{\rm TOV}/M_{\odot}\lesssim 2.16^{+0.17}_{-0.15}\) [12]. The maximum NS masses attained for BSk22, BSk24 and BSk26 are, respectively, 2.27 \(M_{\odot}\), 2.28 \(M_{\odot}\) and 2.18 \(M_{\odot}\), which are in accordance with recent observations of massive NSs. ## IV Summary and conclusion In summary, the structures of quark stars, neutron stars and hybrid stars have been explored. Using the Maxwell construction, the quark to hadron deconfinement phase transition in neutron star cores and the formation of hybrid stars have been investigated. The result for the neutron star without a quark core is in excellent agreement with recent astrophysical observations, comparable with the observed mass of about 2.3 \(M_{\odot}\) of the pulsar PSR J0952-0607. The compact star matter can undergo a deconfinement transition to quark matter, reducing its mass considerably. The calculated hybrid star properties also agree with recent astrophysical constraints on the M-R relationship obtained from PSR J0952-0607, PSR J0740+6620, PSR J0348+0432 and the GW170817 data analysis. ###### Acknowledgements. One of the authors (DNB) acknowledges support from the Science and Engineering Research Board, Department of Science and Technology, Government of India, through Grant No. CRG/2021/007333.
2309.10588
Few-shot Object Detection in Remote Sensing: Lifting the Curse of Incompletely Annotated Novel Objects
Object detection is an essential and fundamental task in computer vision and satellite image processing. Existing deep learning methods have achieved impressive performance thanks to the availability of large-scale annotated datasets. Yet, in real-world applications the availability of labels is limited. In this context, few-shot object detection (FSOD) has emerged as a promising direction, which aims at enabling the model to detect novel objects with only a few of them annotated. However, many existing FSOD algorithms overlook a critical issue: when an input image contains multiple novel objects and only a subset of them are annotated, the unlabeled objects will be considered as background during training. This can cause confusion and severely impact the model's ability to recall novel objects. To address this issue, we propose a self-training-based FSOD (ST-FSOD) approach, which incorporates the self-training mechanism into the few-shot fine-tuning process. ST-FSOD aims to enable the discovery of novel objects that are not annotated and take them into account during training. On the one hand, we devise a two-branch region proposal network (RPN) to separate the proposal extraction of base and novel objects. On the other hand, we incorporate the student-teacher mechanism into the RPN and the region of interest (RoI) head to include those highly confident yet unlabeled targets as pseudo labels. Experimental results demonstrate that our proposed method outperforms the state-of-the-art in various FSOD settings by a large margin. The codes will be publicly available at https://github.com/zhu-xlab/ST-FSOD.
Fahong Zhang, Yilei Shi, Zhitong Xiong, Xiao Xiang Zhu
2023-09-19T13:00:25Z
http://arxiv.org/abs/2309.10588v1
Few-shot Object Detection in Remote Sensing: Lifting the Curse of Incompletely Annotated Novel Objects

###### Abstract

Object detection is an essential and fundamental task in computer vision and satellite image processing. Existing deep learning methods have achieved impressive performance thanks to the availability of large-scale annotated datasets. Yet, in real-world applications the availability of labels is limited. In this context, few-shot object detection (FSOD) has emerged as a promising direction, which aims at enabling the model to detect novel objects with only a few of them annotated. However, many existing FSOD algorithms overlook a critical issue: when an input image contains multiple novel objects and only a subset of them are annotated, the unlabeled objects will be considered as background during training. This can cause confusion and severely impact the model's ability to recall novel objects. To address this issue, we propose a self-training-based FSOD (ST-FSOD) approach, which incorporates the self-training mechanism into the few-shot fine-tuning process. ST-FSOD aims to enable the discovery of novel objects that are not annotated and take them into account during training. On the one hand, we devise a two-branch region proposal network (RPN) to separate the proposal extraction of base and novel objects. On the other hand, we incorporate the student-teacher mechanism into the RPN and the region of interest (RoI) head to include those highly confident yet unlabeled targets as pseudo labels. Experimental results demonstrate that our proposed method outperforms the state-of-the-art in various FSOD settings by a large margin. The codes will be publicly available at https://github.com/zhu-xlab/ST-FSOD.

Object Detection, Few-shot Learning, Self-training, Remote Sensing Image Processing.

## I Introduction

Object detection (OD) is a critical task in computer vision as well as remote sensing image processing, which enables the automatic identification and localization of objects of interest within an image. With the rapid development of deep learning techniques [2, 3, 4] and the emergence of large-scale human-annotated data [5, 6], the performance of state-of-the-art OD approaches has been pushed to a new stage. These approaches have achieved remarkable success in detecting objects in various domains, including remote sensing [7, 8, 9]. However, traditional OD methods rely on a large amount of labeled data for training, which can be challenging and time-consuming to obtain in remote sensing image processing, particularly in scenarios where novel or rare objects are involved. This motivates few-shot learning [10, 11, 12], a paradigm that aims to overcome the data scarcity issue by enabling models to generalize and detect objects with only a limited number of labeled examples. Few-shot learning achieves this by leveraging knowledge acquired from previously seen categories to adapt and recognize novel objects efficiently. Concurrently with the achievements in few-shot classification [13] and few-shot semantic segmentation [14, 15], few-shot object detection (FSOD) [16, 17, 18] has emerged as a compelling research area in recent years. In the conventional FSOD framework, the model undergoes a two-stage training process: first, it is trained on a large-scale labeled dataset consisting of base objects, and subsequently, it is fine-tuned on a fine-tuning set with only a few labeled novel object instances.
However, when there are multiple novel objects in a single image, it is possible that only a part of them are provided with labels during the fine-tuning stage. As a result, these incomplete annotations can negatively impact the training towards novel classes and hinder the discovery of novel objects. This issue, illustrated in Fig. 1, can be referred to as the incompletely annotated novel objects (IANO) issue. While the challenge of IANO has been investigated in the field of FSOD for natural images [19, 20], it still remains unexplored in remote sensing. Moreover, objects in remote sensing images are usually smaller, and scenes often exhibit higher levels of congestion, particularly in contexts with vehicles, planes, ships, and so on. This phenomenon can be readily observed in different OD datasets tailored for remote sensing applications [21]. As a result, the IANO problem takes on an even more formidable and pressing nature when considered in the context of remote sensing.

To tackle this issue, it is necessary to establish a mechanism capable of identifying and subsequently excluding potential unannotated novel objects during the background sampling process. A promising solution for mitigating this concern is self-training, a well-established technique in the fields of domain adaptation [22, 23, 24] and semi-supervised learning [9, 25]. This type of method first generates pseudo labels on unlabeled data using a pre-trained model and then fine-tunes the model using these pseudo labels. The underlying philosophy of _generating pseudo labels and leveraging the unlabeled data_ aligns with our objective of identifying potential unannotated novel objects, as they are hidden in the background and remain unannotated. Therefore, we propose using self-training as a feasible approach to tackle this issue in remote sensing imagery.

We build our self-training-based FSOD (ST-FSOD) method on a popular object detection framework, Faster R-CNN [2], which consists of two stages. In the first stage, a region proposal network (RPN) is used to generate a set of candidate object proposals, each with a corresponding objectness score and a bounding box regression score. In the second stage, the candidate object proposals are refined and classified using a fully-connected network operating on features from a Region of Interest (RoI) pooling layer [26]. We incorporate the self-training mechanism into the RPN and the bounding box head (BBH) of the RoI layer, and devise a self-training RPN (ST-RPN) and a self-training BBH (ST-BBH) module accordingly. In these two modules, a momentum-based teacher-student modeling strategy [27] is used to single out highly confident potential novel class objects and refine the loss calculation. More specifically, the ST-RPN module uses the teacher-student modeling strategy to single out highly confident proposals for potential novel classes, while the ST-BBH module singles out highly confident novel bounding boxes. Both modules refine the loss calculation to improve the accuracy of the detection model. Our contributions can be summarized as follows:

* Our study highlights and examines the challenge of the IANO issue in FSOD for remote sensing imagery. To the best of our knowledge, this is the first work that discusses and tackles this issue in the field of remote sensing, which is neglected in many existing FSOD methods for remote sensing imagery and needs to be addressed to advance the field.
* To handle the IANO issue in object detection, we propose to apply the self-training technique. To this end, ST-RPN and ST-BBH modules are devised to identify proposals or bounding boxes that are likely to include a novel object, even in the absence of novel class annotations.
* We conduct extensive experiments on three publicly available datasets, and evaluate our proposed method under various FSOD settings. Experimental results demonstrate that our approach outperforms the state-of-the-art methods by a significant margin.

## II Related Works

### _Object Detection_

Object detection refers to identifying and localizing objects within an image, which has been one of the main research tasks in computer vision (CV). Traditional OD techniques are based on, e.g., feature extraction [28], object recognition, and template matching [29]. State-of-the-art OD techniques are mostly based on deep learning, owing to its overwhelming performance on large-scale object detection benchmarks. Deep learning-based OD methods utilize convolutional neural networks (CNNs) to perform object detection directly from raw image pixels, without the need for hand-crafted feature engineering. They can be categorized into two-stage and single-stage detectors. While two-stage detectors aim to first generate object proposals and then classify them, single-stage detectors perform both tasks simultaneously. A typical example of two-stage detectors is Faster R-CNN [2], which generates a set of region proposals and then extracts features from each proposal using a Region of Interest (RoI) pooling layer before classifying the object within the proposal. One well-known single-stage detector is YOLO [3], which applies a CNN to the entire image to simultaneously predict bounding boxes and class probabilities for each object without any proposal generation step.

Fig. 1: Illustration of the incompletely annotated novel objects (IANO) issue. In the standard FSOD protocol, only a few bounding boxes for the novel class objects are provided for the few-shot fine-tuning stage [1]. Let's consider a scenario where our goal is detecting the presence of a "plane" as the novel class object, and we are provided with just one bounding box annotation for a plane. As depicted in the figure, the challenge arises when multiple instances of planes are present within a single image. In such cases, some of the planes within the image are left unannotated. This can mislead the detector since the RPN loss and bounding box classification loss are calculated based on these incomplete annotations.

### _Few-shot Object Detection in Computer Vision_

Few-shot object detection (FSOD) aims at recognizing novel or unseen object classes based only on a few examples of them, by fine-tuning a model trained on many labeled examples of base classes. FSOD methods can be roughly categorized into fine-tuning-based methods, meta-learning-based methods, and metric-learning-based methods.

#### II-B1 Fine-tuning-based Methods

Fine-tuning-based methods are popular for few-shot object detection; they first train on a large number of base class examples and then perform few-shot fine-tuning on a smaller support set that includes both base and novel classes. One such method, LSTD [16], uses a flexible deep architecture that integrates the advantages of both SSD and Faster R-CNN in a unified deep framework. It also introduces transfer knowledge (TK) and background depression (BD) regularizations to leverage object knowledge from source and target domains during the fine-tuning stage.
Another fine-tuning-based method, TFA [17], indicates that even a simple fine-tuning of only the last layer of a Faster R-CNN detector on novel classes can achieve better performance than previous meta-learning based methods. In addition, TFA replaces the fully-connected classification heads of Faster R-CNN with cosine similarities. The aim is to reduce intra-class variances and preserve performance on base classes through this feature normalization technique. In [30], the authors introduce a pretrain-transfer framework (PTF) that utilizes a knowledge inheritance approach to initialize the weights of the box classifier. Additionally, they develop an adaptive length re-scaling strategy to ensure consistency of the dimensions of the pretrained weights for both the base and novel classes. This helps to improve the efficiency and effectiveness of the fine-tuning process.

#### II-B2 Metric-learning based Methods

Metric-learning based methods aim to reduce the dimensionality of each sample and learn a feature representation such that similar samples are closer to each other, while dissimilar ones are easier to discriminate. RepMet [31] is a metric-learning approach suitable for both few-shot classification and object detection. It uses a collection of Gaussian mixture models, each with multiple modes, to describe the base and novel classes. During base training, an embedding loss is employed to ensure a margin between the distance of each query feature to its respective class representative and the distance to the nearest representative of an incorrect class. In [32], the authors introduce an attention-based Region Proposal Network (attention-RPN) that uses support features to enhance the proposal generation process and eliminate non-matching or background proposals. Additionally, the authors develop a multi-relation detector for feature representation learning that measures the similarity between the RoI features of query and support objects.

Fig. 2: The overall architecture of the proposed method. The FPN following the backbone is not illustrated in the figures for the sake of simplicity.

#### II-B3 Meta-learning based Methods

Meta-learning based methods are designed to learn quickly from a few examples, using a meta-learner that has been trained on a diverse set of tasks. During the base training stage, the meta-learner is trained on a meta-dataset composed of various tasks. Once trained, the meta-learner can quickly adapt to new tasks or generate a learner that is customized to the target task. One example of meta-learning based methods is Meta-YOLO [33], which improves the query feature of a model by using a set of weighting coefficients generated during the meta-learning phase. These coefficients are based on the support samples and allow the model to effectively learn the intrinsic importance of features for detecting objects. This enables the re-weighting coefficients of novel classes to be learned with only a few support examples. Another approach is FsDetView [34], where a novel technique for aggregating query and support features is introduced. Instead of feature re-weighting, this technique involves performing element-wise multiplication, subtraction, and concatenation between the two sets of features. This approach has shown promising results in few-shot object detection tasks.
### _Few-shot Object Detection in Remote Sensing_

Remote sensing images have unique characteristics that distinguish them from natural scene images, such as complex backgrounds, objects with multiple orientations, and dense and small objects. Thus, designing a few-shot object detection (FSOD) algorithm that accounts for these distinct features is crucial. In SAM&BFS [35], the authors propose a shared attention module that leverages class-agnostic prior knowledge gained during the base training stage to aid in detecting novel objects with significant size variations. In DH-FSDet [36], the authors suggest using a balanced sampling strategy between base and novel samples, taking all the base samples into consideration. Additionally, they propose separating the classification and regression heads in the Region of Interest (RoI) layer according to base and novel classes for better balance in detection. Zhang et al. [37] introduce the generalized FSOD task to remote sensing. On the one hand, they propose a metric-based discriminative loss to reduce the intra-class diversity and increase the inter-class separability of the base and novel objects. On the other hand, they replace the RoI feature extractor with a representation compensation module to prevent the model from catastrophic forgetting. More recently, researchers have explored integrating text data into the visual learning pipeline to improve FSOD performance. Models such as TEMO [38] and TSF-RGR [39] leverage text-modal knowledge extractors to provide prior knowledge on the relationship between base and novel classes, resulting in improved FSOD performance.

### _Self-training_

Self-training [40, 41, 42, 43] is a widely used approach for domain adaptation [44] and semi-supervised learning [25]. This technique involves generating pseudo-labels for the target domain or unlabeled data and then using them to fine-tune the network. Self-training has shown promising results in various tasks, including semi-supervised classification [45] and semantic segmentation [46]. In semi-supervised classification, FixMatch [47] is a popular approach for self-training. It generates pseudo-labels by using a model's predictions on weakly-augmented, unlabeled images. These pseudo-labels are then used to supervise the model's predictions on a strongly-augmented view of the same image. This approach has been shown to achieve state-of-the-art performance on several benchmark datasets. For semantic segmentation tasks, SAC [27] is a self-training approach that employs a momentum network as the teacher network. The teacher network generates pseudo-labels on the weakly-augmented images, which are used to supervise the student network that receives the strongly-augmented images. The momentum network is updated based on the exponential moving average of the student network. This approach has also shown promising results on several benchmark datasets.

### _Incompletely Annotated Novel Objects Issue_

Li et al. [48] are the first to identify the Incompletely Annotated Novel Objects (IANO) issue in their work, pointing out that the base set images might contain unlabeled novel-class objects, leading to false positives. They address this challenge by introducing a distractor utilization loss. More specifically, for each annotated bounding box, a cropped image centered at the object is fed into a few-shot correction network.
This network generates corresponding pseudo labels, which are subsequently used to identify potential novel objects and adjust the loss calculation with respect to the Region of Interest (RoI) head. On a similar note, Qiao et al. [19] emphasize that the IANO issue also exists when multiple novel objects are present within a single image. To resolve this concern, they propose a label calibration method, which recalibrates the predicted targets of background objects based on their predicted confidences. As a result, unannotated novel objects are assigned lower weights during the loss calculation, mitigating their negative impact. Despite these valuable contributions, it is worth noting that the aforementioned approaches primarily address the impact of unannotated objects on bounding box classification, neglecting their influence on the training of the region proposal network (RPN) of a two-stage object detector. Specifically, when calculating the RPN loss, the incompleteness of novel object annotations can cause the RPN to mistakenly predict unannotated novel objects as background, thereby reducing overall performance. In our method, we propose a comprehensive approach to mitigate the IANO issue. We apply self-training techniques not only to the bounding box classification head but also to the RPN. By doing so, we extend our efforts to tackle this challenge on a broader scale, addressing the impact of unannotated objects throughout the entire object detection pipeline.

## III Methods

In this section, we introduce the proposed ST-FSOD (Self-Training-based Few-Shot Object Detection) method, which consists of two major components: the Self-Training Region Proposal Network (ST-RPN) and the Self-Training Bounding Box Head (ST-BBH). The overall architecture is illustrated in Figure 2. The ST-RPN module takes the multi-level features extracted from the backbone and feature pyramid network (FPN) head [49] as input and generates two sets of object proposals, namely, the base and novel proposals, corresponding to the base and novel categories, respectively. These proposals are merged and fed into the RoI pooling and feature extraction layers to obtain the RoI features. The ST-BBH module takes the RoI features as input and produces the final detection results. Specifically, it detects potential unannotated novel class objects and uses them as pseudo labels to recall more novel class objects, thereby improving the model's performance.

### _Problem Formulation_

In this section, we present the formulation of the standard FSOD setting. We assume that we have access to a base set containing base class annotations denoted as \(\mathcal{D}_{base}=\{(\mathbf{I}_{i},\mathcal{B}_{i}^{base})\}\) and a novel set containing novel class annotations denoted as \(\mathcal{D}_{nov}=\{(\mathbf{I}_{i},\mathcal{B}_{i}^{nov})\}\), where \(\mathbf{I}_{i}\) represents an image and \(\mathcal{B}_{i}=\{(x,y,h,w,c)\}\) represents a set of object bounding boxes within the image. Here, \(x,y,w\) and \(h\) denote the location of the bounding box, and \(c\) denotes the class label. Furthermore, let \(\mathcal{C}_{base}\) and \(\mathcal{C}_{nov}\) be the label sets of \(\mathcal{B}_{base}\) and \(\mathcal{B}_{nov}\), respectively; they satisfy the condition \(\mathcal{C}_{base}\cap\mathcal{C}_{nov}=\varnothing\), indicating that there are no common classes between the base and novel classes. Our proposed ST-FSOD method is built on the classic two-stage fine-tuning approach (TFA) [17].
In the first stage of TFA, a base detector is trained using the base set \(\mathcal{D}_{base}\), following the same procedure as for a regular object detector. In the second stage, a few-shot object detector is initialized using the weights obtained in the first stage and fine-tuned on a \(K\)-shot fine-tuning set denoted by \(\mathcal{D}_{ft}=\{(\mathbf{I}_{i},\mathcal{B}_{i}^{base},\mathcal{B}_{i}^{nov})\}\), where the number of novel object bounding boxes for each novel class is \(K\). More formally, for each novel class \(c_{nov}\in\mathcal{C}_{nov}\), we have: \[\forall c_{nov}\in\mathcal{C}_{nov},\] \[\sum_{\mathbf{I}_{i},\mathcal{B}_{i}^{base},\mathcal{B}_{i}^{nov}\in\mathcal{D}_{ft}}|\{(x,y,h,w,c)\in\mathcal{B}_{i}^{nov}|\ c=c_{nov}\}|=K. \tag{1}\] Here \(|\cdot|\) denotes the number of elements of the set.

### _Balanced Sampling Strategy_

One of the key questions in constructing the few-shot fine-tuning set \(\mathcal{D}_{ft}\) is how many base class objects to sample. In the original TFA [17], the authors proposed sampling exactly \(K\) shots of base objects for each base class to maintain a better balance between base and novel classes. However, recent studies have shown that this strategy may not be optimal for remote sensing images [35, 36]. For example, Wolf et al. [36] found that using more base objects and oversampling the few-shot novel objects can improve overall performance. Inspired by these findings, we propose a balanced sampling strategy (BSS) as follows: First, we randomly sample the \(K\)-shot novel objects as usual. Next, we include all base class images and annotations in \(\mathcal{D}_{base}\) into the fine-tuning set \(\mathcal{D}_{ft}\). Finally, when sampling images for fine-tuning, we increase the probability of sampling images \(\mathbf{I}_{i}\) with a corresponding non-empty \(\mathcal{B}_{i}^{nov}\) to ensure they are sampled with the same probability as images that do not contain any novel annotations. By adopting the BSS, we achieve a more balanced fine-tuning set between base and novel classes, while also making full use of all the available base annotations.
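The following minimal PyTorch sketch shows one way to realize this image-level re-weighting with a standard weighted sampler. The function name, the `has_novel` flags, and the group-level balancing interpretation of the strategy are our own illustrative assumptions, not the authors' released implementation.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def bss_sampler(has_novel, num_samples):
    """has_novel[i] is True if image i of the fine-tuning set carries at least
    one novel-class box (an assumed data layout, for illustration only)."""
    has_novel = torch.as_tensor(has_novel, dtype=torch.bool)
    n_with = int(has_novel.sum())
    n_without = int((~has_novel).sum())
    # Up-weight the rare images containing novel boxes so that, in expectation,
    # the two groups of images are drawn equally often during fine-tuning.
    weights = torch.where(has_novel,
                          torch.tensor(float(n_without)),
                          torch.tensor(float(n_with)))
    return WeightedRandomSampler(weights, num_samples=num_samples, replacement=True)
```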
### _Self-training-based Region Proposal Networks_

In the Faster R-CNN architecture, the Region Proposal Network (RPN) is utilized to generate a set of proposals \(\mathcal{P}=\{p=(t_{x},t_{y},t_{w},t_{h},o)\}\) for each input image \(\mathbf{I}\) using multi-scale features extracted from \(\mathbf{I}\). The parameters \(t_{x}\), \(t_{y}\), \(t_{w}\), and \(t_{h}\) represent the coordinates of each proposal, and \(o\) is an objectness score that indicates the probability of the proposal containing an object. Fig. 2 illustrates the architecture of ST-RPN, which consists of three sub-modules: base RPN, teacher RPN, and student RPN. All of these sub-modules follow the original RPN architecture. ST-RPN generates two sets of proposals, \(\mathcal{P}^{base}\) and \(\mathcal{P}^{nov}\), which are obtained as follows:

* The base RPN is responsible for extracting base proposals \(\mathcal{P}^{base}\) from the input image \(\mathbf{I}\). For fine-tuning the base RPN module, only base annotations \(\mathcal{B}^{base}\) are used to calculate the RPN loss: \[\mathcal{L}_{rpn}^{base}(\mathcal{P}^{base},\mathcal{B}^{base})=\sum_{p_{i}\in\mathcal{P}^{base}}L(p_{i},p_{i}^{*}), \tag{2}\] where \(p_{i}^{*}\) denotes the regression and classification targets obtained by matching \(\mathcal{P}^{base}\) with \(\mathcal{B}^{base}\) according to the intersection over union (IoU) between each pair of proposal and ground truth bounding box [2].
* The teacher RPN generates a set of ignored proposals \(\mathcal{P}^{ign}\), which includes output proposals from this module that have an objectness score \(o\) greater than a given threshold \(\tau_{\text{rpn}}\). Please refer to Sec. IV-B for the selection of \(\tau_{\text{rpn}}\).
* The student RPN receives the features extracted from the backbone and the FPN head [49] with dropout [50] applied and outputs a set of novel proposals \(\mathcal{P}^{nov}\). This module is trainable and is supervised only by the few-shot novel annotations \(\mathcal{B}^{nov}\). During the calculation of the regression and classification target of each output proposal, those with a high overlap with the ignored proposals in \(\mathcal{P}^{ign}\) are excluded from the loss calculation. The loss function for the student RPN can be formulated as follows: \[\mathcal{L}^{nov}_{rpn}(\mathcal{P}^{nov},\mathcal{P}^{ign},\mathcal{B}^{nov})=\sum_{p_{i}\in\mathcal{P}^{nov}}w_{i}L(p_{i},p_{i}^{*}). \tag{3}\] Here \(p_{i}^{*}\) is the box target obtained by assigning \(\mathcal{P}^{nov}\) to \(\mathcal{B}^{nov}\), and \(w_{i}\) is a weighting coefficient that is used to ignore the highly confident proposals that might contain novel objects: \[w_{i}=\begin{cases}0,&\text{if }\exists\ p_{j}\in\mathcal{P}^{ign},\text{IoU}(p_{j},p_{i})>0.7,\\ 1,&\text{otherwise}.\end{cases} \tag{4}\] Here \(\text{IoU}(\cdot,\cdot)\) denotes the IoU value of the two proposals.

The separation of base and novel proposals has two benefits. First, during fine-tuning, the pre-trained weights obtained from the base training stage could be negatively affected, which can impact the quality of the extracted base proposals. Separating the proposal extraction prevents the fine-tuning towards novel objects from biasing the extraction of base proposals. Second, previous FSOD works in remote sensing [35] have found that the RPN module obtained from the base training stage often fails to recall novel objects successfully. Thus, training an additional RPN module from scratch can lead to a better network state in terms of novel object detection. During the extraction of novel proposals \(\mathcal{P}^{nov}\), using a student-teacher-based self-training mechanism and introducing ignored proposals into the loss calculation prevents potential unannotated novel objects (UNO) from being misclassified as background. This helps the RPN module to recall more novel objects and improves detection performance.
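The ignore rule of Eqs. (3)-(4) can be summarized in a few lines of PyTorch; the tensor names and the (x1, y1, x2, y2) box format are illustrative assumptions, while `box_iou` is the standard torchvision helper.

```python
import torch
from torchvision.ops import box_iou

def novel_rpn_weights(novel_boxes, ignored_boxes, iou_thr=0.7):
    """Weights w_i of Eq. (4): proposals overlapping a teacher-flagged ignored
    proposal by more than iou_thr are zeroed out, so they drop out of the
    loss of Eq. (3). Boxes are (N, 4) tensors in (x1, y1, x2, y2) format."""
    w = torch.ones(novel_boxes.shape[0])
    if ignored_boxes.numel() > 0:
        ious = box_iou(novel_boxes, ignored_boxes)   # (N_nov, N_ign) IoU matrix
        w[ious.max(dim=1).values > iou_thr] = 0.0
    return w

# With per-proposal losses l, the masked loss of Eq. (3) is then simply
# (novel_rpn_weights(p_nov, p_ign) * l).sum().
```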
### _Self-training-based Bounding Box Head_

In the proposed framework, the extracted base and novel proposals are merged and passed through an RoI pooling layer and an RoI feature extraction layer to obtain the corresponding RoI features. These features are then forwarded to the self-training bounding box head (ST-BBH) to obtain the final detection results. The ST-BBH consists of a teacher and a student bounding box head (BBH). Each BBH contains a bounding box classifier and a regressor, following the same architecture as the one proposed in Faster R-CNN [26]. While processing each RoI, the feature is input to both the teacher and student classifier heads. In order to improve the robustness, the student head receives the feature with dropout [50] applied as its input. Let \(\mathbf{u}^{stu}\) and \(\mathbf{u}^{tch}\) be the output probabilities of the student and teacher BBH, and \(v\) be the corresponding ground truth label. The student classifier's classification loss is calculated using the following equation: \[\mathcal{L}_{bbh}(\mathbf{u}^{\text{stu}},\mathbf{u}^{\text{tch}},v)=\begin{cases}L_{\text{cls}}(\mathbf{u}^{\text{stu}},\hat{u}^{\text{tch}}),&\text{if }v=0\text{ and }\max(\mathbf{u}^{\text{tch}})>\tau_{\text{box}},\\ L_{\text{cls}}(\mathbf{u}^{\text{stu}},v),&\text{otherwise}.\end{cases} \tag{5}\] Here, \(\hat{u}^{\text{tch}}\) represents the class index with the highest value in \(\mathbf{u}^{\text{tch}}\). The assigned ground truth class label of the RoI is denoted by \(v\), with \(v=0\) indicating that the RoI is considered as background. The threshold \(\tau_{\text{box}}\) is used to determine when to use the prediction from the teacher bounding box head as the pseudo label. Please refer to Sec. IV-B for the selection of \(\tau_{\text{box}}\).

Overall, ST-RPN and ST-BBH share the same self-training philosophy with a momentum network, represented in our context as the teacher module. This teacher module maintains a slowly updated copy of the original module, ensuring stable yet recent targets (or pseudo-labels) for model updates, as discussed in [27]. However, it is important to note that ST-RPN and ST-BBH are technically distinct: ST-RPN is employed to extract class-agnostic proposals, while ST-BBH is specifically designed for the classification and regression of class-specific bounding boxes. Furthermore, we emphasize that in ST-RPN, we explicitly separate the extraction of base and novel proposals, whereas in ST-BBH, the classification and regression heads for both base and novel classes share the same RoI features.

### _Weights Initialization & Update_

As depicted in Fig. 2, the different trainable and fixed modules are highlighted in different colors. Among these modules, the backbone, FPN head, base RPN, and RoI feature extraction layer are initialized with the pre-trained weights obtained from the base training stage. For the classifier and regressor of both the student and teacher BBH, the entries of their weight matrices corresponding to the base classes are initialized based on the pre-trained weights, while the entries for the novel classes are randomly initialized. The student and teacher RPN modules of the ST-RPN are randomly initialized. It is worth noting that the weights of the teacher RPN and teacher BBH are not trainable. Instead, they are updated from the corresponding student module's weights using an exponential moving average [27]: \[\theta_{\mathcal{T}}^{(t)}=\alpha\theta_{\mathcal{T}}^{(t-1)}+(1-\alpha)\theta_{\mathcal{S}}^{(t)}. \tag{6}\] Here \(\theta_{\mathcal{T}}^{(t)}\) and \(\theta_{\mathcal{S}}^{(t)}\) denote the weights of the teacher networks and their corresponding student networks at time step \(t\) during the training stage, respectively. \(\alpha\) is a decay weight that is set to \(0.999\) following [27].
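A compact sketch of the teacher update of Eq. (6) together with the pseudo-label gate of Eq. (5) is given below; the module and variable names are illustrative, and the real heads also involve box regression, which is omitted here.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    # Eq. (6): theta_T <- alpha * theta_T + (1 - alpha) * theta_S
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

def bbh_targets(u_teacher, labels, tau_box=0.8):
    """Eq. (5), read literally: for RoIs labelled background (label 0), adopt
    the teacher's confident argmax prediction as the pseudo label; otherwise
    keep the assigned ground truth. u_teacher: (N, C) class probabilities."""
    conf, pseudo = u_teacher.max(dim=1)
    take_pseudo = (labels == 0) & (conf > tau_box)
    return torch.where(take_pseudo, pseudo, labels)
```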
## IV Experiments

In this section, we present the experimental results of our proposed method on various benchmarks for FSOD in remote sensing.

### _Experimental Settings_

We evaluate the proposed method on three large-scale public object detection datasets in remote sensing: NWPU-VHR10 v2 [51], DIOR [52] and iSAID [21]. The NWPU-VHR10 v2 dataset comprises a total of \(1,172\) images, each with dimensions \(400\times 400\), divided into \(10\) categories. Following previous works [53], the \(3\) categories "airplane", "baseball diamond", and "tennis court" are adopted as novel classes, while the others serve as base classes. In line with previous research, we employ the training and validation sets to fine-tune the model, and report the performance on the test set, which contains \(293\) images. The DIOR dataset contains \(23,463\) images and over \(190,000\) instances, with an image size of \(800\times 800\). All objects are categorized into 20 classes. In previous literature, two settings are commonly adopted. The first setting, proposed by Li et al. [20], uses \(5\) categories (i.e., "plane", "baseball field", "tennis court", "train station", and "wind mill") as the novel categories and the remaining ones as the base categories. In this setting, the training set is used for base training and few-shot fine-tuning, while the validation set is used for evaluation. The second setting, proposed in [54], includes a total of \(4\) base-novel class splits, each containing \(5\) novel categories and \(15\) base categories. In this setting, both the training and validation sets are used for base training and fine-tuning, and the test set is used for evaluation. iSAID is a large-scale instance segmentation dataset for remote sensing. It is built on the same image set as DOTA [55], but provides instance-level mask annotations as well as finer bounding box annotations. iSAID contains 2,806 images, whose sizes range from \(800\times 800\) to \(20,000\times 20,000\). In total, there are \(655,451\) annotated objects, which are classified into \(15\) categories. We follow the official data pre-processing pipeline to crop the images into \(800\times 800\) patches, with an overlap of \(25\%\). We follow the FSOD setting of [36], which uses \(3\) different base-novel class splits, and sets the number of shots for each split to \(10\), \(50\), and \(100\). To make a fair comparison with previous works, we adopt the mean average precision (mAP) at an IoU threshold of \(0.5\) as the evaluation metric, following common practice.

### _Implementation Details_

The proposed method uses the Faster R-CNN architecture [2] with a ResNet101 [56] backbone that is pre-trained on the ImageNet dataset. A feature pyramid network (FPN) [49] is used to generate multi-scale features. The AdamW optimizer [57] with a weight decay of \(0.01\) and a learning rate of \(1\times 10^{-4}\) is used to train the model in all settings. The base training stage for the NWPU-VHR10 v2, DIOR and iSAID datasets lasts for \(10,000\), \(40,000\) and \(80,000\) iterations, respectively. Learning rate decay with a factor of 0.1 is applied at \(5,000\) and \(8,000\) iterations for the NWPU-VHR10 v2 dataset, at \(24,000\) and \(32,000\) iterations for the DIOR dataset, and at \(40,000\) and \(60,000\) iterations for the iSAID dataset. The few-shot fine-tuning lasts for \(2,000\) iterations for the NWPU-VHR10 v2 dataset and \(10,000\) iterations for the other settings. For data pre-processing and augmentation, image patches are randomly cropped to sizes of 400 \(\times\) 400 for the NWPU-VHR10 v2 dataset and 608 \(\times\) 608 for the others. Multi-scale training with a range from 0.5 to 2.0, random flipping, and random rotation with degrees of 90, 180, and 270 are applied. The batch size is set to 16 for base training and 8 for fine-tuning. The momentum parameter is set to \(\alpha=0.999\) when updating the teacher networks' weights by exponential moving average [27]. The thresholds \(\tau_{\text{rpn}}\) and \(\tau_{\text{box}}\) used in ST-RPN and ST-BBH are set to 0.8. In Sec. IV-F, sensitivity analyses for these hyperparameters are provided. Our codes are based on PyTorch, EarthNets [58], and the MMDetection [59] platform. For more information, please refer to our published codes.
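For concreteness, the schedule above can be summarized in an MMDetection-style configuration sketch; the field values mirror the text for the iSAID base training, but this is an assumed reconstruction for illustration, not the authors' released configuration file.

```python
# MMDetection-style schedule sketch for the iSAID base training described above.
optimizer = dict(type='AdamW', lr=1e-4, weight_decay=0.01)
lr_config = dict(policy='step', by_epoch=False, step=[40000, 60000], gamma=0.1)
runner = dict(type='IterBasedRunner', max_iters=80000)
data = dict(samples_per_gpu=16)  # batch size 16 for base training, 8 for fine-tuning
```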
### _Quantitative Results_

The quantitative results for the NWPU-VHR10 v2, DIOR, and iSAID datasets are presented in Table I, Tables II and III, and Table IV, respectively. Overall, our proposed method achieves superior or comparable performance across all datasets and various settings related to novel objects. Specifically: On the NWPU-VHR10 v2 dataset, our method attains state-of-the-art performance in both the 3- and 20-shot settings, surpassing the second-best results by a notable margin ranging from 3% to 7%. Additionally, it performs comparably to state-of-the-art models in the 5- and 10-shot settings. Regarding the DIOR dataset, as shown in Table II, our method consistently outperforms the second-best approach by a substantial margin of 5% to 7%. In Table III, our method excels in all settings except for the 20-shot setting of split 3. Remarkably, our results on split 1 demonstrate improvements exceeding 10%. On the iSAID dataset, our proposed method delivers significant enhancements, ranging from 5% to over 10%. These improvements remain consistent across various splits and different numbers of shots. These results not only demonstrate the effectiveness of our proposed method but also underscore the significance of addressing the issue of incompletely annotated novel objects. It is worth noting that performance variance tends to be higher in cases with fewer shots, suggesting that FSOD performance is somewhat sensitive to the sampling of annotated novel objects, especially when the available shots are limited.

\begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Method / Shots} & \multicolumn{4}{c|}{Novel Classes} & \multicolumn{4}{c}{Base Classes} \\ & 3 & 5 & 10 & 20 & 3 & 5 & 10 & 20 \\ \hline OFA [53] & 32.8 & 37.9 & 40.7 & - & - & - & - & - \\ FSODM [20] & - & 25 & 32 & 36 & - & - & - & - \\ SAM\&BFS [35] & - & 38.3 & 47.3 & 50.9 & - & - & - & - \\ PAMS-Det [60] & 28 & 33 & 38 & - & - & - & - & - \\ CIR-FSD [61] & - & 33 & 38 & 43 & - & - & - & - \\ TFACSC [62] & 38 & 42 & 47 & - & - & - & - & - \\ SAGS\&TFS [63] & - & 34 & 37 & 42 & - & - & - & - \\ TSF-RGR [39] & - & 42 & 49 & 54 & - & - & - & - \\ \hline Ours & **43.5 \(\pm\) 5.5** & **48.3 \(\pm\) 1.3** & **55.8 \(\pm\) 1.5** & **61.3 \(\pm\) 2.0** & 75.0 \(\pm\) 0.3 & 75.6 \(\pm\) 0.5 & 74.4 \(\pm\) 0.3 & 73.2 \(\pm\) 2.1 \\ \hline \hline \end{tabular} \end{table} TABLE II: Average Precision (AP) (in \(\%\)) at an IoU threshold of \(0.5\) of different methods on the DIOR dataset, where the base-novel class split follows [20]. Averaged results and the standard deviations of \(3\) different runs are reported for the proposed method.
\begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Method / Shots} & \multicolumn{4}{c|}{Novel Classes} & \multicolumn{4}{c}{Base Classes} \\ & 3 & 5 & 10 & 20 & 3 & 5 & 10 & 20 \\ \hline OFA [53] & 43.2 & 60.4 & 66.7 & - & - & - & - & - \\ FSODM [20] & 32 & 53 & 65 & - & - & - & - & - \\ SAM\&BFS [35] & 47.0 & 61.6 & 74.9 & - & - & - & - & - \\ PAMS-Det [60] & 37 & 55 & 66 & - & - & - & - & - \\ CIR-FSD [61] & 54 & 64 & 70 & - & - & - & - & - \\ TFACSC [62] & 47 & **67** & 72 & - & - & - & - & - \\ SAGS\&TFS [63] & 51 & 66 & 72 & - & - & - & - & - \\ TSF-RGR [39] & 57 & 66 & **77** & - & - & - & - & - \\ G-FSOD [37] & 50.1 & 58.8 & 67.0 & 75.9 & **90.0** & **90.5** & 89.2 & **90.6** \\ \hline Ours & **60.7 \(\pm\) 2.1** & **67.2 \(\pm\) 1.2** & **77.2 \(\pm\) 2.5** & **83.3 \(\pm\) 1.5** & 98.7 \(\pm\) 0.4 & 89.1 \(\pm\) 0.5 & **89.3 \(\pm\) 1.1** & 89.9 \(\pm\) 0.5 \\ \hline \hline \end{tabular} \end{table} TABLE I: Average Precision (AP) (in \(\%\)) at an IoU threshold of \(0.5\) of different methods on the NWPU-VHR v2 dataset, where the base-novel class split follows [53]. Averaged results and the standard deviations of \(3\) different runs are reported for the proposed method.

\begin{table} \begin{tabular}{c|c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Shots} & \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{Novel Classes} & \multicolumn{4}{c}{Base Classes} \\ & & split1 & split2 & split3 & split4 & split1 & split2 & split3 & split4 \\ \hline 3 & P-CNN [54] & 18.0 & 14.5 & 16.5 & 15.2 & 47.0 & 48.9 & 49.5 & 49.8 \\ & SAGS\&TFS [63] & 29.3 & 12.6 & **20.9** & 17.5 & - & - & - & - \\ & G-FSOD [37] & 27.6 & 14.1 & 16.0 & 16.7 & 68.9 & 69.2 & 71.1 & 69.0 \\ & Ours & **41.9 \(\pm\) 0.6** & **17.7 \(\pm\) 2.0** & **20.9 \(\pm\) 0.4** & **20.4 \(\pm\) 3.6** & **73.5 \(\pm\) 0.5** & **72.5 \(\pm\) 0.5** & **75.2 \(\pm\) 0.4** & **73.3 \(\pm\) 0.6** \\ \hline 5 & P-CNN [54] & 22.8 & 14.9 & 18.8 & 17.5 & 48.4 & 49.1 & 49.9 & 49.9 \\ & SAGS\&TFS [63] & 31.6 & 15.5 & 24.8 & 19.7 & - & - & - & - \\ & G-FSOD [37] & 30.5 & 15.8 & 23.3 & 21.0 & 69.5 & 69.3 & 70.2 & 68.0 \\ & Ours & **45.7 \(\pm\) 1.6** & **20.7 \(\pm\) 2.8** & **26.0 \(\pm\) 2.5** & **25.2 \(\pm\) 4.5** & **73.3 \(\pm\) 0.4** & **72.7 \(\pm\) 0.4** & **75.6 \(\pm\) 0.6** & **73.5 \(\pm\) 0.4** \\ \hline 10 & P-CNN [54] & 27.6 & 18.9 & 23.3 & 18.9 & 50.9 & 52.5 & 52.1 & 51.7 \\ & SAGS\&TFS [63] & 31.6 & 15.5 & 24.8 & 19.7 & - & - & - & - \\ & G-FSOD [37] & 37.5 & 20.7 & 26.2 & 25.8 & 69.0 & 68.7 & 71.1 & 68.6 \\ & Ours & **50.0 \(\pm\) 1.5** & **27.3 \(\pm\) 1.1** & **31.3 \(\pm\) 0.3** & **33.4 \(\pm\) 1.1** & **72.6 \(\pm\) 0.3** & **72.3 \(\pm\) 0.5** & **75.7 \(\pm\) 0.4** & **73.9 \(\pm\) 0.2** \\ \hline 20 & P-CNN [54] & 29.6 & 22.8 & 28.8 & 25.7 & 52.2 & 51.6 & 53.1 & 52.3 \\ & SAGS\&TFS [63] & 40.2 & 23.8 & **36.1** & 27.7 & - & - & - & - \\ & G-FSOD [37] & 39.8 & 22.7 & 32.1 & 31.8 & 69.8 & 68.2 & 71.3 & 67.7 \\ & Ours & **53.7 \(\pm\) 1.1** & **33.4 \(\pm\) 0.4** & 34.6 \(\pm\) 1.9 & **38.2 \(\pm\) 2.0** & **73.3 \(\pm\) 0.5** & **73.3 \(\pm\) 0.5** & **75.5 \(\pm\) 0.2** & **73.8 \(\pm\) 0.2** \\ \hline \hline \end{tabular} \end{table} TABLE III: Average Precision (AP) (in \(\%\)) at an IoU threshold of \(0.5\) of different methods on the DIOR dataset, where the 4 base-novel class splits follow [54]. Averaged results and the standard deviations of \(3\) different runs are reported for the proposed method.

In addition, the proposed few-shot fine-tuning does not negatively influence the detection of base class objects.
As evidenced by the results depicted in the 5th column of Fig. 4 and the 3rd column of Fig. 5, our method is able to detect challenging, small base class objects such as storage tanks and vehicles.

### _Ablation Studies_

We conduct ablation studies on the first split of the iSAID dataset to better understand the effect of all the components used in the proposed method. The results are presented in Table V and Fig. 6 for quantitative and qualitative analysis, respectively. The following observations can be made from the results:

* The naive fine-tuning strategy introduced in [17] has a detrimental effect on the detection of base class objects. A noticeable performance decline becomes evident when comparing the results in the first row to those in the second row, according to Table V. This decline in performance stems from the fact that this particular strategy employs only a limited subset of base class annotations for model fine-tuning, leading to incomplete annotations for the base class.
* BSS proves to be highly effective in preserving the base class performance after the few-shot fine-tuning process, at the cost of a slight trade-off in novel class performance. This highlights the critical importance of fine-tuning with a complete base annotation set for the maintenance of base class performance in satellite image-based FSOD.
* Applying the proposed ST-RPN module does not influence the base class performance, owing to the separation of the extraction of base and novel proposals. In addition, applying the proposed ST-RPN module is beneficial to improving the performance on the novel classes.
* After applying the proposed ST-BBH module, there is a slight performance decay on the base classes. One possible reason is that, since the base and novel classes share the same bounding box head (BBH), the fine-tuning on novel objects affects the general feature extraction within the BBH and further affects the detection of base classes. However, there is a consistent improvement on the novel classes after applying ST-BBH, ranging from 3% to even 10%, at different numbers of shots.
* By combining all the modules, the highest novel class accuracy is achieved, with a margin of 6% to nearly 20% compared to the naive fine-tuning strategy. This verifies the effectiveness of the proposed ST-RPN and ST-BBH modules in solving the IANO issue.

Fig. 3: Visualized FSOD results of the proposed methods under the \(K=20\) shots setting on the NWPU-VHR10 v2 dataset. The base-novel class split follows [53].

\begin{table} \begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{Shots} & \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Novel Classes} & \multicolumn{3}{c}{Base Classes} \\ & & split1 & split2 & split3 & split1 & split2 & split3 \\ \hline 10 & FSDetView [34] & 1.3 \(\pm\) 0.3 & 8.7 \(\pm\) 2.1 & 4.6 \(\pm\) 1.2 & 33.8 \(\pm\) 0.5 & 29.8 \(\pm\) 1.6 & 32.9 \(\pm\) 3.4 \\ & TFA [17] & 3.3 \(\pm\) 0.8 & 9.0 \(\pm\) 2.6 & 3.8 \(\pm\) 1.1 & 58.6 \(\pm\) 0.3 & 56.5 \(\pm\) 0.8 & 59.0 \(\pm\) 1.5 \\ & DH-FSDet [36] & 5.2 \(\pm\) 0.8 & 14.5 \(\pm\) 1.7 & 9.7 \(\pm\) 2.2 & **65.0 \(\pm\) 0.2** & **64.5 \(\pm\) 0.1** & **67.8 \(\pm\) 0.1** \\ & Ours & **10.2 \(\pm\) 3.3** & **17.7 \(\pm\) 3.8** & **14.0 \(\pm\) 2.1** & 63.7 \(\pm\) 0.4 & 62.4 \(\pm\) 0.4 & 66.1 \(\pm\) 0.7 \\ \hline 50 & FSDetView [34] & 7.2 \(\pm\) 2.3 & 26.8 \(\pm\) 2.8 & 17.1 \(\pm\) 1.1 & 35.3 \(\pm\) 0.5 & 30.0 \(\pm\) 1.1 & 34.6 \(\pm\) 1.1 \\ & TFA [17] & 4.7 \(\pm\) 0.0 & 12.1 \(\pm\) 1.9 & 5.6 \(\pm\) 1.4 & 60.7 \(\pm\) 0.5 & 58.5 \(\pm\) 0.8 & 60.9 \(\pm\) 0.3 \\ & DH-FSDet [36] & 12.8 \(\pm\) 0.8 & 28.9 \(\pm\) 3.4 & 19.6 \(\pm\) 2.4 & **65.1 \(\pm\) 0.1** & **64.7 \(\pm\) 0.1** & **68.0 \(\pm\) 0.1** \\ & Ours & **24.8 \(\pm\) 2.1** & **39.3 \(\pm\) 2.1** & **31.1 \(\pm\) 1.2** & 62.8 \(\pm\) 0.4 & 62.6 \(\pm\) 0.3 & 65.8 \(\pm\) 0.1 \\ \hline 100 & FSDetView [34] & 10.2 \(\pm\) 1.2 & 32.8 \(\pm\) 2.0 & 24.1 \(\pm\) 1.1 & 36.4 \(\pm\) 0.6 & 30.4 \(\pm\) 0.4 & 34.5 \(\pm\) 1.3 \\ & TFA [17] & 5.0 \(\pm\) 0.3 & 14.4 \(\pm\) 1.5 & 5.4 \(\pm\) 1.1 & 61.4 \(\pm\) 0.3 & 59.2 \(\pm\) 0.2 & 61.6 \(\pm\) 0.4 \\ & DH-FSDet [36] & 16.7 \(\pm\) 1.7 & 36.0 \(\pm\) 1.7 & 23.1 \(\pm\) 0.9 & **65.2 \(\pm\) 0.1** & **64.8 \(\pm\) 0.1** & **68.1 \(\pm\) 0.1** \\ & Ours & **34.3 \(\pm\) 1.9** & **45.0 \(\pm\) 1.0** & **33.0 \(\pm\) 1.3** & 63.3 \(\pm\) 0.3 & 62.9 \(\pm\) 0.2 & 65.6 \(\pm\) 0.1 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Average Precision (AP) (in \(\%\)) at an IoU threshold of \(0.5\) of different methods on the iSAID dataset, where the 3 base-novel class splits follow [36]. Averaged results and the standard deviations of \(3\) different runs are reported for the proposed method. Results of FSDetView and TFA are cited from [36].

\begin{table} \begin{tabular}{c c c c|c c c|c c c} \hline \hline \multirow{2}{*}{w/ FT} & \multirow{2}{*}{BSS} & \multirow{2}{*}{ST-RPN} & \multirow{2}{*}{ST-BBH} & \multicolumn{3}{c|}{Novel Classes} & \multicolumn{3}{c}{Base Classes} \\ & & & & 10 & 50 & 100 & 10 & 50 & 100 \\ \hline & & & & - & - & - & 65.8 & 65.8 & 65.8 \\ ✓ & & & & 3.7 & 13.1 & 17.0 & 39.6 & 54.2 & 57.8 \\ \hline \hline \end{tabular} \end{table} TABLE V: Ablation studies on split 1 of the iSAID dataset.

Fig. 6 shows visualized results for different ablated models. The second row shows that using the ST-RPN module helps to recall more large vehicle objects. However, using ST-RPN alone without ST-BBH may lead to a higher false positive rate, as can be seen from the first row. By combining ST-RPN and ST-BBH, the best visualization quality is achieved. The third row demonstrates that this combination can help to detect small ship objects with high accuracy. These results further demonstrate the significance of incorporating the self-training mechanism to solve the unlabeled novel object issue.

\begin{table} \begin{tabular}{c c c|c c c|c c c} \hline \hline \multirow{2}{*}{\(\tau_{\text{rpn}}\)} & \multirow{2}{*}{\(\tau_{\text{box}}\)} & \multirow{2}{*}{\(\alpha\)} & \multicolumn{3}{c|}{Novel Classes} & \multicolumn{3}{c}{Base Classes} \\ & & & 10 & 50 & 100 & 10 & 50 & 100 \\ \hline 0.8 & 0.8 & 0.999 & 10.3 & 23.4 & 35.5 & 63.0 & 62.3 & 63.0 \\ 0.8 & 0.6 & 0.999 & 10.1 & **25.1** & 34.4 & 63.8 & **63.2** & 62.9 \\ 0.8 & 0.9 & 0.999 & 10.0 & 22.8 & **36.5** & **64.0** & 62.4 & 62.9 \\ 0.6 & 0.8 & 0.999 & **10.5** & 22.4 & 35.0 & 63.4 & 62.4 & 62.5 \\ 0.9 & 0.8 & 0.999 & 10.5 & 23.7 & 36.4 & 63.9 & 61.9 & **63.3** \\ 0.8 & 0.8 & 0.999 & 10.4 & 23.9 & 35.3 & 63.5 & 62.7 & **63.3** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Sensitivity analyses of the hyper-parameters on split 1 of the iSAID benchmark.

\begin{table} \begin{tabular}{c c|c|c|c} \hline \hline ST-RPN & ST-BBH & mAP & Training Times & Inference FPS \\ \hline & & 16.2 & 0.258 & 18.8 \\ ✓ & & 22.3 & 0.458 & 16.0 \\ & ✓ & 27.0 & 0.260 & 18.7 \\ ✓ & ✓ & 35.5 & 0.479 & 15.9 \\ \hline \hline \end{tabular} \end{table} TABLE VII: Training times (seconds per iteration) and inference frames per second (FPS) of the proposed method evaluated on the iSAID dataset. mAP (%) is also reported for comparison.

Fig. 6: Visualized FSOD results of the proposed methods under different ablations on the iSAID dataset.
The number of novel shots is set to \(K=100\). The base-novel class split is split 1 in [36].

### _Sensitivity Analyses of Hyperparameters_

We conducted a sensitivity analysis to evaluate the impact of hyper-parameter selection on the proposed method. The hyper-parameters we tested include the two self-training thresholds \(\tau_{\text{rpn}}\) and \(\tau_{\text{box}}\) used in ST-RPN and ST-BBH, and the momentum \(\alpha\) used when updating the teacher networks via EMA [27]. The results are presented in Tab. VI. We observed slight performance fluctuations (generally less than \(2\%\)) with different hyper-parameter values. However, compared to the variances caused by different sampling seeds, as shown in Tabs. II, III and IV, such fluctuations are not significant. Therefore, we can conclude that the proposed method is not highly sensitive to the values of the aforementioned hyper-parameters.

### _Computational Efficiency_

To assess the incremental computational demands introduced by our two proposed modules, ST-RPN and ST-BBH, we conducted an evaluation based on training times (measured in seconds per iteration) during the fine-tuning stage and inference frames per second (FPS) when these modules were integrated. The outcomes, as illustrated in Table VII, indicate that the inclusion of ST-BBH results in a negligible computational overhead, both during the training and inference phases. This outcome aligns with our expectations, as ST-BBH simply introduces a pair of additional bounding box regressor and classifier layers on top of the Region of Interest (RoI) features. In contrast, the integration of ST-RPN introduces a more substantial computational load during the training phase. However, it is crucial to note that the fine-tuning phase in FSOD algorithms typically involves significantly fewer iterations compared to the base training phase (see our settings in Section IV-B). As such, the additional training costs remain within acceptable bounds. Furthermore, while ST-RPN does marginally reduce the inference speed, it is essential to consider this cost in light of the performance gains it offers. This expense is manageable given the corresponding boost in detection performance. Additionally, upon closer investigation, we found that a substantial portion of the computational cost introduced by ST-RPN is attributed to the additional bounding box operations, such as non-maximum suppression (NMS). These operations can be optimized and accelerated through the utilization of CUDA implementations.

### _Failure Cases_

To gain a better understanding of the limitations of the proposed FSOD method, we visualize some failure cases in different FSOD settings, as shown in Fig. 7. Based on the figure, we can observe that the majority of the missed detections are primarily due to the small size of the objects ("ships" in the 3rd column), or to objects with large size variance compared to the training data ("ships" in the 2nd column). Additionally, there may be duplicated detected boxes for objects that lack clear boundaries, such as the "tennis court" in the 1st column or the "soccerball field" in the 4th column.
While these issues are prevalent in general object detection [64], improved techniques for addressing them can also be utilized to enhance the performance of FSOD.

Fig. 7: Visualized failure cases of the proposed method on the NWPU-VHR10 v2, DIOR and iSAID datasets. The base-novel class splits are the same as the splits in Figs. 3, 4, and 5.

## V Conclusion

In this paper, we analyzed the current FSOD setting for remote sensing and identified the issue of incompletely annotated novel objects, which can negatively impact the performance of FSOD methods. To address this issue, we propose to incorporate the self-training mechanism into the classical two-stage fine-tuning-based FSOD pipeline. Our approach includes an ST-RPN module, which generates a set of novel proposals by excluding from the loss calculation those proposals that are likely to be novel objects but cannot be assigned to an existing few-shot annotation. Additionally, we designed an ST-BBH module that leverages the pseudo labels generated by a teacher BBH to identify potential novel bounding boxes that are unlabeled and use them to supervise the student BBH to recall more novel objects. While our proposed method significantly improves the novel class FSOD performance in remote sensing, the base class performance may slightly decrease compared to the base model. Future work could focus on designing a generalized FSOD method that prevents the model from forgetting the previously learned base knowledge while improving the performance in detecting novel classes.
2309.05813
Design and Validation of a Metallic Reflectarray for Communications at True Terahertz Frequencies
Wireless communications in the terahertz band (0.1-10 THz) is a promising and key wireless technology enabling ultra-high data rate communication over multi-gigahertz-wide bandwidths, thus fulfilling the demand for denser networks. The complex propagation environment at such high frequencies introduces several challenges, such as high spreading and molecular absorption losses. As such, intelligent reflecting surfaces have been proposed as a promising solution to enable communication in the presence of blockage or to aid a resource-limited quasi-omnidirectional transmitter direct its radiated power. In this paper, we present a metallic reflectarray design achieving controlled non-specular reflection at true terahertz frequencies (i.e., 1-1.05 THz). We conduct extensive experiments to further characterize and validate its working principle using terahertz time-domain spectroscopy and demonstrate its effectiveness with information-carrying signals using a continuous-wave terahertz testbed. Our results show that the reflectarray can help facilitate robust communication links over non-specular paths and improve the reliability of terahertz communications, thereby unleashing the true potential of the terahertz band.
Sherif Badran, Arjun Singh, Arpit Jaiswal, Erik Einarsson, Josep M. Jornet
2023-09-11T20:35:20Z
http://arxiv.org/abs/2309.05813v2
# Design and Validation of a Metallic Reflectarray for Communications at True Terahertz Frequencies

###### Abstract.
Wireless communications in the terahertz band (0.1-10 THz) is a promising and key wireless technology enabling ultra-high data rate communication over multi-gigahertz-wide bandwidths, thus fulfilling the demand for denser networks. The complex propagation environment at such high frequencies introduces several challenges, such as high spreading and molecular absorption losses. As such, intelligent reflecting surfaces have been proposed as a promising solution to enable communication in the presence of blockage or to aid a resource-limited quasi-omnidirectional transmitter direct its radiated power. In this paper, we present a metallic reflectarray design achieving controlled non-specular reflection at true terahertz frequencies (i.e., 1-1.05 THz). We conduct extensive experiments to further characterize and validate its working principle using terahertz time-domain spectroscopy and demonstrate its effectiveness with information-carrying signals using a continuous-wave terahertz testbed. Our results show that the reflectarray can help facilitate robust communication links over non-specular paths and improve the reliability of terahertz communications, thereby unleashing the true potential of the terahertz band.

Terahertz communications, reflectarrays, intelligent reflecting surfaces, wavefront engineering.

## 1. Introduction

First, molecular absorption divides the THz band into multiple absorption-defined windows, where the usable bandwidth changes with distance as well as other medium parameters. Second, the very small size of THz antennas leads to a low effective area. While this is not a problem for nanonetworking applications, as the expected transmission range is usually under one meter, the small effective area of the antenna leads to high spreading losses for macroscale applications of the THz band. This requires the adoption of high-gain directional antennas and/or focusing lenses. Finally, the interaction of THz radiation with not just gases but effectively most materials can lead to significant blockage resulting from signal absorption and/or reflection. To overcome the challenging propagation of THz signals, the adoption of intelligent reflecting surfaces (IRSs) has been proposed (Hasegawa et al., 2016; Wang et al., 2017; Wang et al., 2018). IRSs can engineer the reflection of EM signals, introducing, for example, non-specular reflections, as well as more advanced functionalities such as polarization switching or wavefront engineering, including the transformation of spherical or Gaussian wavefronts into more robust Bessel beams (Hasegawa et al., 2016). Physically, IRSs come mostly in two forms, namely, reflectarrays and metasurfaces. Reflectarrays are integrated by metallic reflecting elements whose size and spacing are on the order of half a wavelength. The reflection phase or delay associated with each element can be set by means of controllable delay lines (Hasegawa et al., 2016). Metasurfaces, instead, are integrated by elements that are much smaller than the EM signal wavelength, known as meta-atoms. The smaller element size leads to enhanced functionalities but also results in more challenging control of the elements. Although the end goal is usually to have tunable reflectarrays and metasurfaces, tunability is not always needed.
In many contexts, having a fixed response is sufficient, such as in the case of an indoor communication scenario where there are fixed or stationary blockers (i.e., walls, pillars, etc.). In such cases, mounting, for instance, a reflectarray to steer an incoming beam in a fixed direction to another repeater or access point (AP) is sufficient. Similarly, in nanoscale applications, reflections from the chip surface can be preprogrammed to provide connectivity with different cores (Bedran et al., 2017). Removing the reconfigurability requirement of IRSs drastically simplifies their design, fabrication, and control. This is particularly true at higher operation frequencies, where common phase/delay control elements are not available (Wang et al., 2018). Towards this goal, in this paper, we design, build, and experimentally characterize a preprogrammed reflectarray that operates in the first absorption-defined window above \(1\,\mathrm{THz}\), i.e., between \(1\) and \(1.05\,\mathrm{THz}\). The proposed design consists of an array of metallic reflecting patches with delay-controlling metallic stubs of micrometric dimensions, fabricated to the specific design criteria (Sec. 2). We experimentally characterize the structure using two complementary approaches, namely, broadband terahertz time-domain spectroscopy (THz-TDS) and narrowband communication using data-bearing continuous-wave (CW) signals (Sec. 3). With the results at hand, we then outline the next steps and potential future directions (Sec. 4). ## 2. Design and Fabrication In this section, we detail the design of the individual metallic patch, as well as the resulting reflectarray. We then explain the steps involved in fabricating both. ### Reflectarray Design and Principle #### 2.1.1. Patch The fundamental radiating element of the reflectarray is a metallic patch. As per (Bedran et al., 2017), we utilized the cavity model to derive the required width \(W\) and the length \(L\) of the patch at a given design frequency \(f_{0}\): \[W=\frac{c}{2f_{0}\sqrt{\frac{\epsilon_{r}+1}{2}}}, \tag{1}\] \[L=\frac{c}{2f_{0}\sqrt{\epsilon_{\mathrm{eff}}}}-0.824h\left[\frac{(\epsilon_{\mathrm{eff}}+0.3)(\frac{W}{h}+0.264)}{(\epsilon_{\mathrm{eff}}-0.258)(\frac{W}{h}+0.8)}\right]. \tag{2}\] Here, \(h\) represents the thickness of the substrate, with \(\epsilon_{r}\) denoting the dielectric constant. As specified in (Bedran et al., 2017), \(h\) should be in the range of \(0.003\)-\(0.05\)\(\lambda_{0}\), where \(\lambda_{0}=c/f_{0}\). Due to the fringing effect at the edges of the patch, the effective dielectric constant \(\epsilon_{\mathrm{eff}}\) is given by (Bedran et al., 2017): \[\epsilon_{\mathrm{eff}}=\frac{\epsilon_{r}+1}{2}+\frac{\epsilon_{r}-1}{2}\left[1+12h/W\right]^{-1/2}. \tag{3}\] The patch is first designed in transmission mode, where the \(S_{11}\) parameter, or the reflection coefficient (Bedran et al., 2017), is utilized to ensure resonance at the design frequency. By the symmetry of EM operation, a good transmitter is also a good receiver, and thus, the patch is a good reflector as it can both receive and reradiate EM waves. Since the patch is vertically polarized, the design works for linearly polarized waves (Hasegawa et al., 2016).
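To make the patch sizing concrete, the following minimal Python sketch evaluates Eqs. (1)-(3). The design frequency is taken at the center of the target window, and the substrate thickness is the 2.3 \(\upmu\)m SU8 layer described in Sec. 2.2, which falls within the \(0.003\)-\(0.05\,\lambda_{0}\) guideline; the SU8 dielectric constant at terahertz frequencies is an assumed, literature-style value rather than one quoted in the text.

```python
import math

c = 299_792_458.0   # speed of light (m/s)
f0 = 1.025e12       # design frequency (Hz), centre of the 1-1.05 THz window
eps_r = 2.85        # assumed dielectric constant of SU8 at THz frequencies
h = 2.3e-6          # SU8 substrate thickness (m), from Sec. 2.2

# Eq. (1): patch width
W = c / (2 * f0 * math.sqrt((eps_r + 1) / 2))

# Eq. (3): effective dielectric constant including the fringing effect
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5

# Eq. (2): patch length, shortened by the fringing-field extension
L = c / (2 * f0 * math.sqrt(eps_eff)) - 0.824 * h * (
    (eps_eff + 0.3) * (W / h + 0.264) / ((eps_eff - 0.258) * (W / h + 0.8))
)

print(f"W = {W * 1e6:.1f} um, eps_eff = {eps_eff:.3f}, L = {L * 1e6:.1f} um")
```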
#### 2.1.2. Reflectarray Integration Following common reflectarray design principles, we designed the individual patches to be separated by a center-to-center distance of \(0.5\lambda_{0}\). Further, we recall from array theory that to direct a beam at an angle \(\theta\), the progressive phase delay \(\Phi_{RA}\) across the reflectarray elements should be: \[\Phi_{RA}=k_{0}(R_{i}-d\sin\theta), \tag{4}\] where \(k_{0}\) is the free-space wave vector and \(d\) is the distance between the elements. \(R_{i}\) is the additional phase that comes from the direction of incidence; in the case of normal incidence, this term vanishes, yielding the familiar \(\Phi_{RA}=-k_{0}d\sin\theta\) term. As the phase delay is relative, we derived a pattern that can be easily replicated across a large array. Namely, we chose to implement a \(\pi/2\) radians progressive phase shift, which wraps around \(2\pi\) every four elements. The resultant direction of steering is then 30\({}^{\circ}\) as per (4). The delay is implemented through a fixed-length delay stub along the resonant length of the patch. The stub length is calculated as per the phase constant. Namely, to effect a delay of \(\Phi\) radians, the length of the stub \(L_{s}\) should be \(L_{s}=\Phi/k\), where \(k\) is the wave vector within the substrate. Based on these principles, we designed the final reflectarray as shown in Fig. 1a, where the pattern can be seen to repeat every four elements.
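As a quick numerical check of the phasing rule, the sketch below recovers the steering angle from Eq. (4) with \(R_{i}=0\) and estimates a stub length for a \(\pi/2\) delay. The substrate effective index used for the stub estimate is an illustrative assumption (carried over from the patch calculation above), so the printed length is indicative only.

```python
import math

c = 299_792_458.0
f0 = 1.025e12
lam0 = c / f0                       # free-space wavelength (~292 um)
k0 = 2 * math.pi / lam0             # free-space wave vector
d = 0.5 * lam0                      # element spacing (Sec. 2.1.2)

phi = math.pi / 2                   # progressive phase per element
theta = math.asin(phi / (k0 * d))   # Eq. (4) with R_i = 0
print(f"steering angle = {math.degrees(theta):.1f} deg")   # -> 30.0 deg

# Stub length for a phi-radian delay, L_s = phi / k; the substrate effective
# index is an illustrative assumption, not a value quoted in the text
k = k0 * math.sqrt(2.75)
print(f"stub length for a pi/2 delay ~ {phi / k * 1e6:.1f} um")
```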
### Fabrication We fabricated the reflectarray on a substrate consisting of a 300 nm (20 nm Ti + 260 nm Ag + 20 nm Ti) metallic ground plane deposited atop a Si/SiO\({}_{2}\) wafer to ensure high reflectivity (Bohringer et al., 2016). A 2.3 \(\upmu\)m SU8 layer was spin-coated on the ground plane and the reflectarray designs were patterned atop the SU8 dielectric layer using conventional photolithography. The Si/SiO\({}_{2}\) substrate was cleaned by sonication in acetone, isopropyl alcohol, and deionized (DI) water for 5 min each, followed by N\({}_{2}\) blow dry and heating for 5 min at 150 \({}^{\circ}\)C. The substrate was then treated with O\({}_{2}\) plasma at 65 W for 120 s to remove organic contaminants. The cleaned substrate was then inserted into an electron beam evaporation system for deposition of the ground plane metals. The structure of the deposited metal (see Fig. 1b) was 20 nm Ti (for adhesion), 260 nm Ag (for reflection), and another 20 nm Ti adhesion layer. Photolithography was used to write the array pattern onto the substrate. For improved lift-off, LOR3B photoresist was first spin coated at 2500 rpm for 45 s, followed by a post-coat bake at 190 \({}^{\circ}\)C for 4 min. A second photoresist (Microposit S1813) was then spin coated at 5000 rpm for 45 s, followed by a post-coat bake at 115 \({}^{\circ}\)C for 1 min. The substrate was subsequently placed in contact with the designed photomask and exposed (Hg i-line, 350 W, 4.5 A) for 50 s. After photolithography, the photoresists were developed in Microposit MF-319 solution for 45 s followed by rinsing in DI water for 30 s. A similar 300 nm metal stack (20 nm Ti, 260 nm Ag, and 20 nm Au) was deposited to form the metal arrays. The topmost Au layer was deposited to prevent the Ag surface from oxidation. The sample was then left in Microposit Remover 1165 solution overnight at 70 \({}^{\circ}\)C. After lift-off, the sample was rinsed in isopropyl alcohol (IPA) and DI water for 5 min each, followed by N\({}_{2}\) blow dry. A photo of the resulting reflectarray is shown in Fig. 1c. Figure 1. Schematic and optical images of the reflectarray. The size of the patterned area in (c) is \(12.75\times 12.75\) mm. ## 3. Experimental Validation In this section, we validate the reflectarray working principle using two platforms, i.e., THz-TDS and a CW THz testbed. ### Terahertz Time-Domain Spectroscopy As a starting point, we used a THz-TDS platform (Advantest TAS750TS), which offers rapid, broadband measurements in reflection or transmission (see, e.g., (Bohringer et al., 2016)). A schematic of the measurement setup (Kraemer et al., 2016) is shown in Fig. 2. In this THz-TDS system, a femtosecond pulse from an infrared (IR) laser is incident on a photoconductive antenna, which generates a broadband THz pulse. After interacting with the sample, the THz pulse is detected at a second photoconductive antenna by mixing with a delayed IR pulse. The non-specular reflection by the IRS should direct the component of the broadband THz pulse at the resonance frequency away from the detector, resulting in a _decrease_ in reflectance. This can be clearly seen in the reflectance spectrum plotted in Fig. 3, where the blue curve shows a clear dip near 1 THz. Although designed for 1 THz, the response is centered at 0.9 THz; this shift is attributed to fabrication tolerance, as the stub lengths involved micrometer precision, and the photolithographic process involving complex lift-off had a tolerance of about 10% (2 \(\upmu\)m at the 22.5 \(\upmu\)m minimum stub length). The dashed red curve is the reflection of a uniform metal film of equal thickness, and both are normalized to the reflection from the bare substrate. The dip at 0.9 THz is eliminated when the sample is rotated by 90\({}^{\circ}\) (Fig. 3b), confirming that the response is indeed due to the element geometry and not some other factor. ### Terahertz Communication Testbed We verify the improvement in link quality when utilizing the reflectarray through the TeraNova testbed (TeraNova et al., 2018). #### 3.2.1. Setup The TeraNova testbed consists of a transmitter with a high-performance analog programmable signal generator (PSG) and an arbitrary waveform generator (AWG) up to 92 GSa/s from Keysight Technologies, an upconverter frontend, along with a directional high-gain horn antenna, encompassing true terahertz frequencies (1-1.05 THz). The PSG is used to generate the local oscillator (LO) signal, which is mixed with the intermediate frequency (IF) signal generated by the AWG, and finally upconverted to a higher radio frequency (RF) signal. The upconverter is manufactured by Virginia Diodes, Inc. (VDI) and consists of a frequency multiplier chain of \(\times 24\), a frequency mixer with a double sideband (DSB) conversion loss of 14 dB and an IF low-noise amplifier (LNA) with 1 dB of gain. The transmit power at RF before feeding the antenna is about \(-12\) dBm (60 \(\upmu\)W) and the horn antenna gain is 26 dBi. The testbed receiver has a similar design and is equipped with a high-performance digital storage oscilloscope (DSO) up to 160 GSa/s. The downconverter has the same architecture as the upconverter but is equipped with a high-gain IF LNA providing 12 dB of gain. Figure 4 depicts how the different transmitter and receiver components are connected. The 10 MHz reference cable is used to synchronize the transmitter and receiver PSGs and compensate for the carrier frequency and phase offsets. Figure 4. A block diagram depicting the interconnection between the various transmitter and receiver components of the TeraNova testbed. Figure 3. THz-TDS reflectance measurements of the reflectarray normalized to the substrate. Figure 2. Schematic of the THz-TDS measurement setup in reflection geometry.
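For context on how tight such links are, the following sketch combines the quoted transmit power and antenna gains with free-space path loss. The 1 m link distance is an assumption for illustration only, and molecular absorption, reflectarray, pointing, and conversion losses are all ignored, so the result is an optimistic upper bound.

```python
import math

c = 299_792_458.0
f = 1.025e12                  # carrier frequency (Hz), centre of the 1-1.05 THz window
d = 1.0                       # assumed link distance (m); not stated in the text
Pt_dbm = -12.0                # RF power before the antenna (Sec. 3.2.1)
Gt = Gr = 26.0                # horn antenna gains (dBi)

fspl_db = 20 * math.log10(4 * math.pi * d * f / c)   # free-space path loss
Pr_dbm = Pt_dbm + Gt + Gr - fspl_db                  # idealized received power
print(f"FSPL = {fspl_db:.1f} dB -> received power ~ {Pr_dbm:.1f} dBm")
```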
The digital signal processing (DSP) backend of the TeraNova testbed involves designing discrete-time pulse-shaped modulated waveforms in MATLAB and uploading them to the AWG. The AWG then generates the modulated IF signals, which are upconverted to RF by the VDI upconverters, transmitted over the air, received by the VDI downconverters, downconverted back to IF, and finally captured and stored via the DSO. Post-processing and DSP techniques can then be applied to the captured signals in MATLAB. Given this flexibility, we can experimentally evaluate any arbitrary modulation scheme or DSP technique. #### 3.2.2. Controlled Reflection Validation As depicted in Fig. 5, the transmitter is incident normal to the reflectarray, with the receiver placed in the direction of the controlled reflection (\(\theta=30^{\circ}\)). We also replaced the reflectarray with a metallic sheet to act as a benchmark. As a preliminary step, we measure the signal-to-noise ratio (SNR) when utilizing the fabricated reflectarray both in the case of the receiver being placed at the expected reflection of the signal as well as at the specular reflection. The SNR of a 1 GHz IF sinusoidal signal was found to be 32.9194 dB in the controlled reflection scenario (\(\theta=30^{\circ}\)) and 1.3899 dB in the specular reflection scenario (i.e., equal incident and reflected angles from the normal). This clearly indicates that the reflectarray is working as intended and directing the radiation in a non-specular path, potentially enabling non-line-of-sight (NLoS) links. To further verify broadband operation, we upconvert and transmit an IF signal comprising the sum of five sinusoidal signals at 1, 2, 3, 4, and 5 GHz. These were successfully received with a high SNR, as shown in Fig. 6, in the case of the reflectarray, but the signals could not be recovered when replacing the reflectarray with a metallic sheet. Next, we transmit a QPSK modulated data signal with a passband bandwidth of 500 MHz, comprising 200 pilot bits and 2000 data bits. The constellation diagram for the transmitted, received, and equalized signals is shown in Fig. 7. With the reflectarray, the effective received data rate was 495.05 Mbps, with a measured bit error rate (BER) of 0.0335 without any error correction. This BER is very close to the theoretical additive white Gaussian noise (AWGN) channel BER given the received SNR. The measured BER when the reflectarray was replaced by a metallic sheet was 0.499, the worst possible BER. Moreover, the reflectarray can be used for multi-Gbps data rates and higher modulation orders over multi-gigahertz-wide bandwidths, but it is very challenging to close the link with sufficient SNR given the very low available transmit power at true terahertz frequencies. Nonetheless, the results verify the response of the reflectarray and the potential for establishing highly focused and even NLoS links at THz frequencies. Figure 5. Experimental validation setup of the fabricated metallic reflectarray using the 1–1.05 THz up and downconverter frontends depicting a controlled reflection scenario at \(\theta=30^{\circ}\). Figure 6. Power spectrum (50 \(\Omega\)) at IF for the received reflected signal at \(\theta=30^{\circ}\) with normal incidence.
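As a rough cross-check of the AWGN claim above, the sketch below evaluates the textbook QPSK-over-AWGN bit error rate. The \(E_b/N_0\) values are illustrative, since the per-bit SNR of the measurement is not quoted in the text; around 2.2 dB, the theoretical BER is close to the measured 0.0335.

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

# Theoretical QPSK bit error rate over AWGN: Pb = Q(sqrt(2*Eb/N0))
for ebn0_db in (1.0, 2.2, 4.0):          # illustrative Eb/N0 values (dB)
    ebn0 = 10 ** (ebn0_db / 10)
    ber = qfunc(math.sqrt(2 * ebn0))
    print(f"Eb/N0 = {ebn0_db:3.1f} dB -> BER = {ber:.4f}")   # ~0.034 at 2.2 dB
```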
## 4. Conclusion and Future Works In this paper, we present the first metallic reflectarray achieving NLoS communications with information-carrying signals at true terahertz frequencies. The results validate the reflectarray working principle, showing effective communication despite the very low available transmit power. In the future, we will work towards increasing the supported bandwidth and communication distance by improving the transmitter power and receiver sensitivity. We will also explore the possibility of developing tunable structures by replacing the fixed-length stubs with voltage-controlled delay lines (Kang et al., 2019). ## Acknowledgments This work was funded by the Air Force Research Laboratory award FA8750-20-1-0500 and the National Science Foundation awards CNS-1955004 and CNS-2011411.
2309.11866
Light-to-heat conversion of optically trapped hot Brownian particles
Anisotropic hybrid nanostructures stand out as promising therapeutic agents in photothermal conversion-based treatments. Accordingly, understanding the local heat generation mediated by the light-to-heat conversion of absorbing multicomponent nanoparticles at the single-particle level has become a subject of broad and current interest. Nonetheless, evaluating reliable temperature profiles around a single trapped nanoparticle is challenging from the experimental, computational, and fundamental viewpoints alike. Committed to filling this gap, the heat generation of an anisotropic hybrid nanostructure is explored by means of two different experimental approaches, from which the local temperature is measured in a direct or indirect way, all in the context of the hot Brownian motion theory. The results are compared with analytical predictions supported by the numerical computation of the wavelength-dependent absorption efficiencies in the discrete dipole approximation for scattering calculations, which has been extended here to inhomogeneous nanostructures. Overall, we provide a consistent and comprehensive view of the heat generation in optical traps of highly absorbing particles from the viewpoint of the hot Brownian motion theory.
Elisa Ortiz-Rivero, Sergio Orozco-Barrera, Hirak Chatterjee, Carlos D. Gonzalez-Gomez, Carlos Caro, Maria L. Garcia-Martin, Patricia Haro-Gonzalez, Raul A. Rica, Francisco Gamez
2023-09-21T08:10:46Z
http://arxiv.org/abs/2309.11866v1
# Light-to-heat conversion of optically trapped hot Brownian particles ###### Abstract Anisotropic hybrid nanostructures stand out as promising therapeutic agents in photothermal conversion-based treatments. Accordingly, understanding the local heat generation mediated by the light-to-heat conversion of absorbing multicomponent nanoparticles at the single-particle level has become a subject of broad and current interest. Nonetheless, evaluating reliable temperature profiles around a single trapped nanoparticle is challenging from the experimental, computational, and fundamental viewpoints alike. Committed to filling this gap, the heat generation of an anisotropic hybrid nanostructure is explored by means of two different experimental approaches, from which the local temperature is measured in a direct or indirect way, all in the context of the hot Brownian motion theory. The results are compared with analytical predictions supported by the numerical computation of the wavelength-dependent absorption efficiencies in the discrete dipole approximation for scattering calculations, which has been extended here to inhomogeneous nanostructures. Overall, we provide a consistent and comprehensive view of the heat generation in optical traps of highly absorbing particles from the viewpoint of the hot Brownian motion theory. _Keywords_. Optical tweezers, hybrid nanostructures, heat generation, nanothermometry, hot Brownian motion. ## I Introduction Multicomponent nanoparticles (MCNPs) can be defined as single structures that combine the properties of at least two different materials at the nanoscale [1; 2; 3]. A myriad of applications of à la carte MCNPs have occupied the limelight of different research fields throughout the last years because of the virtually endless material combinations that confer specific features designed for each potential purpose [4]. For instance, their relevance in biomedical applications has taken off due to their ability to act as dual agents for multimodal contrast and/or for combined/synergistic therapies [5; 6]. In this context, materials engineering has developed specific routes for customizing a wide plethora of on-demand nanocomposites whose shape, functionalization and spectroscopic attributes of the surface plasmon resonance (SPR) provide distinctive physical and biocompatibility features that are key for real-world applications in nanomedicine [7; 8]. The so-called inductive synthesis has emerged in this field as a new and intriguing branch of nanoscience that fosters the adaptation of the morphology of nanoparticles to achieve specific biological responses [9]. In particular, MCNPs comprising a light-absorbing material (_e.g._, gold) and magnetic moieties can be conceived as agents for hyperthermia treatments by dint of their efficient heat-conversion effect under both alternating magnetic fields and near-infrared light [10; 11; 12; 13]. It then becomes apparent that measuring temperatures at the nanoscale and understanding the photothermal conversion of light-absorbing hybrid nanoparticles at the single-particle level have become a central issue in applied physics and materials engineering. A captivating approach to manipulate, control and measure properties of isolated nanostructures is embodied by the continuously upgraded technology of optical tweezers [14]. Even if challenging, trapping absorbing nanoparticles, in particular those made of noble metals that feature plasmonic resonances, has been recently optimized [15; 16; 17; 18].
Particularly, optical tweezers paved the way towards the microscopic comprehension of heat generation by isolated nanoparticles by monitoring changes in the properties of the heating source [19; 20] or its surroundings [21]. Some previous works were devoted to evaluating the local temperature in optically trapped nanoparticles by means of viscosity variations of the surrounding media upon laser heating [22; 23; 24] or by the indirect monitoring of the temperature-dependent emission of a nanoprobe [25]. In the latter case, the possible influence of the neighbouring nanothermometers on the absorption, the center-of-mass motion and the light-to-heat conversion efficiency of the trapped particle is a question that should be considered. A word of caution is in order, though: temperature is a many-particle property, and therefore an ill-defined concept on statistical-mechanics grounds for single-particle systems far from equilibrium, jeopardising the validity of traditional Brownian motion prescriptions. In consequence, another point of interest is whether the measured "temperature" resembles the classic interpretation of surface or internal temperatures, or if it is just a value that averages local thermal fluctuations of the solvent bath within the trap volume, enabling the description of the particle dynamics within the so-called hot Brownian motion (HBM) theory [26; 27; 28; 29]. To address these points, experimental and theoretical results on the heat generation and local temperature of a light-absorbing multicomponent nanoparticle under optical trapping conditions are presented here. ## II Results and discussion ### Characterization of the MCNPs #### ii.1.1 Physicochemical characterization In this work, we performed thermometric measurements on hybrid nanoflowers (NFs) synthesized by the multi-nucleation of magnetite globular petals on spherical gold seeds, as reported in reference [30] and detailed in the Experimental Section. Transmission electron microscopy (TEM) was employed to characterize the MCNP morphology and the size distribution of the moieties of each component. The characteristic shapes are presented in the micrograph of Figure 1(a). The diameters of the two inorganic counterparts, the Au core and the Fe\({}_{3}\)O\({}_{4}\) petals, were both \(\sim\)12 nm, as shown in the size histograms in Figure S1. Overall, and assuming an _effective_ spherical shape, the TEM diameter of the whole nanoflower was \(\sim\)33 nm. After a PEGylation procedure, the colloidal stability of the aqueous suspension was confirmed by both the constancy of the hydrodynamic diameter (\(\sim\)58 nm) for one month and the negative value of the nanoparticle \(\zeta\)-potential of -14 mV that, according to DLVO theory, was expected to lead to stable suspensions [31]. The extinction spectra of a diluted suspension, shown in Figure 1(b), presented a broad band with a maximum extinction wavelength at \(\sim\)535 nm on account of the SPR of the gold core perturbed by the interaction with the magnetite globules. The light extinction increases in the NIR from \(\sim\)800 nm onwards. This behavior has been reported before in both experiments and Mie calculations and is ascribed to a charge-transfer absorption band in the magnetic moiety [32]. The long-tail extinction of the particles in the NIR covers both the first and second biological windows [33], which, together with their aqueous stability, enables their fundamental optical exploration as hyperthermia agents [34].
From a purely optical standpoint, these nanostructures combine the penetration depth and absorption of magnetite with the reflecting, heating and trapping advantages of gold. #### ii.1.2 Evaluation of the absorption cross section from DDA simulations The synthesized nanoflowers were simulated using parametric CAD modelling. To define the nanoflower geometry, the petals have been modelled using a piriform curve, which is parameterized by a single parameter \(\beta\) (see Figure S2 in the Supplementary Material). Sweeping the value of \(\beta\) from 1.5 to 6 with a 0.5 step size, we generated the profile curves of the petals, which were revolved in three dimensions to obtain their shapes, as shown in Figure S2. The flower shape has been conceived by arranging the petals around the core in both planar triangular and tetrahedral arrays for three selected petal shapes with \(\beta\) values 1.5, 2.5 and 5. Since the rise in \(\beta\) leads to a concomitant increase in the height of the petals, a scaling factor has been introduced to keep the size ratio of the core and petal within the limits of the experimental size range shown by the histogram in Figure S1. The diameters of the gold core and the magnetite caps are set to 10 nm and 11.5 nm, respectively. Corresponding models for the triangular and tetrahedral arrays are shown in Figure 1(c): models (T\(n\)) and (t\(n\)) denote the tetrahedral and triangular nanoflowers for \(\beta=1.5\), \(2.5\) and \(5\) (\(n=1,2,3\), respectively). The compositional assignment of the models for the particles synthesized in the present work has been implemented using a home-built Python code [35]. In view of the symmetric structure of the hybrid nanoflower, the code superposes the magnetite petals with the gold core seeded inside. The interface between the two dielectrics has been set to the dipole spacing of the lattice used for the scattering simulation within the discrete dipole approximation (DDA) for the gold core and magnetite petals. The scattering simulation has been performed using the DDSCAT code to calculate the scattering, absorption and extinction cross-sections at different wavelengths, with corresponding near-field calculations within \(\pm 2\) nm of the gold core along both the \(x\)- and \(y\)-axes [36]. The result illustrates a strong electromagnetic field appearing at the interface between the gold core and magnetite petals, as shown in Figure S3 for model (T3). The appearance of a strong electric field indicates strong confinement of hot electrons at the dielectric boundary, leading to processes like hot electron transfer or plasmon-phonon coupling, among others, which could open up a future prospect of study. Finally, the calculated near-field profiles along the \(x\)- and \(y\)-axes are similar in both directions. Figure 1: (a) TEM image of the synthesized nanoflowers. (b) Extinction spectrum of a diluted suspension of the nanoflowers. The inset represents a schematic model of the composition of the nanoflowers.
(c) Theoretical models for the calculation of the scattering profiles of the nanoflowers: models of the whole nanoflowers with 4 (left column) and 3 (right column) petals, for \(\beta=1.5\) (T1, t1), \(\beta=2.5\) (T2, t2), and \(\beta=5\) (T3, t3). (d) Extinction efficiencies for all the structures calculated with DDA. The DDA-calculated extinction efficiencies for the six models are shown in Figure 1(d). The patterns show a sharp peak at around 500 nm and a shoulder tail around 1100 nm, reflecting the inclusion of both gold and magnetite in the nanoflower, in agreement with the experimental extinction spectra in Fig. 1(b). In order to relate the simulation information with the experiments performed at 1064 and 808 nm discussed below, we evaluated the average absorption cross-section, \(\langle\sigma_{abs}\rangle\), from the simulated absorption efficiency values obtained for the different \(i\) structures, \(q_{abs}^{i}\), as \(\langle\sigma_{abs}\rangle=\left\langle q_{abs}^{i}\right\rangle\pi a_{eff}^{2}\), where \(a_{eff}\) is the effective radius of the scatterers [37]. This information is detailed in Table 1. This approach accounts for the sample size and shape heterogeneity in an _effective_ way. ### Nanothermometry of trapped MCNPs #### ii.2.1 Corner frequency-based nanothermometry The first set of optical trapping and nanothermometry experiments was performed with a commercial optical tweezers setup described in the Experimental section and schematically represented in Figure 2(a). Briefly, a single particle is trapped close to the beam waist of a tightly focused laser beam (\(\lambda=1064\) nm). The forward-scattered light carries information about the Brownian fluctuations of the particle around the trapping position, and it is collected and guided to a quadrant photodetector (QPD) that provides a voltage signal proportional to such fluctuations. Typically, that signal is analysed in terms of the power spectral density (PSD), _i.e._, the Fourier transform of its autocorrelation function [14]. For an overdamped system where a particle is in equilibrium with a thermal bath at absolute temperature \(T\) and trapped in a parabolic potential well, the PSD depends on the frequency, \(f\), following a Lorentzian function: \[\text{PSD}(f)=\frac{D}{2\pi^{2}(f^{2}+f_{c}^{2})} \tag{1}\] characterized by the diffusion coefficient of the particle \(D=k_{B}T/\gamma\) and the corner frequency of the particle in the trap \(f_{c}=\kappa/(2\pi\gamma)\), where \(k_{B}\) is the Boltzmann constant, and \(\gamma=6\pi\eta R_{H}\) is the Stokes friction coefficient for a sphere with hydrodynamic radius \(R_{H}\) immersed in a fluid of viscosity \(\eta\). Finally, \(f_{c}\) is proportional to the stiffness of the trap, \(\kappa\), which in turn is proportional to the power of the trapping laser, \(P\). Therefore, one typically finds \(f_{c}\propto P\) in the case of non-absorbing particles. In our case, since the temperature of the liquid is unknown due to light absorption by the NPs, the standard calibration of displacement based on the PSD cannot be used [14]. However, we can evaluate the effects of heating _via_ \(f_{c}\), which can be determined without calibration since its measurement only depends on the internal clock of the electronic system. Therefore, the PSD of the Brownian fluctuations is fitted to a Lorentzian function from which \(f_{c}\) can be evaluated (see Figure 2(b)).
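A minimal sketch of this fitting step is given below. The corner frequency and diffusion coefficient used to synthesize the periodogram are placeholders rather than measured values, and the exponential scatter mimics the statistics of single-periodogram estimates around the true PSD.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, D, fc):
    # Eq. (1): PSD of an overdamped bead in a parabolic trap
    return D / (2 * np.pi**2 * (f**2 + fc**2))

rng = np.random.default_rng(0)
f = np.linspace(10, 20_000, 4_000)     # frequency grid (Hz), below Nyquist for fs = 50 kHz
fc_true, D_true = 1_200.0, 1e-9        # illustrative values, not measured ones

# Periodogram estimates scatter exponentially around the true PSD
psd = lorentzian(f, D_true, fc_true) * rng.exponential(1.0, f.size)

(D_fit, fc_fit), _ = curve_fit(lorentzian, f, psd, p0=[1e-9, 500.0])
print(f"fitted corner frequency fc = {abs(fc_fit):.0f} Hz")   # ~1200 Hz
```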
The laser power-to-corner frequency _ratio_, \(P/f_{c}\), should be proportional to the solvent viscosity, \(\eta(T)\), which depends on the temperature \(T\) in a nontrivial but well-known way. Accepting that the temperature can be linearly related to the laser power in the trap \(P\) as \(T=T_{0}+BP\) [22; 38], where \(T_{0}\) is the chamber temperature before laser heating, and using a Vogel-Fulcher-Tammann (VFT) dependence of the water viscosity on temperature from Ref. [39], the experimental data are non-linearly fitted to the function: \[\frac{P}{f_{c}}=C\eta(T_{0}+BP), \tag{2}\] where \(B\) and \(C\) are fitting parameters. After this procedure, the experimental results are shown in Fig. 2(c) as violet symbols, together with the fitting curve (R\({}^{2}\)=0.92, \(\chi_{\nu}^{2}\)=0.34). The fitted heating coefficient, \(B\), was 368 \(\pm\) 51 K\(\cdot\)W\({}^{-1}\), which is of the same order of magnitude as those obtained for different gold nanostructures in the literature [19; 40]. From this value, a temperature increment for the liquid around the particle can be obtained which can be as high as \(\Delta T\simeq 50\) K, as shown in Fig. 2(d). However, the meaning of this temperature is not clear, since the trapped particle is hotter than the surrounding liquid, and a gradient of temperature is expected to develop [26], which can even be complicated by anomalies in the structure of water that may appear in this temperature range [41]. The temperature gradient, together with a concomitant viscosity one, affects the Brownian dynamics in the trap, whose evaluation can help us interpret the obtained results in a more reliable way. Hence, a system composed of a particle that is hotter than the environment due to a continuous energy input reaches a non-equilibrium steady state in which the surface temperature of the particle, \(T_{s}\), attains a constant value higher than that of the surrounding liquid, and a temperature gradient close to the particle relaxes this value towards that of the thermal bath far from it. The theory of HBM shows that the dynamics of a particle in such a situation can be described by an _effective_ diffusion coefficient \(D_{\rm HBM}=k_{B}T_{\rm HBM}/(6\pi\eta_{\rm HBM}R_{H})\) given by an effective temperature \(T_{\rm HBM}\) and an effective viscosity \(\eta_{\rm HBM}\) that would return the measured diffusion coefficient of the very same particle placed in an equilibrium bath with those properties. Therefore, we apply the HBM theory to account for the out-of-equilibrium nature of the system by exploiting Equation 2 but employing the expression for \(\eta_{\rm HBM}\) from Ref. [26] instead (Eq. 8 in that reference). \begin{table} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{\(\lambda\)= 1064 nm} & \multicolumn{2}{c}{\(\lambda\)= 808 nm} \\ \cline{2-5} & q\({}_{abs}\cdot 10^{2}\) & \(\sigma_{abs}\cdot 10^{6}(\mu\text{m}^{2})\) & q\({}_{abs}\cdot 10^{2}\) & \(\sigma_{abs}\cdot 10^{6}(\mu\text{m}^{2})\) \\ \hline T1 & 3.02 & 9.22 & 1.66 & 5.09 \\ t1 & 2.74 & 8.93 & 1.48 & 4.82 \\ T2 & 3.40 & 11.1 & 1.93 & 6.30 \\ t2 & 3.14 & 11.5 & 1.79 & 6.58 \\ T3 & 3.66 & 12.2 & 2.16 & 7.21 \\ t3 & 3.38 & 13.5 & 1.95 & 7.73 \\ \hline \end{tabular} \end{table} Table 1: Calculated values of the absorption efficiencies and absorption cross-sections of the selected particle models at the wavelengths of interest.
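Returning to the fit of Eq. (2), the sketch below reproduces the fitting strategy on synthetic data. The VFT-type parameterization of the water viscosity is a common literature form, assumed here because the coefficients of Ref. [39] are not quoted in the text; the chamber temperature is likewise an assumed value.

```python
import numpy as np
from scipy.optimize import curve_fit

def eta_water(T):
    # Empirical VFT-type viscosity of water (Pa*s); a common literature
    # parameterization, assumed since Ref. [39]'s coefficients are not quoted
    return 2.414e-5 * 10 ** (247.8 / (T - 140.0))

T0 = 295.0  # assumed chamber temperature before laser heating (K)

def model(P, B, C):
    # Eq. (2): P/fc = C * eta(T0 + B*P)
    return C * eta_water(T0 + B * P)

# Synthetic (P, P/fc) data standing in for the QPD measurements
P = np.linspace(0.02, 0.17, 10)                                   # laser power (W)
data = model(P, 368.0, 1.0)
data *= 1 + 0.03 * np.random.default_rng(1).standard_normal(P.size)

(B_fit, C_fit), _ = curve_fit(model, P, data, p0=[300.0, 1.0])
print(f"heating coefficient B = {B_fit:.0f} K/W")                 # ~368 K/W
```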
Since \(\eta_{\rm HBM}\) incorporates the driving force responsible for the heat flux, the solvent temperature at the hydrodynamic boundary, or surface temperature, \(T_{s}\), at each laser power is obtained by fitting the data in Figure 2(c). In a first approximation, the relation \(T_{s}=T_{0}+B_{s}P\) should also hold, so in this way we obtain the equivalent heating coefficient at the surface, \(B_{s}=830\pm 110\) K\(\cdot\)W\({}^{-1}\), according to the fitting (R\({}^{2}\)=0.92, \(\chi^{2}_{\nu}\)=0.41). Notably, values of \(T_{s}\) above 100 \({}^{\circ}\)C are observed at moderate and high laser powers without any manifestation of cavitation during the experiments. This effect has been previously reported and explained from different perspectives [42; 43; 22], of which the most plausible one under our experimental conditions is the formation of a surface tension-induced metastable and locally stretched fluid that prevents the formation of nano/microbubbles [22]. In a second step, the _effective_ HBM temperature can be obtained following Ref. [26], using \(\Delta T_{\rm HBM}\simeq\Delta T_{s}/2-[1-\ln(\eta_{0}/\eta_{\infty})]\Delta T_{s}^{2}/(24T_{0})\), where \(\eta_{0}\) is the viscosity in the absence of heating, _i.e._, at room temperature, and \(\eta_{\infty}=0.0242631\) \({\rm mPa\cdot s}\). The new data are systematically higher (\(\sim\)25 % at the highest laser power) than the _Brownian_ temperature increments derived above. Figure 2: (a) Schematic representation of the inverted experimental setup for corner frequency-based nanothermometry. See sections Methods and B of Results and discussion for details. (b) Variation of the PSD with \(P\). The black, dark grey and light grey curves stand for the PSD of a trapped MCNP at laser powers of 74, 111 and 167 mW, respectively. The Lorentzian fits required to evaluate \(f_{c}\) are shown to exemplify the experimental procedure used to estimate \(\Delta T\) upon laser heating. See the main text for details. (c) Experimental values of the laser power-to-corner frequency _ratio_ versus laser power. Error bars stand for the propagated error calculated from the standard deviation values evaluated over ten replicate measurements. The dashed lines denote the fit of the experimental data to Equation 2 (violet line) and to the HBM theory (red line), respectively. (d) Temperature increments as a function of the laser power. Symbols stand for the experimental data obtained from Equation 2 (violet circles), the surface temperature from the HBM theory (red circles) and the _effective_ HBM theory (yellow circles), respectively. The continuous yellow and blue lines denote the results from the fittings, _i.e._, \(BP\) for \(\Delta T\) as obtained from the VFT expression for \(\eta\) (blue) and \(B_{s}P\) for \(\Delta T_{s}\) (yellow). The red continuous line results from the analytical value of \(\Delta T_{HBM}\) evaluated from \(T_{0}\) and \(T_{s}\) following the prescriptions in Ref. [26]. Results of \(\Delta\left\langle T_{av}\right\rangle\) and \(\Delta T_{s}\) calculated with the Laplace equation are represented as dashed lines following the color code of the full symbols. See the main text for details. (e) Temperature profiles obtained from the _steady-state_ Laplace equation at different values of the laser power. The absorbed power was evaluated under the strong confinement approximation according to Equation 3.
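This second step amounts to a one-line formula. The sketch below evaluates it using the fitted \(B_{s}\) quoted above, with an assumed room-temperature viscosity \(\eta_{0}\) and chamber temperature \(T_{0}\).

```python
import math

def dT_hbm(dTs, T0, eta0=9.5e-4, eta_inf=2.42631e-5):
    # Effective HBM increment from the surface increment (Ref. [26]):
    # dT_HBM ~ dTs/2 - [1 - ln(eta0/eta_inf)] * dTs**2 / (24*T0)
    return dTs / 2 - (1 - math.log(eta0 / eta_inf)) * dTs**2 / (24 * T0)

T0, Bs = 295.0, 830.0          # assumed chamber temperature (K), fitted surface rate (K/W)
for P in (0.05, 0.10, 0.15):   # laser powers (W)
    dTs = Bs * P               # surface increment, from Ts = T0 + Bs*P
    print(f"P = {P*1e3:3.0f} mW: dTs = {dTs:5.1f} K, dT_HBM = {dT_hbm(dTs, T0):5.1f} K")
```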
In other words, the effective temperature governing the dynamics of the trapped particle is higher than what is expected under the presumption that the dynamics follow classical Brownian motion, this effect being more apparent as the laser power increases. This happens because of the viscosity and temperature gradients across the nanoparticle boundary and its surroundings. This observation may, at least partly, explain the discrepancies observed for different thermometric methods in Ref. [22], where the effects derived from the HBM were ignored. The confirmation that, under strong heating conditions, assuming a constant temperature leads to an underestimation of the center-of-mass motion of the particle sets traditional thermometric estimations and interpretations aside when manipulating efficient light-to-heat-conversion nanomaterials. A quick look at the so-obtained values of the temperature increments with \(P\) allows one to observe that the expected linear trend is not fulfilled above intermediate laser powers. In principle, this outcome may suggest non-Fourier _steady-state_ behavior at energy fluxes in the trap of the order of \(10^{12}\) W\(\cdot\)m\({}^{-2}\). Monte Carlo simulations of heated nanoparticles have reported positive deviations of the surface temperature from the predictions of the heat-transport constitutive equation, which are ascribed to non-continuous thermal conductivity and a mismatch between the solvent and the solid nanoparticle surface [44]. Other similar effects have been experimentally observed with trapped nanoparticles, reporting a higher-than-expected variance of the instantaneous velocity of trapped Brownian particles due to a phase change in the water structure [41]. Negative departures from linearity have also been previously observed in some experimental setups [45]. However, we are not aware of theoretical treatments describing these phenomena under optical trapping conditions. Furthermore, the temperature-dependent hydration/dehydration of the polymeric PEG coating [46] might affect the local mass transport and heat exchange between the solvent layers, and water anomalies also take place within the accessed temperature range [47]. Whether the captured experimental anomaly is due to any of these contemplated or unreported effects, or to a numerical artifact provoked by the strong variation of the viscosity with temperature, which is more dramatically reflected in the high-energy regime, remains an open question. Additional insights into the experimental data can be provided by manipulating the analytical temperature profiles, which were assessed as follows. The average absorption cross-section determined above _via_ the discrete dipole scattering calculations is employed to evaluate the absorbed power, which is estimated under the proviso of strong confinement as [48]: \[P_{abs}=\frac{\left\langle\sigma_{abs}\right\rangle}{\frac{1}{2}\pi W_{0}^{2}}P, \tag{3}\] where \(W_{0}=\lambda/(\pi\,\mathrm{NA})\) is the waist radius and NA is the numerical aperture of the objective.
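Under these assumptions, Eq. (3) reduces to a simple scaling, sketched below with the objective parameters of the 1064 nm setup (Methods) and a mid-range cross-section from Table 1; the resulting absorbed powers fall in the 0-20 \(\mu\)W range discussed later.

```python
import math

lam, NA = 1064e-9, 1.2                 # trapping wavelength (m) and objective NA
W0 = lam / (math.pi * NA)              # beam-waist radius under strong confinement

sigma_abs = 11e-18                     # ~11e-6 um^2, mid-range of Table 1 at 1064 nm (m^2)
for P in (0.05, 0.10, 0.15):           # laser powers (W)
    P_abs = sigma_abs / (0.5 * math.pi * W0**2) * P   # Eq. (3)
    print(f"P = {P*1e3:3.0f} mW -> P_abs = {P_abs*1e6:4.1f} uW")
```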
The computed absorbed power was set as a seed for evaluating the temperature profiles, \(T(r)\), outside the nanoparticle surface, \(s\), from the analytical solution of the Fourier equation around an _effective_ spherical heating source at rest in a water bath, _i.e._, the _steady-state_ Laplace equation: \[\Delta T(r)=\frac{P_{abs}}{4\pi\kappa_{w}r}\ \ \text{for}\ r\geq s, \tag{4}\] where \(\kappa_{w}\)=0.6 W\(\cdot\)K\({}^{-1}\cdot\)m\({}^{-1}\) is the thermal conductivity of water and \(r\) the radial distance from the particle center. Some examples of these profiles are shown in Figure 2(e). Since the experimental value of \(B\) implicitly considers the solvent heating, for comparative purposes this contribution was added _ad hoc_ according to the rate \(B_{w}\sim\)12 K\(\cdot\)W\({}^{-1}\) obtained in Ref. [49] for water at 1064 nm. The average temperature increment up to a distance \(R=r-s\) from the surface, \(\Delta\left\langle T_{av}(r)\right\rangle\), is then analytically obtained from the weighted volume integral: \[\Delta\left\langle T_{av}(r)\right\rangle=\frac{\int_{s}^{r}4\pi r^{\prime 2}\Delta T(r^{\prime})\,dr^{\prime}}{\frac{4}{3}\pi(r^{3}-s^{3})}+B_{w}P. \tag{5}\] Theoretical results obtained from Equation 5 are in excellent agreement with the experimental data when \(\Delta\left\langle T_{av}\right\rangle\) is evaluated at a distance \(r\) of 34 nm, just 5 nm away from the hydrodynamic radius, as shown in Figure 2(d). One may interpret the radial distance at which the _steady-state_ \(\Delta\left\langle T_{av}\right\rangle\) evaluated from the Laplace equation is equivalent to \(\Delta T_{\text{HBM}}\) as the point where the average kinetic energy of the molecules in the solvent bath stops being influenced by the HBM. This distance roughly coincides with the thickness of \(\sim\)70 monolayers of water, from which one may deduce the solvent volume required to subdue the strong effect of temperature gradients by thermalization. Moreover, assuming that the heat power needed to increase the bath temperature to the effective \(T_{\text{HBM}}\) is wholly absorbed within this solvent volume, we estimated that the time required to reach the HBM effective temperature is \(\sim\)200 ns.
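Both Eq. (4) and the volume average in Eq. (5) have closed forms, sketched below. The effective radius is taken as half the \(\sim\)58 nm hydrodynamic diameter, and the absorbed power is an illustrative mid-range value rather than a measured one.

```python
import math

kappa_w = 0.6          # thermal conductivity of water, W/(K*m)
s = 29e-9              # effective radius (m), half the ~58 nm hydrodynamic diameter
P_abs = 10e-6          # absorbed power (W), an illustrative mid-range value

def dT(r):
    # Eq. (4): steady-state Laplace profile outside the particle surface
    return P_abs / (4 * math.pi * kappa_w * r)

def dT_avg(r, Bw=12.0, P=0.1):
    # Eq. (5); the integral of 4*pi*r'^2 * dT(r') from s to r has the closed
    # form P_abs*(r**2 - s**2)/(2*kappa_w); Bw*P adds the solvent heating
    integral = P_abs * (r**2 - s**2) / (2 * kappa_w)
    return integral / ((4 / 3) * math.pi * (r**3 - s**3)) + Bw * P

print(f"dT at the surface   : {dT(s):.0f} K")
print(f"<dT> out to r = 34 nm: {dT_avg(34e-9):.0f} K")   # compare with Fig. 2(d)
```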
#### ii.2.2 Emission-based nanothermometry The second set of experiments was accomplished by monitoring the NIR emission of a dispersion of nanothermometers in which the NFs are embedded. The schematic representation of the trapping and photoluminescence (PL) detection setup is sketched in Fig. 3(a) and detailed in the Experimental Section. In previously published results, it was shown that the intensity of the NIR emission at \(\sim\)1200 nm of Ag\({}_{2}\)S nanocrystals (NCs) can act as a reversible nanothermometer [50; 51]; such NCs are used here for the first time for nanothermometry purposes in optical trapping. Briefly, we employed a commercial suspension of Ag\({}_{2}\)S-PEG-COOH nanothermometers that, prior to the trapping experiments, were calibrated following a procedure in which the temperature dependence of the emission of an NCs dispersion is taken as the transduction signal. In Figure 3(b) this variation is presented. The changes in the area of the emission band with temperature were used to determine the thermal sensitivity of the Ag\({}_{2}\)S NCs, which followed the linear relation A(\(T\))/A(\(T_{0}\))\(=1.377-0.0231\,T\) (R\({}^{2}\) = 0.96) as shown in Fig. 3(c), where A(\(T\))/A(\(T_{0}\)) is the _ratio_ of the spectral area at temperature \(T\), A(\(T\)), to that at a reference temperature \(T_{0}\), A(\(T_{0}\)). From this calibration, it can be concluded that the use of Ag\({}_{2}\)S NCs for nanothermometry is restricted to temperatures up to \(\sim\)60 \({}^{\circ}\)C. Then, an aqueous colloidal suspension of NPs and Ag\({}_{2}\)S NCs was placed in a homemade single-beam optical trapping setup, described in the Experimental Section, that uses an 808 nm linearly polarized single-mode fiber-coupled laser diode. The laser was used both for trapping the nanoflowers and for exciting the NCs in their closer surroundings. The NCs emission was monitored with a NIR spectrometer fiber-coupled to the system. The normalized photoluminescence area of the NCs dispersion was measured as a function of the laser power (808 nm) in the trapping setup, as shown in Fig. 3(d). The inset shows an example of the raw detected spectra at the lowest and highest laser powers. Finally, the spectral data were converted to temperature data using the calibration line obtained for the nanothermometer suspension. The final results are collected in Fig. 3(e) as full symbols. It must be stressed that our measurement through the emission changes in the NIR spectra upon heating reflects the average temperature of the Ag\({}_{2}\)S nanocrystals contained in a liquid volume around the trapped particle, \(\left\langle T_{in}\right\rangle\), and this is not a direct observable evincing the trapped nanoparticle dynamics. Particularly, the experimental values of \(\Delta\left\langle T_{in}\right\rangle\) reflect real differences in the average temperature around the heated nanoparticle. As shown as a dashed blue line in Figure 3(e), we found that the values of \(\Delta\left\langle T_{av}(r)\right\rangle\) obtained from the Laplace equation at \(r=34\) nm, including the heating contribution of the solvent and the nanothermometers, \(B_{nt}\) (see Methods), are in excellent agreement with the \(\Delta\left\langle T_{in}\right\rangle\) data. Therefore, the question that arises after these results is whether the radial distance required to obtain an average temperature compatible with the spectroscopic data is representative of the effective solvent boundary affected by the HBM. Figure 3: (a) Scheme of the trapping setup for emission-based nanothermometry. (b) Variation of the photoluminescence spectra of a Ag\({}_{2}\)S NCs suspension with the temperature. (c) Calibration curve of the normalized photoluminescence spectra area of the Ag\({}_{2}\)S NCs suspension _versus_ the externally controlled temperature. Error bars are smaller than the symbol size. (d) Normalized photoluminescence spectra area as a function of the laser power (808 nm) as measured in the trapping experiments. The inset shows an example of the raw detected spectra at the lowest (initial) and highest (final) laser powers. Error bars are calculated from the standard deviation of, at least, triplicated experiments. (e) Experimental \(\Delta\left\langle T_{in}\right\rangle\) temperature obtained from the combination of the trapping data in (d) with the linear calibration in (c). Error bars are propagated from those of (d) and the determination error of the calibration in (c). Theoretical results obtained for \(\Delta\left\langle T_{av}\right\rangle\) at a radial distance of 21.5 nm from the NP surface are also included as a dashed line. The water and nanothermometer heating rate at 808 nm was set at \(B_{nt}\) = 42 K\(\cdot\)W\({}^{-1}\) according to the blank experiments detailed in the Methods section.
Although the overall agreement between the experimental \(\Delta\left\langle T_{in}\right\rangle\) data and the theoretical values of \(\Delta\left\langle T_{av}(r)\right\rangle\) at a fixed radial distance is consistent with the experiments at 1064 nm, solving this fundamental issue is cumbersome from any angle. In the next section, we try to address this issue directly by comparing both data sets. #### ii.2.3 Comparison of the two data sets and discussion In this section, we try to find a connection between the two types of measurements performed. Some hints can be provided by pondering that, in a first approximation, both experiments should be fully equivalent in terms of heat generation if the absorbed power is considered a common variable, regardless of the trapping wavelength. Given the good agreement found in the experiments performed at 1064 nm, we evaluated the surface temperature from the Laplace equation employing the \(\left\langle\sigma_{abs}\right\rangle\) at 808 nm obtained _via_ DDSCAT calculations, from which \(T_{HBM}\) can be derived. The analytical results for both temperatures are plotted as continuous (yellow for \(\Delta T_{s}\) and red for \(\Delta T_{\text{HBM}}\)) lines in Figure 3(e). The agreement between \(\Delta\left\langle T_{av}(r)\right\rangle\) and \(\Delta T_{\text{HBM}}\) is highly remarkable, especially considering that no contributions of the solvent and/or nanothermometers are contemplated in the HBM temperature. Under the naive assumption that this contribution is additive to the theoretical value of \(\Delta T_{\text{HBM}}\), we obtained the dashed orange line in Figure 3(e) (denoted as \(\Delta T_{\text{HBM}}\) corrected), which is in even closer agreement with \(\Delta\left\langle T_{av}(r)\right\rangle\). This is a manifestation that \(\Delta\left\langle T_{av}(r)\right\rangle\) is closely related to \(\Delta T_{\text{HBM}}\) as evaluated from the trapped particle heating once the contribution of its surroundings is accounted for by, in a first approximation, simple addition. Following this reasoning, the plots of the \(\Delta T\) values coming from both sets of experiments against \(P_{abs}\) should overlap if the extracted \(\Delta T\) values bear some physical resemblance to each other. In other words, we can determine whether \(\Delta\left\langle T_{in}\right\rangle\) is comparable with \(\Delta T_{\text{HBM}}\) in the whole power regime and, consequently, whether remote, non-trapped thermometers are reliable tools to describe the HBM dynamics within their thermal working range. This comparison is presented in Fig. 4, where the theoretical value of \(\Delta T_{\text{HBM}}\) is also included. Notice that one can evaluate the theoretical \(\Delta T_{s}\) value from the Laplace equation independently of the concrete initial temperature \(T_{0}\) but, since \(T_{\text{HBM}}\) is a function of \(T_{0}\) and our experimental values differ by \(\sim\)2 K between the two setups, the continuous line representing the theoretical \(\Delta T_{\text{HBM}}\) value has been calculated for the average \(T_{0}\). The agreement between experimental and theoretical data is such that the RMSE is \(\sim\)6 K. Besides, the contribution of the nanothermometers has been subtracted from the experiments at 808 nm for the sake of comparison.
The representation of those sets of data in the absorbed power range of 0-20 \(\mu\)W indicates that the values of \(\Delta\left\langle T_{in}\right\rangle\) derived from auxiliary NIR-emissive nanothermometers nicely correlate, within experimental uncertainties, with those defining the particle dynamics in the framework of the HBM theory when the laser power was appropriately scaled with the absorption cross-section of the nanoparticles. This result indicates both negligible effects of the nanothermometers on the physical properties of the heat source (and vice versa) and that the Fourier constitutive equation remains applicable in the low-fluence regime at the nanoscale, even for estimating the HBM temperature. Figure 4: Comparison between \(\Delta T_{\text{HBM}}\) increments obtained with the corner frequency (yellow symbols) and \(\Delta\left\langle T_{in}\right\rangle\) as obtained with the emission of Ag\({}_{2}\)S NCs (grey symbols). The continuous red line stands for the theoretical calculation of \(\Delta T_{\text{HBM}}\) derived from the HBM theory assuming a temperature profile as defined in Equation 4. ## III Conclusions To sum up, the temperature and heat generation of absorbing anisotropic hybrid nanostructures under optical trapping conditions were experimentally and theoretically assessed on the basis of: (i) _The evaluation of the balance of the optical forces on the confining trap as a function of the laser power._ The experimental temperature data were extracted from the expected dependence of the power-to-corner frequency _ratio_ on the _effective_ medium viscosity as established in the HBM theory. The evaluated heating parameter in the classical Brownian approximation is in correspondence with previously reported data. In contrast, the HBM method led to temperature increments systematically higher than those obtained from the classical Brownian-motion vision of a particle immersed in a thermal bath. This fact points to the necessity of applying the sparingly employed HBM theory to extract valid information from thermometric measurements when handling highly absorbing particles, and it questions the traditional meaning of _temperature_ as far as single-particle nanothermometry is concerned. It was also verified that at low energy fluxes the Fourier equation remains applicable to obtain the HBM _temperature_ from the directly estimated value of \(T_{s}\). (ii) _The calibration of a colloidal suspension of NIR-emissive nanothermometers embedding the trapped nanoparticles._ Average internal temperature increments within the trapping volume have been accessed directly through the near-infrared emission of Ag\({}_{2}\)S NCs acting as NIR nanothermometers. The values obtained within this approach constitute a sort of average temperature, and it was found that it may resemble the HBM temperature once the heating contribution of the environment is corrected for. This assertion was demonstrated by showing that the experimental data from both sets of results for the dependence of \(\Delta T\) on \(P\), remarkably obtained in two different laboratories and with two different trapping setups operating at different trapping wavelengths, collapse onto a single curve when plotted against the absorbed power. These findings represent a robust standpoint for temperature measurements in single-particle systems in relation to the experimental route from which the heating properties are derived.
That said, while experimental pieces of evidence indicate that the internal temperature determined from the appropriate calibration of NIR-emissive nanothermometers captures the one dictating the particle dynamics (_i.e._, the HBM temperature), no rigorous theoretical validation of such a conclusion has been reported to date. Further applications of the protocol initiated in this work might yield a reinterpretation or validation of the trapping experiments on self-emissive nanothermometers [52]. On a final note, in light of the importance of the HBM theory in elucidating nanothermometric data coming from optical trapping setups, it seems urgent to generalize it to incorporate into the formalism the local heating contribution of both the nearby particles and the solvent, as well as the effects of the particle softness [38]. ## Methods **Chemical reagents.** Gold (III) acetate, iron (III) chloride, sodium oleate, oleic acid 99%, oleylamine, 1,2-hexadecanediol, gallic acid, polyethylene glycol (PEG) 3000 Da, 1-octadecene, triethylamine, 4-dimethylaminopyridine (DMAP), and ethanol 99% were obtained from Sigma Aldrich. Dimethyl sulfoxide (DMSO), toluene, acetone, hexane, chloroform, dichloromethane (DCM), and tetrahydrofuran (THF) were supplied by Acros Organics. All reagents were used as received without further purification. Milli-Q water (18.2 M\(\Omega\cdot\)cm) was obtained from a Millipore system. Ag\({}_{2}\)S NCs were obtained from NIR Optics Technology (average size, \(\sim\)9 nm). **Seed-growth synthesis of Au@Fe\({}_{3}\)O\({}_{4}\) nanoflowers.** _Synthesis of iron (III) oleate._ The synthesis was done following Ref. [53]. Briefly, a mixture of 10.8 g of iron (III) chloride (40 mmol) and 36.5 g of sodium oleate (120 mmol) was dissolved in a solvent mixture of 80 mL of ethanol, 60 mL of distilled water and 140 mL of hexane. The resulting solution was heated to 60 \({}^{\circ}\)C for 4 h under hexane reflux and inert atmosphere. The reaction was then cooled down to room temperature. The organic phase was washed 3 times with distilled water and the hexane was evaporated in the rotavapor. _Synthesis of the gold seeds._ The synthesis was done following Ref. [11] with some modifications. 50 mg of gold (III) acetate were dissolved in a mixture of 0.8 mL of oleic acid, 0.6 mL of oleylamine, 100 mg of 1,2-hexadecanediol and 5 mL of 1-octadecene. The mixture was heated under vacuum to 120 \({}^{\circ}\)C with a heating rate of 5 \({}^{\circ}\)C\(\cdot\)min\({}^{-1}\), and kept at this temperature for 30 min. After cooling down, it was washed twice with 36 mL of a mixture of ethanol/acetone (1:1, v:v) and centrifuged at 8000 rpm for 20 minutes. Then, the gold NPs were dispersed in 10 mL of hexane. _Synthesis of Au-Fe\({}_{3}\)O\({}_{4}\) nanoflowers._ The synthesis was conducted as previously described by some of us [30]. 1 mL of Au NPs was mixed with 0.63 mL of oleylamine, 0.66 mL of oleic acid, 0.645 mL of 1,2-hexadecanediol, 10 mL of 1-octadecene and 0.125 g of iron (III) oleate. The mixture was heated under vacuum to 120 \({}^{\circ}\)C and kept at this condition for 20 min. Then, the temperature was raised to 200 \({}^{\circ}\)C and kept at this temperature for 120 min. The temperature was raised again to 315 \({}^{\circ}\)C with a heating rate of 5 \({}^{\circ}\)C\(\cdot\)min\({}^{-1}\) and kept at this temperature for 30 min (growth of the iron oxide petals). After cooling down, the product was washed with a mixture of ethanol/acetone (1:1) and centrifuged. This step was done twice. Then, the resulting NPs were dispersed in 10 mL of hexane.
**Functionalization of the Au-Fe nanoflowers.** The synthesis of the appropriate PEGylated ligand and the subsequent ligand exchange process were conducted as described previously by some of us in Ref. [54]. In the following we provide the most important information. _Synthesis of the PEGylated ligand_. First, we proceeded with the dropwise addition of DCC (1 g in 5 mL of THF) to a solution of 3 g of PEG, 170 mg of gallic acid and 24 mg of DMAP in 100 mL of THF and 10 mL of DCM. The resulting mixture was stirred overnight at room temperature and, finally, it was filtered through a filter paper and the solvents were rota-evaporated. _Ligand Exchange._ The ligand exchange procedure was performed following [54]. In short, in a glass vial a solution containing 1 mL of NPs (10 g/L of Fe), 1 mL of the gallol-PEG\({}_{n}\)-OH in a concentration of 0.1 M in CHCl\({}_{3}\) and 50 \(\mu\)L of triethylamine was added. The mixture was ultrasonicated for 1 h and kept 4 h at 50 \({}^{\circ}\)C. At this point, it was diluted with 5 mL of toluene, 5 mL of Milli-Q water and 10 mL of acetone. Then, it was shaken and the nanoparticles were transferred to the aqueous phase. After that, the aqueous phase was collected in a round-bottom flask and the residual organic solvents were rota-evaporated. Then, the gallol-derived NPs were purified in centrifuge filters with a molecular weight cutoff of 100 kDa at 450 rcf. In each centrifugation, the functionalized NPs were re-suspended with Milli-Q water. The purification step was repeated several times until the filtered solution was clear. After the purification, the gallol-derived NPs were re-suspended in PBS buffer. Finally, to improve the monodispersity and remove aggregates, this solution was centrifuged at 150 rcf for 5 min and it was placed onto a permanent magnet (0.6 T) for 5 min. **Physicochemical characterization of the Au-Fe nanoflowers.** The size distributions were obtained by TEM imaging of a carbon-coated copper grid onto which a sample suspension of \(\sim\)1 g/L of (Fe+Au) was deposited dropwise. The images were acquired on a FEI Tecnai G2 Twin microscope working at 100 kV. Size histograms were then calculated by averaging the characteristic dimensions of 100 nanoparticles with the aid of the ImageJ free software. The extinction spectra were recorded in a Jenway Series 67 spectrophotometer. The measurements were performed on a very dilute suspension to avoid collective effects on the scattering contribution arising from aggregation of the nanoparticles. **Piriform curve for particle modelling.** The geometry used to implement the simulations is based on the revolution of the piriform curve, which only depends on the value of a parameter \(\beta\), and is given in implicit form by the following expression: \[\beta^{2}=(x^{2}+y^{2})(1+2x+5x^{2}+6x^{3}+6x^{4}\\ +4x^{5}+x^{6}-3y^{2}-2xy^{2}+8x^{2}y^{2}\\ +8x^{3}y^{2}+3x^{4}y^{2}+2y^{4}+4xy^{4}+3x^{2}y^{4}+y^{6}). \tag{6}\] **Optical trapping at 1064 nm.** The experiments at 1064 nm were performed in a NanoTracker-II optical tweezers (JPK-Bruker) device. In this setup, the infrared laser was tightly focused by a high numerical aperture objective (63\(\times\), NA = 1.2) to a diffraction-limited spot where trapping occurs. The forward-scattered light is collected by a second objective and guided to a QPD to record the X-Y traces that generate a voltage signal sampled at a frequency of 50 kHz. This voltage is proportional to the displacement of the particle inside the optical trap. 
Spurious bright flashes in the video were assumed to be a consequence of a multiple trapping event and, in such cases, the PSD and the corresponding \(f_{c}\) were discarded. **Optical trapping at 808 nm.** The optical trapping experiments were performed in a homemade single-beam optical trapping setup. A drop of aqueous suspensions of MCNPs and NCs, previously stirred to avoid clusters, was pipetted into a 120 \(\mu\)m height and 13 mm diameter micro-chamber that was placed in the optical tweezers setup. A linearly polarized 808 nm single-mode fiber-coupled laser diode was focused into the chamber containing the sample by using an LCPLN 100\(\times\) IR Olympus microscope objective with a numerical aperture of 0.85 that leads to a spot size of \(\sim\)0.63 \(\mu\)m. The tightly focused laser beam was used for both trapping the nanoflowers and exciting the NCs in its closer surroundings. Real-time optical imaging of the NCs was achieved by coupling a white LED, focused on the sample by a 40\(\times\) objective lens, and using a CMOS camera incorporated into the system. The lower objective lens is used as a light condenser, but also serves as a collector lens to focus the NCs emission into an OceanInsight NIR spectrometer, fiber-coupled to the system. **External and internal Ag\({}_{2}\)S NCs calibration.** For the external calibration of the Ag\({}_{2}\)S in bulk dispersions, the temperature of the sample was controlled by a Linkam PE120 stage (\(\pm\) 0.1 \({}^{\circ}\)C). The NCs dispersion was excited with a fiber-coupled 808 nm single-mode diode laser while the NIR emission spectra were collected with an OceanInsight high-performance NIR spectrometer (900-1700 nm). A high quality long-pass filter (750 nm) was placed in the detection path so that the noise level registered by the camera did not exceed 0.5 % of the signal generated by the Ag\({}_{2}\)S NCs. While the heating contribution of water in the trap might be set to \(B_{w}=3\) K\(\cdot\)W\({}^{-1}\) [55], the contribution of the Ag\({}_{2}\)S NCs is unknown. To evaluate this contribution, a set of experiments on the colloidal suspension under trapping conditions (but without trapped MCNPs) was performed. The temperature increments were evaluated from the internal calibration curve, and the solvent plus NCs heating coefficient, \(B_{nt}\), was determined from the slope of the \(\Delta T\)-against-laser-power measurements. The calibration curve is shown in Figure S4. The outcome of these experiments is \(B_{nt}\approx\) 42 K\(\cdot\)W\({}^{-1}\). **Acknowledgements** The authors acknowledge funding from the Ministerio de Ciencia e Innovacion of Spain (grants PID2022-136919NA-C33, PID2021-127427NB-I00, PID2020-118448RBC21, PID2019-105195RA-I00, CNS2022-135495, and TED2021-129937B-I00), from the Ministerio de Economia, Industria y Competitividad of Spain (grant CTQ2017-86655-R) and from FEDER/Consejeria de Transformacion Economica, Industria, Conocimiento y Universidades of Andalucia (grants P18-FR-3583 and P20_00727/PAIDI2020). H.C. and R.A.R. gratefully acknowledge funding from HORIZON-MSCA-2021-PF-01 Grant agreement ID: 101065163. E.O.R. gratefully acknowledges the financial support provided by the Spanish Ministerio de Universidades, through the FPU program (FPU19/04803) and C.C. thanks the Consejeria de Salud y Familias (Junta de Andalucia) for his senior postdoctoral grant (RH-0040-2021).
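As an illustrative companion to the Methods above, the following sketch shows how a trap corner frequency \(f_{c}\) can be extracted from a position trace such as the QPD recordings described earlier, by fitting the one-sided Lorentzian power spectral density \(S(f)=D/[\pi^{2}(f_{c}^{2}+f^{2})]\) of an overdamped trapped particle to a Welch periodogram. This is our own minimal sketch, not the authors' analysis code; the window length, fitting band, and the synthetic AR(1) trace standing in for a measured recording are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import lfilter, welch

def lorentzian_psd(f, D, fc):
    # One-sided PSD of an overdamped particle in a harmonic trap.
    return D / (np.pi**2 * (fc**2 + f**2))

def fit_corner_frequency(x, fs, fmin=10.0, fmax=10_000.0):
    f, pxx = welch(x, fs=fs, nperseg=4096)   # averaged periodogram of the trace
    band = (f >= fmin) & (f <= fmax)         # exclude DC drift and high-f noise
    p0 = [pxx[band].max() * np.pi**2 * 100.0**2, 100.0]  # rough initial guess
    (_, fc), _ = curve_fit(lorentzian_psd, f[band], pxx[band], p0=p0)
    return fc

# Synthetic AR(1) trace standing in for a QPD recording sampled at 50 kHz;
# fc_true is an arbitrary stand-in value, not a measured quantity.
fs, fc_true = 50_000, 450.0
rng = np.random.default_rng(0)
a = np.exp(-2 * np.pi * fc_true / fs)        # discrete-time relaxation factor
x = lfilter([1.0], [1.0, -a], rng.standard_normal(2**18))
print(f"fitted corner frequency: {fit_corner_frequency(x, fs):.0f} Hz")
```

In a real measurement one would replace the synthetic trace by the recorded QPD voltage, and then convert the fitted \(f_{c}\), together with the effective medium viscosity of the HBM theory, into the heating estimates discussed in the text.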
2309.07962
Beck modules and alternative algebras
We set out the general theory of ``Beck modules'' in a variety of algebras and describe them as modules over suitable ``universal enveloping'' unital associative algebras. We develop a theory of ``noncommutative partial differentiation'' to pass from the equations of the variety to relations in a universal enveloping algebra. We pay particular attention to the case of alternative algebras, defined by a restricted associative law, and determine the Poincar\'e polynomial of the universal enveloping algebra in the homogeneous case.
Nishant Dhankhar, Haynes Miller, Ali Tahboub
2023-09-14T18:00:20Z
http://arxiv.org/abs/2309.07962v3
# Beck modules and alternative algebras ###### Abstract. We set out the general theory of "Beck modules" in a variety of algebras and describe them as modules over suitable "universal enveloping" unital associative algebras. We pay particular attention to the somewhat nonstandard case of "alternative algebras," defined by a restricted associative law, and determine the Poincaré polynomial of the universal enveloping algebra in the homogeneous case. 2010 Mathematics Subject Classification: 17A30, 17-04 ## 1. Introduction The notion of a "module" occupies an important place in the study of general algebraic systems. Most of these diverse notions are united under the theory of "Beck modules." Given an object \(A\) in any category \(\mathbf{C}\), one may consider the "slice category" \(\mathbf{C}/A\) of objects in \(\mathbf{C}\) equipped with a map to \(A\). A Beck module for \(A\) is then an abelian group object in \(\mathbf{C}/A\). If \(\mathbf{C}\) is the category of commutative rings, for example, a Beck module for \(A\) is simply an \(A\)-module, while if \(\mathbf{C}\) is the category of associative algebras, a Beck module for \(A\) is an \(A\)-bimodule. Many other examples occur in the literature. This definition occurs in the thesis [5] of Jonathan Beck written under the direction of Samuel Eilenberg. Eilenberg himself had discussed such objects in [8], at least in the linear context, as the kernel of a "square zero extension." These kernels were understood to constitute "representations" of the algebra, and this structure was made explicit in various cases. We review below the context of a "variety" \(\mathbf{V}\) of algebras over a commutative ring \(K\). In this case, for every \(\mathbf{V}\)-algebra \(A\) the category \(\mathbf{Mod}_{A}\) of Beck \(A\)-modules is an abelian category with a single projective generator. As a result, the category \(\mathbf{Mod}_{A}\) is equivalent to the category of right modules over a canonical unital associative \(K\)-algebra \(U_{\mathbf{V}}(A)\), the "universal enveloping algebra" for \(A\). This raises the question of identifying the structure of \(U_{\mathbf{V}}(A)\) for various varieties \(\mathbf{V}\) and \(\mathbf{V}\)-algebras \(A\). We review some of the standard examples, and then focus on a somewhat less standard one, the variety of "alternative algebras" over \(K\). This example has been considered before, but even over a field basic features of the universal enveloping algebra for an alternative algebra, such as its dimension, have remained obscure. In 1954, Nathan Jacobson [12] wrote "The introduction of the universal associative algebras for the birepresentations [his term for Beck modules] enables one to split the representation problem into two parts: (1) determination of the structure of \(U(A)\), (2) representation theory for the associative algebra \(U(A)\). In practice, however, it seems to be difficult to treat (1) as a separate problem. Only in some special cases is it feasible to attack this directly." Richard Schafer [15] observed in 1966 that if \(\mathbf{V}\) is the variety of alternative \(K\)-algebras, with \(K\) a field, and \(\dim_{K}A=n\), then \(\dim_{K}U(A)\leq 4^{n}\). But the precise dimension, even in the case of "homogeneous" alternative algebras - those with trivial product - over a field, has eluded analysis. In that case, the universal enveloping algebra admits a natural grading, by word length, or, as we call it, by weight. Write \(K^{n}\) for the \(n\)-dimensional homogeneous alternative \(K\)-algebra. 
The principal new result in this paper is the description of an explicit basis for the universal enveloping algebra of a homogeneous alternative algebra over a field. A surprise is that characteristic 3 diverges from all other characteristics. **Theorem 1.1**.: _Let \(K\) be a field. If the field \(K\) is of characteristic different from 3,_ \[\dim U(K^{n})_{k}=\begin{cases}\binom{2n}{k}&\text{if }k\leq 2\\ 2\binom{n}{k}&\text{if }k\geq 3\,.\end{cases}\] _The same is true if the characteristic is \(3\) except when \(k=3\), in which case the dimension is \(4\binom{n}{3}\)._ In particular, \[U(K^{n})_{k}=0\quad\text{for}\quad k>n\,.\] **Corollary 1.2**.: _When \(K=\mathbb{Z}\), \(U(\mathbb{Z}^{n})_{k}\) is torsion-free except when \(n\geq 3\) and \(k=3\), in which case \(U(\mathbb{Z}^{n})_{3}\) is a direct sum of \(2\binom{n}{3}\) infinite cyclic groups and the same number of nontrivial finite cyclic 3-groups._ These calculations show that at least in the homogeneous case, the growth rate of the universal enveloping algebra is indeed exponential, but much slower than the upper bound observed by Schafer. Our tool is the theory of Grobner bases for noncommutative graded algebras. We employ the computer algebra system Sage to determine a Grobner basis for the ideal of relations defining \(U(K^{n})\), with \(K=\mathbb{Q}\), for \(n\leq 5\). The structure of this basis for these small values of \(n\) turns out to imply that a basis with the same structure exists for all \(n\). The set of normal monomials with respect to this basis (which project to a basis for \(U(K^{n})\)) is then easy to determine. An analysis of the Sage output indicates that the same is true for any characteristic different from 3, and gives the modified answer when the characteristic is 3. To our knowledge, this is the first example of the use of a computer algebra system to prove an infinite family of results. After a review of the theory of varieties of algebras in §2, and a reminder of some particular features of alternative algebras in §3, we describe in §4 the theory of Beck modules in this generality and describe how they appear in our various examples. In §5 we discuss the universal enveloping algebra and some of its features, and in §6 we specialize to the case of alternative algebras. Finally, in §7, we review some of the essential features of the theory of Grobner bases. This work is intended as a first step in the study of the Quillen homology and cohomology of algebraic systems such as alternative algebras. **Acknowledgements.** This work was carried out under the auspices of a program, supported by MIT's Jameel World Education Laboratory, designed to foster collaborative research projects involving students from MIT and Palestinian universities. We acknowledge with thanks the contributions made by early participants in this program - Mohammad Damaj and Nasief Khlaif of Birzeit University, Hadeel AbuTabeekh of An-Najah National University, and Heidi Lei of MIT - as well as the support of Palestinian faculty - Reema Sbeih and Mohammad Saleh at Birzeit and Khalid Adarbeh and Muath Karaki at NNU. The first author acknowledges support by the MIT UROP office. ## 2. Varieties of algebras We will work with algebras defined by a product operation, though much of this work can be carried out in much greater generality. Following the lead of Bourbaki [6, §7.1], we make the following definition. **Definition 2.1**.: _A magma is a set \(X\) with a binary operation \(X\times X\to X\) (written as juxtaposition). 
A unital magma is a magma \(X\) equipped with an element \(e\) such that \(ex=x=xe\) for all \(x\in X\)._ We will also restrict our attention to linear examples, and work over a commutative ring \(K\). So a _magmatic \(K\)-algebra_ (or just \(K\)-_algebra_) is a \(K\)-module \(A\) equipped with a \(K\)-bilinear product \(A\otimes A\to A\) (written as juxtaposition). It becomes _unital_ if it is equipped with an element \(1\) such that \(1a=a=a1\) for all \(a\in A\). Magmatic \(K\)-algebras constitute the objects in a category \(\mathbf{Mag}_{K}\). The forgetful functor to sets has a left adjoint \(Mag_{K}:\mathbf{Set}\to\mathbf{Mag}_{K}\), so we can define the free magma on a set. The free magma \(Mag(S)\) generated by a set \(S\) is the set of bracketed strings of elements of \(S\); see [6, §7.1]. The free magmatic \(K\)-algebra on a set \(S\) is the free \(K\)-module generated by \(Mag(S)\): \(Mag_{K}(S)=KMag(S)\). We can adjoin axioms using the following device [2]. An _equation_ is an element of the free magmatic \(K\)-algebra on a finite set. Given a magmatic \(K\)-algebra \(A\), we will say that an equation \(\omega\in Mag_{K}(S)\) is _satisfied by \(A\)_ if for any set map \(S\to A\) the induced map \(Mag_{K}(S)\to A\) sends \(\omega\) to \(0\). A set of equations defines a _variety_ of \(K\)-algebras, namely the subcategory of \(\mathbf{Mag}_{K}\) cut out by (that is, satisfying) these equations. An object of a variety of \(K\)-algebras \(\mathbf{V}\) is a "\(\mathbf{V}\)-algebra." A variety of \(K\)-algebras is an "algebraic category" [9]. It is complete and cocomplete. Any subalgebra of a \({\bf V}\)-algebra is again a \({\bf V}\)-algebra. The forgetful functor \(u:{\bf V}\to{\bf Mod}_{K}\) has a left adjoint \[F:{\bf Mod}_{K}\to{\bf V}\,.\] **Examples 2.2**.: Here are four standard examples, beyond \({\bf Mag}_{K}\) itself. * \({\bf Ass}_{K}\), the variety of associative algebras, is defined by the equation \[(xy)z-x(yz)\in Mag_{K}\{x,y,z\}\,.\] * Adding the further equation \[xy-yx\in Mag_{K}\{x,y\}\] gives us commutative \(K\)-algebras, \({\bf Com}_{K}\). * A Lie algebra (in \({\bf Lie}_{K}\)) is a \(K\)-algebra satisfying the equations \[xx\in Mag_{K}\{x\}\,,\quad(xy)z+(yz)x+(zx)y\in Mag_{K}\{x,y,z\}\,.\] * An alternative algebra is a magmatic \(K\)-algebra satisfying the equations \[(xx)y-x(xy)\,,\,(xy)y-x(yy)\,\in\,Mag_{K}\{x,y\}\,.\] These are the objects in the variety \({\bf Alt}_{K}\). Note that we do not assume a unit element in any of these examples. There are many other axioms in use today - Vinberg algebras, Novikov algebras, Leibniz algebras, divided power algebras... - each one of which determines an interesting theory of Beck modules. There is a reversal involution \(\overline{(-)}:{\bf Mag}\to{\bf Mag}\). It comes with a natural bijection of sets \(X\to\overline{X}\) that we will also denote with an overline, and \(\overline{x}\,\overline{y}=\overline{yx}\). It extends to an involution on \({\bf Mag}_{K}\). To any variety \({\bf V}\) of \(K\)-algebras we can associate an "opposite" variety \(\overline{{\bf V}}\), with defining equations given by reversing the defining equations of \({\bf V}\). By sending a \({\bf V}\)-algebra to the same \(K\)-module with opposite multiplication, you get a natural equivalence of categories \[{\bf V}\to\overline{{\bf V}}\,,\quad A\mapsto\overline{A}\,.\] A variety \({\bf V}\) is _symmetric_ if \(\overline{{\bf V}}={\bf V}\). 
All the examples above are symmetric, but, for example, the variety of "left alternative \(K\)-algebras," satisfying \((xx)y-x(xy)\) but perhaps not \((xy)y-x(yy)\), is not symmetric; its opposite is the variety of right alternative \(K\)-algebras. If \({\bf V}\) is a symmetric variety, the isomorphism \({\bf V}\to\overline{{\bf V}}\) becomes an involution on \({\bf V}\), sending an algebra to the same \(K\)-module with the opposite multiplication. We end with a discussion of base-change. Let \(\alpha:K\to L\) be a homomorphism of commutative rings. An equation in \({\bf Mag}_{K}(S)\) induces an equation in \({\bf Mag}_{L}(S)\). So a variety of \(K\)-algebras, say \({\bf V}_{K}\), induces a variety of \(L\)-algebras, which we'll denote by \({\bf V}_{L}\). This is a transitive operation. Moreover, the functor \(L\otimes_{K}-:{\bf Mod}_{K}\to{\bf Mod}_{L}\) lifts to a functor \(L\otimes_{K}-:{\bf V}_{K}\to{\bf V}_{L}\). ## 3. Alternative algebras The example of alternative algebras is less familiar than the others and we spend a moment introducing it. Schafer's book [15] provides a good reference. Any associative \(K\)-algebra is alternative, and Emil Artin proved that any alternative \(K\)-algebra with two generators is associative [7]. The alternative identities imply the further "flexible" equation \[(xy)x=x(yx)\,.\] The algebra of octonions [3] is a well-known example of a nonassociative alternative algebra. The _associator_ in a magmatic \(K\)-algebra is the trilinear form \[(x,y,z)=(xy)z-x(yz)\,.\] In an alternative algebra the associator is an alternating form: transpositions reverse the sign. This suggests adding a further basic example, one defined by weight 3 equations: * An almost alternative \(K\)-algebra is a magmatic \(K\)-algebra for which the associator is an alternating form; that is to say, satisfying the equations \[(xy)z-x(yz)+(xz)y-x(zy)\,,\quad(xy)z-x(yz)+(yx)z-y(xz)\,.\] Write \(\mathbf{AAlt}_{K}\) for this variety. If 2 is invertible in \(K\) these axioms are equivalent to the alternative axioms. In some respects this "almost alternative" condition is better behaved than the alternative condition itself; it is operadic, for example. If \(K\) is an \(\mathbb{F}_{2}\)-algebra, the multiplication table \[\begin{array}{c|ccc}&a&b&c\\ \hline a&a&b&0\\ b&0&0&0\\ c&c&0&b\end{array}\] defines an almost alternative \(K\)-algebra that is not alternative. A monoid \(X\) in \(\mathbf{Set}\) defines an associative algebra in \(\mathbf{Mod}_{K}\) by forming the free \(K\)-module on \(X\). The equations for alternative algebras make sense in \(\mathbf{Set}\), so one can talk about "alternative sets." An alternative product on \(X\) determines a magmatic \(K\)-algebra structure on \(KX\), but it is not necessarily alternative. For example, the multiplication table \[\begin{array}{c|ccc}&a&b&c\\ \hline a&a&a&c\\ b&a&b&b\\ c&c&b&c\end{array}\] is commutative and alternative, and hence even flexible, but the magmatic \(K\)-algebra that it generates is not alternative. We thank Hadeel AbuTabeekh for this example. ## 4. Beck modules Let \(\mathbf{V}\) be a variety of \(K\)-algebras and \(A\) a \(\mathbf{V}\)-algebra. The "slice category" \(\mathbf{V}/A\) has as objects morphisms in \(\mathbf{V}\) with target \(A\), and as morphisms maps compatible with the projections to \(A\). This slice category again has good properties; in particular it is complete and cocomplete. We can thus speak of abelian group objects in \(\mathbf{V}/A\). 
An abelian group structure on a \(\mathbf{V}\)-algebra over \(A\), \(p:B\downarrow A\), begins with a unit: a map from the terminal object of \(\mathbf{V}/A\), that is, a section \(\eta:A\uparrow B\) of \(p\). This unit defines an "axis inclusion" \(i:B\coprod B\to B\times_{A}B\) in \(\mathbf{V}/A\). A magma structure on \(B\) is an extension of the "fold map" \(\nabla:B\coprod B\to B\) over the product. In these algebraic situations, the map \(i\) is an epimorphism, so such an extension is unique if it exists: Being a unital magma in \(\mathbf{V}/A\) is a _property_ of a pointed object, not further structure on it. Furthermore, the unique unital magma structure with given unit, when it exists, is an abelian group structure. We call an object of this type an _abelian object_. **Definition 4.1**.: ([5]) _Let \(A\) be a \(\mathbf{V}\)-algebra. A Beck \(A\)-module is an abelian object in the slice category \(\mathbf{V}/A\):_ \[\mathbf{Mod}_{A}=\operatorname{Ab}(\mathbf{V}/A)\,.\] **Proposition 4.2**.: ([1, Theorem 3.16] and [4, Chapter 2, Theorem 2.4]) _\(\mathbf{Mod}_{A}\) is an abelian category._ In our \(K\)-linear situation, write \(M\) for the kernel of \(p:B\downarrow A\). Suppose \(B\downarrow A\) has the structure of a unital magma in \(\mathbf{V}/A\). This consists of two pieces of structure: the "unit" is a map from the terminal object in \(\mathbf{Mod}_{K}/A\), that is, a section of \(p:B\downarrow A\), and the "addition" is a map \(\alpha:B\times_{A}B\to B\) over \(A\). Since \[B\times_{A}B=(A\oplus M)\times_{A}(A\oplus M)=A\oplus M\oplus M\] the structure map has the form \(\alpha:A\oplus M\oplus M\to A\oplus M\). Using linearity and unitality it's easy to see that the abelian group structure is actually determined by the addition in \(M\): \[\alpha(a,x,y)=(a,x+y)\,.\] The \(K\)-algebra structure on \(A\oplus M\) is described by left and right "actions" \[A\otimes M\to M\,,\quad M\otimes A\to M\] both of which we denote by juxtaposition. Together they determine the multiplication on \(A\oplus M\) by \[(a,x)(b,y)=(ab,ay+xb)\,.\] Absent further axioms, these action maps satisfy no properties. This describes the category of magmatic Beck \(A\)-modules. If we are working with a general variety of \(K\)-algebras \(\mathbf{V}\), the axioms of \(\mathbf{V}\) will determine further properties of these two actions. Here's how this works out in our examples. In describing them it will be useful to adjoin a unit element to an object in \(\mathbf{Ass}_{K}\); so let's write \[A_{+}=K\oplus A\] with product given by \((p,a)(q,b)=(pq,pb+qa+ab)\). * \(\mathbf{V}=\mathbf{Ass}_{K}\): Applying the associativity equation to the triple \((a,x)\), \((b,y)\), \((c,z)\) in \(A\oplus M\) and setting all but one of \(x,y,z\) to \(0\) gives \[(xb)c=x(bc)\,,\quad(ay)c=a(yc)\,,\quad(ab)z=a(bz)\,.\] In other words, \(\mathbf{Mod}_{A}\) is the usual category of bimodules over \(A_{+}\) (for which \(K\) acts the same way on both sides). * \(\mathbf{V}=\mathbf{Com}_{K}\): The actions satisfy these axioms together with \[ax=xa\,.\] In other words, \(\mathbf{Mod}_{A}\) is the usual category of \(A_{+}\)-modules. 
* \(\mathbf{V}=\mathbf{Lie}_{K}\): \((a,x)(a,x)=0\) gives \(ax+xa=0\), while the Jacobi identity \(((a,x)(b,y))(c,z)+((b,y)(c,z))(a,x)+((c,z)(a,x))(b,y)=0\) gives \[(xb)c+(bc)x+(cx)b=0\,,\quad(ay)c+(yc)a+(ca)y=0\,,\] \[(ab)z+(bz)a+(za)b=0\,.\] These three equations all say the same thing, which we may re-express using \(ax=-xa\) as \[x(ab)=(xa)b-(xb)a\,.\] * \(\mathbf{V}=\mathbf{Alt}_{K}\): The equation \(((a,x)(a,x))(b,y)=(a,x)((a,x)(b,y))\) gives \[(ax)b+(xa)b=a(xb)+x(ab)\,,\quad(aa)y=a(ay)\] while the other alternative axiom gives \[(xb)b=x(bb)\,,\quad(ab)y+(ay)b=a(by)+a(yb)\,.\] Rearranging gives \[(ab)x-a(bx)=a(xb)-(ax)b=(xa)b-x(ab)\] \[(aa)x=a(ax)\,,\quad(xb)b=x(bb)\,.\] * \(\mathbf{V}=\mathbf{AAlt}_{K}\): The associator \((u,v,w)\) makes sense whenever two of \(\{u,v,w\}\) are in the \(K\)-algebra \(A\) and the third is in a Beck module for it. The linearization of the almost alternative condition is then simply that \[(a,x,b)\] is alternating for \(a,b\in A\) and \(x\) in a Beck module \(M\). These are the "alternative modules" of [15, p. 66], where the connection between such objects and extensions of the alternative algebra \(A\) is made explicit. Let \(\mathbf{V}\) be a variety of \(K\)-algebras and \(A\) a \(\mathbf{V}\)-algebra. The forgetful functor \(\mathbf{Mod}_{A}\to\mathbf{Mod}_{K}\) sending \(M\) to its underlying \(K\)-module has a left adjoint \[F_{A}:\mathbf{Mod}_{K}\to\mathbf{Mod}_{A}\,.\] In \(\mathbf{Ass}\) this is given by \[F_{A}(V)=A_{+}\otimes V\otimes A_{+}\,.\] In \(\mathbf{Com}\) it is given by \[F_{A}(V)=V\otimes A_{+}\,.\] The free Beck module construction in the other cases is most conveniently described in terms of the universal enveloping algebra; see §5. Finally, we take note of a canonical object in \(\mathbf{Mod}_{A}\), the "regular representation" of \(A\): **Lemma 4.3**.: _In any variety of \(K\)-algebras, \(A\) itself is a Beck \(A\)-module, using its multiplication for both actions._ Proof.: Given a magmatic algebra \(A\), define the "dual number" magmatic algebra to be \(A\oplus Ae\) with product given by \((a+xe)(b+ye)=ab+(ay+xb)e\). This is an object in \(\mathrm{Ab}(\mathbf{Mag}_{K}/A)\). The map \(a\to(a,0)\) is a section, and the resulting actions of \(A\) on the kernel establish \(A\) as a Beck module over itself. So the result holds for magmatic Beck modules. Let \(S\) be a finite set and \(\omega\) an element of \(KMag(S)\). We claim that if \(\omega\) is satisfied by \(A\) then it is also satisfied by \(A\oplus Ae\). Applying this to all the equations defining the variety \(\mathbf{V}\), we reach our conclusion. Given a map \(S=\{s_{1},\ldots,s_{n}\}\to A\) to the underlying set of \(A\) sending \(s_{i}\) to \(a_{i}\), denote by \(\omega(a_{1},\ldots,a_{n})\) the image of \(\omega\) under the extension of this map to a \(K\)-algebra map. Then \[\omega(a_{1}+x_{1}e,\ldots,a_{n}+x_{n}e)=\omega(a_{1},\ldots,a_{n})+\sum_{i} \omega(a_{1},\ldots,x_{i},\ldots,a_{n})e\] and all the terms on the right vanish. ## 5. The universal enveloping algebra In many cases, such as \(\mathbf{Lie}_{K}\) and \(\mathbf{Alt}_{K}\), it is useful to use the Freyd embedding theorem [10, p. 106] to express the category \(\mathbf{Mod}_{A}\) as the category of modules over an appropriate associative \(K\)-algebra with unit, \(U_{\mathbf{V}}(A)\), which we term the _universal enveloping algebra_. In the operadic context, this construction was described for example by Ginzburg and Kapranov [11, §1.6]. So let \(\mathbf{V}\) be a variety of \(K\)-algebras and \(A\in\mathbf{V}\). 
The forgetful functor \(u:\mathbf{Mod}_{A}\to\mathbf{Mod}_{K}\) has a left adjoint \(F_{A}:\mathbf{Mod}_{K}\to\mathbf{Mod}_{A}\). Since \[\mathbf{Mod}_{A}(F_{A}(K),M)=\mathbf{Mod}_{K}(K,uM)=uM\,,\] the functor \(\mathbf{Mod}_{A}(F_{A}K,-)\) is faithful and \(F_{A}K\) is a projective object: it serves as the projective generator for the category \(\mathbf{Mod}_{A}\) required by Freyd's theorem. The associative \(K\)-algebra \(U_{\mathbf{V}}(A)\) is defined as the endomorphism \(K\)-algebra of the free Beck \(A\)-module on one generator: \[U_{\mathbf{V}}(A)=\operatorname{End}_{\mathbf{Mod}_{A}}(F_{A}(K))\,.\] By the universal property of \(F_{A}(K)\), the underlying \(K\)-module of \(U_{\mathbf{V}}(A)\) is the same as that of \(F_{A}(K)\). The identity endomorphism \(e\) is the image in \(F_{A}(K)\) of \(1\in K\) under the natural map \(K\to uF_{A}(K)\); it is the unit element in this associative \(K\)-algebra. Now the \(K\)-module underlying a Beck module \(M\) has the structure of a right \(U_{\mathbf{V}}(A)\)-module: Represent \(x\in M\) by \(\hat{x}:F_{A}(K)\to M\) and let \(f\in U_{\mathbf{V}}(A)\), i.e. \(f:F_{A}(K)\to F_{A}(K)\). Then \(xf\) is the element represented by \(\hat{x}\circ f\). This construction is of course natural in the algebra \(A\). Any variety of \(K\)-algebras has a terminal object, the trivial \(K\)-module \(0\), and the defining property of the universal enveloping algebra implies that \(U_{\mathbf{V}}(0)=K\). So universal enveloping algebras always have a canonical augmentation \[\epsilon:U_{\mathbf{V}}(A)\to K\,.\] There is a map \(A\oplus A\to U_{\mathbf{V}}(A)\) given by \[(a,b)\mapsto ae+eb\,.\] Let's write \(l_{a}\in U_{\mathbf{V}}(A)\) for the element corresponding to \(ae\) and \(r_{b}\in U_{\mathbf{V}}(A)\) for the element corresponding to \(eb\). This map extends to a surjection of unital associative \(K\)-algebras \[\operatorname{Tens}_{K}(A\oplus A)\to U_{\mathbf{V}}(A)\,.\] If \(\mathbf{V}=\mathbf{Mag}_{K}\), this map is an isomorphism. In general, the kernel will depend on what axioms one has in \(\mathbf{V}\). For any finite set \(S\) and \(\omega\in\mathbf{Mag}_{K}S\), the process of writing down the corresponding relations in \(U_{\mathbf{V}}(A)\) for \(\mathbf{V}\) a variety of \(K\)-algebras satisfying \(\omega\) is a form of noncommutative partial differentiation. Here are our standard examples again. * \(\mathbf{V}=\mathbf{Ass}\): The identities read \[r_{a}r_{b}=r_{ab}\,,\quad l_{a}r_{b}=r_{b}l_{a}\,,\quad l_{ab}=l_{b}l_{a}\,.\] The quotient of \(\operatorname{Tens}_{K}(A\oplus A)\) by this relation is the usual "extended algebra": \[U_{\mathbf{Ass}}(A)=A^{e}=A_{+}^{op}\otimes A_{+}\] In this representation, \(r_{a}=1\otimes a\) and \(l_{a}=a\otimes 1\). * \(\mathbf{V}=\mathbf{Com}\): Now \(U_{\mathbf{Com}}(A)=A_{+}\), and both \(l\) and \(r\) are the natural embedding of \(A\) into the direct sum \(K\oplus A\). * \(\mathbf{V}=\mathbf{Lie}\): Now \(r=-l\), and the second equation in our description of Beck modules in \(\mathbf{Lie}\) gives \[r_{ab}=r_{a}r_{b}-r_{b}r_{a}\] This equation defines the usual Lie universal enveloping algebra. * \(\mathbf{V}=\mathbf{Alt}\): Here are the relations among the maps \(l\) and \(r\) (cf. [14, equation (6) on p. 2]): \[l_{a}^{2}=l_{aa}\,,\quad r_{b}^{2}=r_{bb}\] \[l_{ab}-l_{b}l_{a}=r_{b}l_{a}-l_{a}r_{b}=r_{a}r_{b}-r_{ab}\] This use of the term "universal enveloping algebra" differs from the classical Lie perspective. 
In that case, one has a "forgetful" functor \(\mathbf{Ass}_{K}\to\mathbf{Lie}_{K}\) that sends an associative \(K\)-algebra to the Lie algebra structure on the \(K\)-module \(A\) given by \([a,b]=ab-ba\); and \(U\) is the left adjoint of this functor. In our generality, there is no "underlying" associative algebra; the universal enveloping algebra has a different defining property. But it turns out to produce the same result in the case of \(\mathbf{Lie}_{K}\). The reversal endomorphism of \(\mathbf{Mag}\) induces natural isomorphisms \[U_{\overline{\mathbf{V}}}(\overline{A})=U_{\mathbf{V}}(A)^{op}\] that swap the \(K\)-module maps \(r,l\) from the \(K\)-module \(A\). Formation of the universal enveloping algebra enjoys a strong base-change property. **Proposition 5.1**.: _Let \(\alpha:K\to L\) be a homomorphism of commutative rings, \(\mathbf{V}_{K}\) a variety of \(K\)-algebras, and \(A\in\mathbf{V}_{K}\). There is a natural map \(\alpha:U_{\mathbf{V}_{K}}(A)\to U_{\mathbf{V}_{L}}(L\otimes_{K}A)\) of \(K\)-algebras that extends to an isomorphism_ \[e_{\mathbf{V}}:L\otimes_{K}U_{\mathbf{V}_{K}}(A)\to U_{\mathbf{V}_{L}}(L \otimes_{K}A)\,.\] Proof.: To begin with, extension of scalars gives us a map \[e:\operatorname{Tens}_{K}(A\oplus A)\to\operatorname{Tens}_{L}((L\otimes_{K}A)\oplus(L\otimes_{K}A))\,.\] Let \(\omega\in\mathbf{Mag}_{K}(S)\) be an equation satisfied by \(\mathbf{V}_{K}\). Each map \(a:S\to A\) determines an element \(\omega(a)\) of \(\operatorname{Tens}_{K}(A\oplus A)\) that vanishes in \(U_{\mathbf{V}_{K}}(A)\). The image of \(\omega\) in \(\mathbf{Mag}_{L}(S)\) together with the composite \(S\to A\to L\otimes_{K}A\) determines an element of \(\operatorname{Tens}_{L}((L\otimes_{K}A)\oplus(L\otimes_{K}A))\) that vanishes in \(U_{\mathbf{V}_{L}}(L\otimes_{K}A)\). This element is none other than \(e(\omega(a))\). This shows that the map on tensor algebras descends to a map on universal enveloping algebra quotients. Since \(L\otimes_{K}(A^{\otimes_{K}n})\cong(L\otimes_{K}A)^{\otimes_{L}n}\), the map \(e_{\mathbf{V}}\) is an isomorphism in the absence of equations. The ideals match up since elements \(e(\omega(a))\) generate the ideal of relations in \(U_{\mathbf{V}_{L}}(L\otimes_{K}A)\). We have three closely related varieties of \(K\)-algebras, related by forgetful right adjoints \[\mathbf{Com}_{K}\mathop{\longrightarrow}^{u}\mathbf{Ass}_{K}\mathop{ \longrightarrow}^{u}\mathbf{Alt}_{K}\,.\] The left adjoints are given by forming the maximal associative quotient of an alternative algebra, and the maximal commutative quotient of an associative algebra. These functors induce right adjoints \[\mathbf{Com}_{K}/A\to\mathbf{Ass}_{K}/uA\] for \(A\in\mathbf{Com}_{K}\) and \[\mathbf{Ass}_{K}/A\to\mathbf{Alt}_{K}/uA\] for \(A\in\mathbf{Ass}_{K}\). As right adjoints, they preserve products, and hence induce functors \[\operatorname{Ab}(\mathbf{Com}_{K}/A)\to\operatorname{Ab}(\mathbf{Ass}_{K}/uA) \,,\quad\operatorname{Ab}(\mathbf{Ass}_{K}/A)\to\operatorname{Ab}(\mathbf{Alt} _{K}/uA)\,.\] These functors can be described by means of ring homomorphisms between the corresponding universal enveloping algebras: There is a \(K\)-algebra surjection natural in \(A\in\mathbf{Ass}_{K}\) \[U_{\mathbf{Alt}}(uA)\to U_{\mathbf{Ass}}(A)\,,\quad l_{a}\mapsto a\otimes 1\,, \quad r_{b}\mapsto 1\otimes b\,,\] and a \(K\)-algebra surjection natural in \(A\in\mathbf{Com}_{K}\) \[U_{\mathbf{Ass}}(uA)\to U_{\mathbf{Com}}(A)\,,\quad a\otimes b\mapsto ab\,.\] ## 6. 
Universal enveloping algebras for \(\mathbf{Alt}_{K}\) Let \(K^{n}\) denote the free \(K\)-module of rank \(n\) regarded as an alternative \(K\)-algebra with trivial product. Write \(l_{i}\) and \(r_{i}\), \(1\leq i\leq n\), for the images in \(U(K^{n})\) of the standard basis elements under \(l,r:K^{n}\to U(K^{n})\). This associative \(K\)-algebra is graded by weight. Clearly \(U(K^{0})=K\) and \(U(K^{1})\) has basis \(\{1,l_{1},r_{1},l_{1}r_{1}\}\); in fact \(l_{1}^{2}=r_{1}^{2}=l_{1}r_{1}-r_{1}l_{1}=0\). **Theorem 6.1**.: _If the field \(K\) is of characteristic not 3,_ \[\dim U(K^{n})_{k}=\begin{cases}\binom{2n}{k}&k\leq 2\\ 2\binom{n}{k}&k\geq 3\,.\end{cases}\] In particular, \(U(K^{n})_{k}=0\) for \(k>n\) as long as \(n>1\). The growth of \(\dim_{K}U(K^{n})\) with \(n\) is exponential; \[\dim_{K}U(K^{n})=2\cdot 2^{n}+n^{2}-1\,.\] Proof.: In fact we can make a more precise statement, specifying for each \(k\) a set of monomials in \(S=\{l_{1},r_{1},\ldots,l_{n},r_{n}\}\) forming a basis for \(U(K^{n})_{k}\). Order \(S\) as shown; order monomials first by weight and within a given weight left-lexicographically. The "leading monomial" in a polynomial will be the least term. In weight 0, we have only 1. In weight 1 we have \(S\). In weight 2 the basis consists of the elements \(xy\) where \(x,y\in S\) with \(x>y\). In weight \(k\geq 3\), the basis elements are \[r_{i(1)}r_{i(2)}\cdots r_{i(k-1)}r_{i(k)}\quad\text{and}\quad r_{i(1)}r_{i(2)} \cdots r_{i(k-1)}l_{i(k)}\] where \(i\) ranges over all strictly decreasing sequences of length \(k\). This follows from the theory of Grobner bases, together with a computation with Sage. We claim that the following list constitutes a reduced Grobner basis for \(U(K^{n})\), for any \(n\). We put leading monomials first. * weight 2, length 1: \(l_{i}l_{i},r_{i}r_{i}\) for all \(i\) * weight 2, length 2: \(l_{i}r_{i}-r_{i}l_{i}\) for all \(i\) * weight 2, length 2: \(r_{j}r_{i}+l_{i}l_{j}\) and \(l_{j}l_{i}+r_{i}r_{j}\) for \(i>j\) * weight 2, length 3: \(r_{j}l_{i}-l_{i}r_{j}-r_{i}r_{j}\) and \(l_{j}r_{i}-l_{i}r_{j}-r_{i}l_{j}\) for \(i>j\) * weight 3, length 1: \(r_{i}r_{j}l_{j}\), \(l_{i}r_{j}l_{j}\), \(r_{i}l_{i}r_{j}\), and \(r_{i}l_{i}l_{j}\) for \(i>j\) * weight 3, length 2: \(r_{i}l_{j}l_{k}-r_{i}r_{j}r_{k}\), \(l_{i}r_{j}r_{k}-r_{i}r_{j}l_{k}\), \(l_{i}l_{j}r_{k}-r_{i}r_{j}r_{k}\), and \(l_{i}l_{j}l_{k}-r_{i}r_{j}l_{k}\) for \(i>j>k\) * weight 3, length 3: \(r_{i}l_{j}r_{k}+r_{i}r_{j}l_{k}+r_{i}r_{j}r_{k}\) and \(l_{i}r_{j}l_{k}+r_{i}r_{j}l_{k}+r_{i}r_{j}r_{k}\) for \(i>j>k\) This is proved over a field of characteristic zero by a Sage computation for \(n\leq 5\). This computation actually suffices to prove the general case. To see this, notice that in this range the only new relations beyond the defining relations are of weight 3. Thus polynomials arising from overlap differences involving 4 or more generators do not result in any additions to the basis. Stated this way, the same calculation applies for all larger values of \(n\). Now one must verify that none of the given monomials is divisible by the leading monomial of any Grobner basis element. The weight 2 Grobner polynomials enforce the condition that the indices in a normal basis element are weakly decreasing, and strictly decreasing except that \(r_{i}l_{i}\) is allowed. This gives the basis in weights less than 3. The six weight three patterns exclude \(l\)'s in normal monomials of weight greater than or equal to 3, except possibly at the end. 
This argument suffices to verify the theorem if the characteristic is zero. For \(K\) of finite characteristic one must certify that the given basis serves as a Grobner basis over \(K\) as well. For this, one first executes the overlap procedure for pairs of weight 2 polynomial relations. The result differs from the list given by Sage. But one checks that each of the overlap polynomials is in fact an integral linear combination of the polynomials given by Sage, and conversely each of the Sage polynomials is a linear combination of the overlap polynomials. This latter step precisely requires 3 to be invertible, so it succeeds over any field of characteristic not 3. Here is some more detail. The generators of \[I=\ker(\operatorname{Tens}(K^{n}\oplus K^{n})\to U_{\mathbf{Alt}}(K^{n}))\] are given by the set \(R_{0}\) consisting of \[r_{i}r_{i}\,,\quad l_{i}r_{i}-r_{i}l_{i}\,,\quad l_{i}l_{i}\,, \quad\text{for }1\leq i\leq n\] \[r_{j}r_{i}+l_{i}l_{j}\,,\quad l_{j}l_{i}+r_{i}r_{j}\,,\quad\text{ for }1\leq j<i\leq n\] \[r_{j}l_{i}-l_{i}r_{j}-r_{i}r_{j}\,,\quad l_{j}r_{i}-l_{i}r_{j}-r_ {i}l_{j}\,,\quad\text{for }1\leq j<i\leq n\,.\] The overlap differences come in three types, depending on whether one, two, or three indices are involved. The overlaps with just one index are already divisible by \(LM(R_{0})\) since \(r_{i}r_{i}\) and \(l_{i}l_{i}\) are. For fixed \(j<i\), half of the sixteen cases are not needed for the same reason. Each of the remaining eight overlap polynomials reduces using \(LM(R_{0})\) to an element of the span of \[r_{i}r_{j}l_{j}\,,\quad r_{i}l_{i}r_{j}\,,\quad r_{i}l_{i}l_{j}\,,\quad l_{i}r _{j}l_{j}\,,\] and together they span this subspace, so we add these terms to the basis. For each sequence \(i>j>k\) between \(1\) and \(n\) we can consider overlaps of elements of \(R_{0}\) whose leading terms have indices given by \((k,j),(j,i)\). There are eight overlap patterns, indexed by the sequence of letters in the leading entries. Each overlap difference can be reduced using \(R_{0}\) to a linear combination of monomials with subscripts \((i,j,k)\). They are given by the following table of coefficients. \[\begin{array}{c|cccccccc}&rrr&rrl&rlr&rll&lrr&lrl&llr&lll\\ \hline rr,rr&1&1&1&-1&1&0&1&-1\\ rr,rl&0&0&1&1&0&1&1&2\\ rl,lr&1&0&1&0&2&1&1&0\\ rl,ll&1&0&1&0&1&1&1&1\\ lr,rr&1&0&1&-1&0&-1&0&0\\ lr,rl&0&1&1&2&0&1&0&1\\ ll,lr&1&1&1&1&1&0&0\\ ll,ll&-1&1&0&1&-1&1&1&1\end{array}\] Sage suggests that the rational vector space spanned by these eight vectors should have basis given by the six vectors with coefficients given by the table \[\begin{array}{cccccccc}rrr&rrl&rlr&rll&lrr&lrl&llr&lll\\ \hline 0&-1&0&0&0&0&0&1\\ -1&0&0&0&0&0&1&0\\ 1&1&0&0&0&1&0&0\\ 0&-1&0&0&1&0&0&0\\ -1&0&0&1&0&0&0&0\\ 1&1&1&0&0&0&0&0\end{array}\] It is easily checked that in fact all eight vectors are integral linear combinations of these six. Now the row rank of the first matrix is \(6\) in characteristic not \(3\), but only \(5\) in characteristic \(3\), so the second six vectors can be expressed as linear combinations of the first eight except in characteristic \(3\), where they cannot be so expressed. This accounts for the divergence of characteristic \(3\). But in characteristic not \(3\), one now adjoins these six vectors for each \(i>j>k\) to obtain a larger basis \(R_{1}\). Then one checks that all new overlap differences reduce to \(0\) using \(R_{1}\), so we have obtained the Grobner basis as described. \(\Box\) 
**Theorem 6.2**.: _If the characteristic of \(K\) is 3,_ \[\dim U(K^{n})_{k}=\begin{cases}\binom{2n}{k}&k\leq 2\\ 4\binom{n}{3}&k=3\\ 2\binom{n}{k}&k\geq 4\,.\end{cases}\] Proof.: The proof is the same, with the following change. The normal monomials of weight not 3 are as before, but in weight 3 we have twice as many: \[r_{i}r_{j}r_{k},\,r_{i}r_{j}l_{k},\,r_{i}l_{j}r_{k},\,r_{i}l_{j}l_{k}\,,\quad i >j>k\,.\] This follows from the fact that a reduced Grobner basis is given by the following set. * weight 2, length 1: \(l_{i}l_{i}\), \(r_{i}r_{i}\) for all \(i\) * weight 2, length 2: \(l_{i}r_{i}-r_{i}l_{i}\) for all \(i\) * weight 2, length 2: \(r_{j}r_{i}+l_{i}l_{j}\) and \(l_{j}l_{i}+r_{i}r_{j}\) for \(i>j\) * weight 2, length 3: \(r_{j}l_{i}-l_{i}r_{j}-r_{i}r_{j}\) and \(l_{j}r_{i}-l_{i}r_{j}-r_{i}l_{j}\) for \(i>j\) * weight 3, length 1: \(r_{i}r_{j}l_{j}\), \(l_{i}r_{j}l_{j}\), \(r_{i}l_{i}r_{j}\), and \(r_{i}l_{i}l_{j}\) for \(i>j\) * weight 3, length 4: \(l_{i}r_{j}r_{k}-r_{i}l_{j}l_{k}-r_{i}l_{j}r_{k}+r_{i}r_{j}l_{k}\) and \(l_{i}r_{j}l_{k}+r_{i}l_{j}l_{k}-r_{i}l_{j}r_{k}-r_{i}r_{j}r_{k}\) for \(i>j>k\) * weight 3, length 5: \(l_{i}l_{j}r_{k}+r_{i}l_{j}l_{k}+r_{i}l_{j}r_{k}+r_{i}r_{j}l_{k}-r_{i}r_{j}r_{k}\) and \(l_{i}l_{j}l_{k}+r_{i}l_{j}l_{k}-r_{i}l_{j}r_{k}+r_{i}r_{j}l_{k}+r_{i}r_{j}r_{k}\) for \(i>j>k\) * weight 4, length 2: \(r_{i}r_{j}l_{k}l_{m}-r_{i}r_{j}r_{k}r_{m}\) for \(i>j>k>m\) * weight 4, length 3: \(r_{i}r_{j}l_{k}r_{m}+r_{i}r_{j}r_{k}l_{m}+r_{i}r_{j}r_{k}r_{m}\) for \(i>j>k>m\) Sage verifies this table for \(n\leq 7\), and the general result follows as before. Thus if \(K\) is a field of characteristic 3, \[\dim U(K^{n})=2\cdot 2^{n}+\frac{n^{3}+2n}{3}-1\,.\] If \(A\) is an alternative algebra over \(\mathbb{Z}\), \[U(K\otimes A)=K\otimes U(A)\] for any commutative ring \(K\) by Proposition 5.1. Since each weight component of \(U(\mathbb{Z}^{n})\) is a finitely generated abelian group, we discover that \(U(\mathbb{Z}^{n})_{k}\) is torsion-free (of known rank) unless \(k=3\), and that \(U(\mathbb{Z}^{n})_{3}\) is a direct sum of \(2\binom{n}{3}\) infinite cyclic groups and the same number of nontrivial finite cyclic 3-groups. ## 7. Grobner bases We elaborate briefly on the Grobner process. Let \(S\) be a set equipped with a well-founded partial order: a partial order in which every strictly decreasing sequence is finite. The free monoid \(B\) generated by \(S\) inherits a partial order - first by weight, and left-lexicographically within a given weight - that is again well-founded. Let \(K\) be a field. The free \(K\)-module generated by \(B\) is the tensor algebra on \(KS\): \(T=KB\). Any nonzero element in \(T\) has a "leading monomial," the least monomial occurring with nonzero coefficient in its expression as a linear combination of elements of \(B\). Write \[LM:T^{*}\to B\] for this function, where we write \(I^{*}=I-\{0\}\) for any ideal \(I\) in \(T\). The partial order on \(B\) pulls back to a well-founded weak ordering on \(T^{*}\). For \(u,v\in B\), say \(u|v\) if there are monomials \(s,t\) such that \(v=sut\). Divisibility is transitive. **Definition 7.1**.: _Let \(I\) be an ideal in \(T\). A subset \(G\) of \(I^{*}\) is a Grobner basis for \(I\) if for all \(r\in I^{*}\) there is \(g\in G\) such that \(LM(g)|LM(r)\)._ A Grobner basis \(G\) for \(I\) yields an efficient algorithm for deciding whether \(z\in T^{*}\) lies in \(I^{*}\). 
We may assume \(z\) is monic; that is, the coefficient of \(LM(z)\) is \(1\). If \(LM(z)\) is not divisible by \(LM(g)\) for any \(g\in G\), then \(z\not\in I^{*}\). If instead \[LM(z)=sLM(g)t\] for some \(g\in G\) and \(s,t\in B\), then form \[z^{\prime}=z-sgt\,.\] This is in \(I\) if and only if \(z\) is. If \(z^{\prime}=0\), we have established that \(z\in I\). If not, at least we can say that the leading monomials cancel, so \(z^{\prime}\) is strictly less than \(z\) in the well-founded order on \(T^{*}\). Divide \(z^{\prime}\) by the coefficient of its leading monomial, and repeat this process, which terminates because the ordering is well-founded. The original element \(z\) is in \(I\) if and only if the element you wind up with is \(0\), in which case you have written \(z\) as an explicit linear combination of terms divisible by elements of \(G\). This shows that \(G\) generates \(I\) as an ideal. Given a subset \(R\) of \(I\) that generates \(I\) as an ideal, we attempt to enlarge it to a Grobner basis for \(I\). We may assume that each element of \(R\) is monic. Say that \(r,s\in R\) _overlap_ if there are \(a,b,c\in B\) such that \(LM(r)=ab\) and \(LM(s)=bc\). For each overlapping pair \(r,s\), form the "overlap difference" \[rc-as\,.\] This difference again lies in \(I\), but exhibits a new leading monomial, one that was hidden in the original generating set \(R\). Now use the reduction process with respect to \(R\) to simplify each of the overlap differences. Then adjoin all the nonzero reduced overlap differences, made monic, to the set \(R\) to get a new generating set \(R^{\prime}\). One may want to precede this step by doing some linear algebra to find a simpler basis for the space spanned by these polynomials. **Proposition 7.2** ([13], Prop. 5.2, p. 95).: _If \(R\) is finite, this process terminates, and the result is a Grobner basis for \(I\)._ A Grobner basis is minimal (no subset is a Grobner basis for the same ideal) if and only if it is reduced (no divisibility relations among its leading monomials) [13, p. 67]. One may always refine a Grobner basis to a reduced one. A monomial \(u\) is _normal_ mod \(I\) if it is not divisible by any element of \(LM(I^{*})\). If \(G\) is a Grobner basis for \(I\), it suffices to check non-divisibility by the leading monomials of elements of \(G\): Suppose that \(r\in I^{*}\) is such that \(LM(r)|u\), and let \(g\in G\) be such that \(LM(g)|LM(r)\): then \(LM(g)|u\). **Proposition 7.3** ([13], Prop. 3.3, p. 70).: _The set of monomials that are normal mod \(I\) projects to a vector space basis for \(T/I\)._
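To make the division procedure concrete, here is a minimal Python sketch of the reduction step (our own illustration of the algorithm just described, under the paper's conventions that monomials are ordered first by weight and then left-lexicographically, and that the leading monomial of a polynomial is its least term; it is not the authors' Sage code). Monomials are modelled as tuples of generator indices, and polynomials as dictionaries from monomials to rational coefficients.

```python
from fractions import Fraction

def lm(p):
    # Leading monomial: the *least* term, ordered by weight then left-lex.
    return min(p, key=lambda m: (len(m), m))

def find_divisor(u, v):
    """Return (s, t) with v = s + u + t if the monomial u divides v, else None."""
    for i in range(len(v) - len(u) + 1):
        if v[i:i + len(u)] == u:
            return v[:i], v[i + len(u):]
    return None

def reduce_mod(z, G):
    """Reduce z (dict: monomial -> coefficient) modulo the Grobner basis G.
    Returns 0 exactly when z lies in the ideal generated by G."""
    z = {m: Fraction(c) for m, c in z.items() if c}
    while z:
        u = lm(z)
        for g in G:
            div = find_divisor(lm(g), u)
            if div is not None:
                s, t = div
                coef = z[u] / g[lm(g)]
                for m, c in g.items():        # z <- z - coef * (s g t)
                    w = s + m + t
                    z[w] = z.get(w, Fraction(0)) - coef * c
                    if not z[w]:
                        del z[w]
                break
        else:
            return z    # LM(z) is normal, so z does not lie in the ideal
    return 0

# Toy example: generators encoded as l1 = 0, r1 = 1, with the single
# relation l1*l1; then 2*(l1*l1*r1) reduces to 0, certifying membership.
G = [{(0, 0): 1}]
print(reduce_mod({(0, 0, 1): 2}, G))   # prints 0
```

The same loop, applied to the overlap differences described above, is essentially what a computer algebra system automates at scale.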
2309.09797
A Read Margin Enhancement Circuit with Dynamic Bias Optimization for MRAM
This brief introduces a read bias circuit to improve readout yield of magnetic random access memories (MRAMs). A dynamic bias optimization (DBO) circuit is proposed to enable the real-time tracking of the optimal read voltage across processvoltage-temperature (PVT) variations within an MRAM array. It optimizes read performance by adjusting the read bias voltage dynamically for maximum sensing margin. Simulation results on a 28-nm 1Mb MRAM macro show that the tracking accuracy of the proposed DBO circuit remains above 90% even when the optimal sensing voltage varies up to 50%. Such dynamic tracking strategy further results in up to two orders of magnitude reduction in the bit error rate with respect to different variations, highlighting its effectiveness in enhancing MRAM performance and reliability.
Renhe Chen, Albert Lee, Zirui Wang, Di Wu, Xufeng Kou
2023-09-18T14:16:42Z
http://arxiv.org/abs/2309.09797v1
# A Read Margin Enhancement Circuit with Dynamic Bias Optimization for MRAM ###### Abstract This brief introduces a read bias circuit to improve readout yield of magnetic random access memories (MRAMs). A dynamic bias optimization (DBO) circuit is proposed to enable the real-time tracking of the optimal read voltage across process-voltage-temperature (PVT) variations within an MRAM array. It optimizes read performance by adjusting the read bias voltage dynamically for maximum sensing margin. Simulation results on a 28-nm 1Mb MRAM macro show that the tracking accuracy of the proposed DBO circuit remains above 90% even when the optimal sensing voltage varies up to 50%. Such a dynamic tracking strategy further results in up to two orders of magnitude reduction in the bit error rate with respect to different variations, highlighting its effectiveness in enhancing MRAM performance and reliability. Magnetic random access memory, read margin enhancement, tunneling magnetoresistance ratio, readout mechanism, bias voltage optimization ## I Introduction Magnetic random access memory (MRAM) has emerged as a compelling candidate for the next generation non-volatile memory, which offers fast operating speed (\(<\)10 ns), low energy consumption (\(<\)1 pJ/bit), and long endurance (\(>\)10\({}^{9}\)) [1, 2]. So far, different programming mechanisms, such as spin-transfer torque (STT), spin-orbit torque (SOT), and voltage-controlled magnetic anisotropy (VCMA) effects, have been proposed to enhance the energy efficiency of data storage [3, 4, 5]. On the other hand, the read operation of MRAM relies on the tunneling magnetoresistance effect, where the resistance of a magnetic tunnel junction (MTJ) depends on the relative magnetic orientations between the fixed layer and the free layer, as shown in Fig. 1(a). In this context, the data stored in an MRAM cell can be read by comparing the resistance of the bit-cell against a reference via a sense amplifier (SA) (Fig. 1(b)). A major challenge for the MRAM read operation is the small tunneling magnetoresistance ratio (TMR), which characterizes the resistance difference between the parallel (P) and anti-parallel (AP) states. This, along with process-voltage-temperature (PVT) variations, results in narrow sensing margins which are inevitably vulnerable to thermal noise and circuit mismatch-induced offset, therefore causing read errors. To address such an issue, calibration modules or offset compensation circuits have been introduced into MRAM sensing circuitry, yet such approaches incur additional complexity, area overhead, speed penalties, and power consumption [6, 7]. Meanwhile, for read-disturbance-free MRAMs like SOT and VCMA, the sensing margin can be enlarged by raising the readout voltage. However, the TMR ratio normally displays a negative correlation with the bias voltage [8], and it also changes under temperature and process variations [9, 10, 11]. Therefore, a higher read bias voltage does not necessarily guarantee a higher sensing margin, and a system capable of dynamically tracking the optimal read bias voltage to maximize the sensing margin hence becomes essential. In this brief, we utilize a dynamic bias optimizer (DBO) to enhance the MRAM readout margin. The remainder of this brief is organized as follows: Section II quantifies the theoretical optimal read bias voltage. Section III provides a detailed explanation of the schematic and operating principle of the DBO circuit. 
Section IV presents the simulated transient response, tracking accuracy and bit error rate improvement of the DBO-integrated 1Mb MRAM macro. Finally, Section V concludes this brief. ## II Optimal Bias Voltage for Maximum Margin In general, the TMR ratio of an MRAM device at a given temperature can be modeled as [12] \[TMR(V_{REF})=\frac{TMR(0)}{1+\frac{V_{REF}^{2}}{V_{h}^{2}}} \tag{1}\] where \(TMR(0)\) represents the TMR ratio under zero bias, \(V_{h}\) is the characteristic voltage at which the TMR ratio drops to half of \(TMR(0)\), and \(V_{REF}\) is the read bias voltage. As illustrated in Fig. 1(b), during the MRAM read operation, both the selected bit-cell and the reference are biased at \(V_{REF}\), which generate \(I_{CELL}\) and \(I_{REF}\) for the sense amplifier, respectively. The sensing margin (\(I_{M}\)) is defined as the minimum current difference between the cell current and the reference current, that is \(I_{M}=\min\left(I_{P}-I_{REF},I_{REF}-I_{AP}\right)\). Fig. 1: (a) Parallel and anti-parallel states of MTJ. (b) Schematic of current based sensing in MRAM. \(I_{REF}\) is usually generated by the reference circuit depicted in Fig. 2(a), where the summation of \(I_{P}\) and \(I_{AP}\) is extracted using two parallel connected MTJ cells in the P and AP states, respectively. Afterwards, the current is scaled by a 2:1 current mirror to generate \(\left(I_{P}+I_{AP}\right)/2\) as the reference current. In this regard, the sensing margin becomes identical for both read 1 and read 0 operations, as expressed by \[\begin{split} I_{M}&=\frac{I_{P}+I_{AP}}{2}-I_{AP}=I _{P}-\frac{I_{P}+I_{AP}}{2}\\ &=\frac{TMR(0)}{2R_{P}}\,\frac{1}{\frac{1+TMR(0)}{V_{REF}}+\frac{V_{REF}}{V_{h} ^{2}}}\end{split} \tag{2}\] Accordingly, the read bias voltage dependent \(I_{M}\) curve is shown in Fig. 2(b). Instead of a monotonically increasing correlation, the \(I_{M}-V_{REF}\) curve reaches its maximum value at a certain voltage \(V_{OPT}\); above such an optimal point, any higher bias voltage would lead to a reduced sensing margin instead. Mathematically, \(V_{OPT}\) can be derived by taking \(\frac{\partial I_{M}}{\partial V_{REF}}=0\) in (2) as \[V_{OPT}=\sqrt{1+TMR(0)}\,V_{h} \tag{3}\] Guided by (3), we can observe that the optimal bias point in reference to the largest sensing margin is not a constant. Instead, \(V_{OPT}\) is positively correlated with both \(TMR(0)\) and \(V_{h}\). As summarized in Fig. 3(a), the increase of \(TMR(0)\) from 60% to 140% can result in a 30% increase in \(V_{OPT}\). Concurrently, by appropriately adjusting the characteristic voltage \(V_{h}\) from 0.15 V to 0.35 V, the optimal bias point is shifted by 100% (Fig. 3(b)). It is noted that the fabrication process of the MTJ stack invariably causes discrepancies of \(V_{h}\) and \(TMR(0)\) among different memory blocks across the die, and such variations would deteriorate as the MTJ size is scaled down. Therefore, it is important to precisely track the local optimal bias point for individual memory blocks. Moreover, both ambient thermal conditions and circuit heat dissipation would affect the operating temperature of the MRAM array, which also leads to variations of \(TMR(0)\) and \(V_{h}\). For instance, Fig. 4(a) shows that both the \(TMR(0)\) and \(V_{h}\) values decrease by 30% when the MTJ bit-cell is heated from room temperature to 400 K, which in turn reduces \(V_{OPT}\) by 50% (Fig. 4(b)). Under such circumstances, real-time \(V_{OPT}\) tracking is essential for high density, high speed MRAM applications. Fig. 2: (a) The schematic of an MRAM current reference. (b) Read bias voltage-dependent sensing current margin. Fig. 3: Sensing margin and optimal bias voltage shift at (a) different \(TMR(0)\) amplitudes under a given \(V_{h}\) and (b) varied \(V_{h}\) for a fixed \(TMR(0)\) value. Fig. 4: (a) Temperature-dependent \(TMR(0)\) and \(V_{h}\) of a typical MTJ bit-cell. Data extracted from [13]. (b) Read bias voltage dependent sensing margin curves at different operation temperatures. 
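Before turning to the circuit itself, the optimum predicted by (3) is easy to check numerically. The short script below is an illustrative sketch: the \(TMR(0)\), \(V_{h}\), and \(R_{P}\) values are assumptions picked from the ranges discussed above, not data from the cited references. It sweeps the read bias, evaluates the sensing margin of (2), and compares the location of its peak with the analytic \(V_{OPT}\) of (3).

```python
import numpy as np

# Assumed device parameters (illustrative only): zero-bias TMR ratio,
# characteristic voltage V_h in volts, and parallel-state resistance in ohms.
TMR0, Vh, Rp = 1.0, 0.25, 5e3

def sensing_margin(vref):
    # I_M(V_REF) per Eq. (2)
    return TMR0 / (2 * Rp) / ((1 + TMR0) / vref + vref / Vh**2)

v = np.linspace(0.01, 1.0, 10_000)
v_num = v[np.argmax(sensing_margin(v))]   # location of the numerical peak
v_opt = np.sqrt(1 + TMR0) * Vh            # Eq. (3)
print(f"numerical optimum {v_num:.3f} V vs analytic V_OPT {v_opt:.3f} V")
```

With these numbers both values agree at about 0.35 V, and, consistent with Fig. 2(b), the margin falls off on either side of the optimum.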
## III Dynamic Bias Optimizer Circuit Design The schematic of the proposed DBO circuit for MRAM read operation is illustrated in Fig. 5. In general, the DBO generates the reference voltage \(V_{REF}\) to approximate \(V_{OPT}\) with a feedback system, which consists of three main blocks: 1. a margin extraction circuit that extracts the sensing margin \(I_{M}\) and converts it to a voltage margin \(V_{M}\), 2. a control logic block that periodically samples \(V_{M}\) to determine the tuning direction of \(V_{REF}\), and 3. a charge pump module that feeds back the updated \(V_{REF}\) value to the margin extraction circuit as the reference voltage in the next cycle. Specifically, the margin extraction circuit is composed of two active clamp circuits, current mirrors, a trans-impedance amplifier (TIA), and a source follower. To accurately replicate the cell current, MC1/A1 and MC2/A2 regulate the voltage on the reference MTJs to \(V_{REF}\), ensuring the current on each path is \(I_{P}=V_{REF}/R_{P}\) and \(I_{AP}=V_{REF}/R_{AP}\), respectively. Meanwhile, the current mirror M1/M2 subtracts \(I_{AP}\) from \(I_{P}\) to obtain the sensing current margin \(I_{M}\). Subsequently, \(I_{M}\) is converted to \(V_{M}\) through the TIA M3/M4/R\({}_{REF}\), following \(V_{M}=R_{REF}\times W_{4}/W_{3}\times I_{M}\). Finally, a source follower isolates the TIA from the control circuits, which not only protects the margin extraction circuit from kickback, but also provides higher driving strength. Next, the control logic block comprises a sample/hold circuit, a comparator, a toggle flip-flop, a latch, and combinational logic. The output voltage \(V_{M}\) from the margin extraction circuit is periodically sampled and stored as \(V_{S}\). The comparator compares \(V_{S}\) and \(V_{M}\) to detect the change of \(V_{M}\), and the FLIP signal determines whether the tuning direction of \(V_{REF}\) needs to be adjusted. Additionally, a COARSE flag, set by the FLIP signal, configures the incremental step of \(V_{REF}\). In this regard, based on the FLIP and COARSE signals, the control logic generates the UP_C, UP_F, and DN signals for the charge pump module. Finally, a charge pump module updates \(V_{REF}\) according to the output signals from the control logic block. In this stage, a bias circuit M13/M14 determines the minimum update step in \(V_{REF}\). The current source M7/M8, enabled by UP_F, provides a fine-tuned increase in \(V_{REF}\), while M9/M10, enabled by UP_C, provides a coarse increase of \(V_{REF}\) with a 20\(\times\) step because of the transistor width ratio between M10 and M8. Conversely, M11/M12, enabled by DN, pulls down \(V_{REF}\) with a 1\(\times\) step. Therefore, the combination of the coarse-fine paths significantly reduces the settling time during power-up and exit from sleep. Accompanying the DBO circuit implementation, the corresponding operation waveforms and real-time sensing margin tracking mechanism are visualized in Fig. 6. Initially, \(V_{REF}\) and all latches/flip-flops are reset to 0. 
Once the DBO is activated, M7/M11 are turned off, while M9 is switched on to enable fast slewing of \(V_{REF}\). In the meantime, the sample/hold circuit samples \(V_{M}\) to \(V_{S}\) at every falling edge of CK\({}_{\text{S}}\). At the end of each hold phase, the comparator, triggered by CK\({}_{\text{C}}\), determines the output FLIP state. In particular, as long as \(V_{REF}\) approaches \(V_{OPT}\), the sensing margin increases (i.e., \(V_{M}>V_{\text{S}}\)), and the FLIP signal remains 0 (i.e., indicating an unchanged charging direction). On the other hand, when the read bias \(V_{REF}\) crosses \(V_{OPT}\), the sensing margin decreases as \(V_{REF}\) deviates from the optimal bias point, as illustrated in Fig. 2(b). As a result, once the current \(V_{M}\) becomes lower than \(V_{S}\), the FLIP signal is triggered to toggle the charging direction of the charge pump module for the next sampling cycle, and the circuit converges to the optimal \(V_{REF}=V_{OPT}\) with a periodic toggling of the FLIP signal in the steady state, thereby realizing real-time tracking of \(V_{REF}\) against the \(V_{OPT}\) baseline, as shown in Fig. 6(b). It is noted that the first FLIP signal after initialization sets the COARSE flag to 0, hence automatically driving the charge pump stage into the fine-tuning phase to provide a more refined tracking of \(V_{OPT}\).

Fig. 6: Operation waveforms of the DBO circuit, from initialization to the steady state, with \(V_{REF}\) crossing \(V_{OPT}\) (marked with an arrow) and FLIP events (highlighted with a red background).

## IV 1Mb MRAM Simulation

Following the aforementioned read margin optimization method, a 1Mb MRAM macro with the real-time dynamic bias adjustment function was designed, as depicted in Fig. 7. The macro is organized into 64 blocks, each of which consists of a 512\(\times\)34 array with 32 data bit-lines (BL) and 2 reference BLs. Row and column drivers are placed adjacent to the MRAM bit-cell array and generate the WL and YSEL signals for bit-cell selection. To track block-level \(V_{OPT}\) variation, a DBO module is embedded in each block next to the sense amplifier (SA). It is also seen from Fig. 7 that the DBOs share cells with the reference columns in the block so as to save layout area. During initialization or exit from sleep, the DBOs help stabilize \(V_{REF}\) towards \(V_{OPT}\). After stabilization, the DBOs periodically monitor the evolution of \(V_{OPT}\) due to temperature-induced \(TMR(0)\) and \(V_{h}\) drifts and make dynamic adjustments to \(V_{REF}\) accordingly. Besides, each DBO unit is activated individually with a 1/64 duty cycle in steady state to reduce the power overhead. The performance of the DBO-embedded 1Mb MRAM was evaluated using a 28-nm CMOS process design kit (PDK), and a physical MTJ device model in Verilog-A was used to capture the key parameters from reported measurement results (Table I) [14]. In addition, we also included the temperature-dependent \(TMR(0)\) and \(V_{h}\) data adopted from [13] to warrant the reliability of the high-temperature simulations. Fig. 8 presents the post-layout simulation results of the designed DBO circuit at room temperature.
During the simulation, the sampling rate is set at 5 MHz, and the coarse and fine voltage steps are 80 mV and 4 mV per cycle, respectively. It can be clearly observed from Fig. 8(c) that the \(V_{REF}\) curve slews swiftly under the coarse tuning mode during initialization. Afterwards, the control logic promptly responds with FLIP signal pulses whenever \(V_{REF}\) crosses \(V_{OPT}\). Eventually, the system converges to (100%\(\pm\)2%)\(V_{OPT}\) within 2 \(\upmu\)s, or 10 cycles, with a \(\pm\)10 mV ripple in steady state. Apart from validating the real-time \(V_{OPT}\) tracking function, the DBO tracking capability under different MTJ electrical parameters and thermal conditions is also evaluated. As illustrated in Fig. 9(a), within the examined range where \(TMR(0)\) and \(V_{h}\) vary from 60% to 120% and from 0.2 V to 0.35 V, respectively, the DBO-adjusted bias voltage \(V_{REF}\) manages to follow the evolution of \(V_{OPT}\) with a tracking accuracy above 98%, justifying a wide functional range. Likewise, our DBO also exhibits highly consistent performance over a wide temperature range from -40 \({}^{\circ}\)C to 125 \({}^{\circ}\)C, and the \(V_{REF}\) value only slightly deviates from the ideal \(V_{OPT}\) baseline at extreme high and low temperatures, possibly due to headroom compression of the current mirrors (M1-M4) and a reduced sensitivity of the comparator in the control logic stage, as shown in Fig. 9(b). Furthermore, by introducing the temperature variation of \(TMR(0)\) and \(V_{h}\) during simulation, the transient \(V_{REF}\) waveform is able to respond to a rapid temperature drift rate of 98 \({}^{\circ}\)C/ms (Fig. 10(a)), and the readout sensing margin improves by 20% compared to that of the MRAM block without DBO, as highlighted in Fig. 10(b). Consequently, the above results evince the advantage of the DBO in optimizing the read performance of the MRAM array against process and thermal variations. Finally, Fig. 11 examines the bit error rate of the 1Mb MRAM macro with (blue circles) and without (red squares) the DBO module by taking into account different degrees of variation-to-mean ratios (\(\sigma/\mu\)) in \(V_{h}\) and \(TMR(0)\). Strikingly, the simulation results at both temperatures consistently confirm that the DBO helps to reduce the bit error rate by 1 to 2 orders of magnitude. From Table II, it can be inferred that, under the same physical parameters, the MRAM array with DBO achieves a consistently lower bit error rate.

Fig. 7: High-level architecture of the DBO-enhanced MRAM readout path.

Fig. 8: Simulation waveforms of the proposed DBO, showing (a) critical control signals, (b) \(V_{M}\), \(V_{S}\), and (c) output \(V_{REF}\).

Fig. 9: DBO tracking accuracy under different (a) \(TMR(0)\) and \(V_{h}\) of the MTJ, and (b) thermal conditions.
2309.09783
The ParlaSent Multilingual Training Dataset for Sentiment Identification in Parliamentary Proceedings
The paper presents a new training dataset of sentences in 7 languages, manually annotated for sentiment, which are used in a series of experiments focused on training a robust sentiment identifier for parliamentary proceedings. The paper additionally introduces the first domain-specific multilingual transformer language model for political science applications, which was additionally pre-trained on 1.72 billion words from parliamentary proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training on parliamentary data can significantly improve the model downstream performance, in our case, sentiment identification in parliamentary proceedings. We further show that our multilingual model performs very well on languages not seen during fine-tuning, and that additional fine-tuning data from other languages significantly improves the target parliament's results. The paper makes an important contribution to multiple disciplines inside the social sciences, and bridges them with computer science and computational linguistics. Lastly, the resulting fine-tuned language model sets up a more robust approach to sentiment analysis of political texts across languages, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.
Michal Mochtak, Peter Rupnik, Nikola Ljubešić
2023-09-18T14:01:06Z
http://arxiv.org/abs/2309.09783v2
The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings

###### Abstract

Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications, additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of an LLM on parliamentary data can significantly improve the model's downstream performance on domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament's results. The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.

## 1 Introduction

Emotions and sentiment in political discourse are deemed as crucial and influential as the substantive policies promoted by elected representatives [1]. Since the golden era of research on propaganda [2, 3], several scholars have demonstrated the growing role of emotions in affective polarization in politics, with negative consequences for the stability of democratic institutions and social cohesion [4, 5, 6]. With the booming popularity of online media, sentiment analysis has become an indispensable tool for understanding the positions of viewers, customers, and voters [7]. It has allowed all sorts of entrepreneurs to know their target audience like never before [8]. Experts on political communication argue that the way we receive and process information plays an important role in political decision-making, shaping our judgment with strategic consequences at the level of both legislators and the masses [9]. Emotions and sentiment simply do play an essential role in political arenas, and politicians have been (ab)using them for decades. Although there is a general agreement among political scientists that sentiment analysis represents a critical component for understanding political communication in general [1, 10, 11], empirical applications outside the English-speaking world are still rare [12, 13]. Moreover, many of the research applications in the social sciences lag behind the latest methodological advancements grounded in computational linguistics. This is especially the case for studies analyzing political discourse in low-resourced languages, where the lack of out-of-the-box tools creates a huge barrier for social scientists to do such research in the first place [14, 15, 12].
As a result, many of the applications still rely on dictionary-based methods, which tend to produce potentially skewed results [16, 14], or approximate sentiment scores based on position-taking stances with a relatively high level of generalization (e.g. roll-calls or voting behavior [17]). Field-specific sentiment classifiers trained using machine learning algorithms are comparatively rare. Part of the reason is the fact that training machine learning models can be prohibitively expensive, especially when it comes to collecting, cleaning, and processing training data. However, recent developments in the fields of computational linguistics and natural language processing fueled by transformer-based deep learning models have lowered the bar for social scientists substantially. This development has additionally allowed existing language models to be adapted to a specific domain by additionally pre-training the language model on non-annotated domain data [18].

The paper presents annotated sentence-level datasets in five European (macro-)languages (Czech, English, Serbo-Croatian, Slovak, and Slovene) sampled from the parliamentary proceedings of seven European countries (Bosnia-Herzegovina, Croatia, Czech Republic, Serbia, Slovakia, Slovenia, and the United Kingdom). The selection of proceedings is driven by the existing gap in low-resourced languages of Central and Eastern Europe and their relevance in a broader comparative perspective. The human-annotated datasets are used in a series of experiments focused on training sentiment classifiers intended for detecting sentiment in political discourse. Apart from the methodological and experimental goals the paper has, it can also be read as summary guidelines for social scientists interested in training their own sentiment classifiers with a similar scope. The paper is written with the intention of facilitating the process of collecting, cleaning, and processing textual data for political science research, with realistic estimates of the resources needed. When it comes to substantive findings, the paper shows that 1) additional pre-training of a language model on raw parliamentary data can significantly improve the model's performance on the task of sentiment identification; 2) large 561-million-parameter multilingual models perform drastically better than those with half the parameters; 3) multilingual models also work very well on unseen languages; and finally 4) even when language-specific training data exist for the parliamentary proceedings one wants to process, a multilingual model trained on four times that amount of data from other languages improves the results on the target parliament significantly.

## 2 Related work

Although the boom of computational methods in recent years has shown new ways to perform sentiment analysis with relatively high accuracy, political science is catching up relatively slowly. Abercrombie and Batista-Navarro [19] found that most of the automated applications focused on parliamentary debates and position-taking exist outside of the mainstream of political science, which is both a surprise and an opportunity for future research. Addressing this gap reflects the needs of empirical political science research, which recognizes that people tend to interact with politics through emotions [20]. Recent research has found that political leaders are keen to use violent and populist rhetoric to connect with citizens on an emotional level [21, 22, 20].
As an effective campaigning strategy, populist parties in Europe use significantly more negative framing than their less populist counterparts, simply because negative emotions work [23]. They are often abused as highly conflicting drivers leading to affective polarization [24, 25], negative partisanship [26], group-based partisan competition [27], and political sectarianism [28]. When combined with long-run emotional effects on the electorate, the impacts are disastrous. Partisan dehumanization, partisan antipathy, and acceptance of partisan violence are just a few examples of morphed competition infused with an emotionally laden identity fueling hostility, bias, and anger [29]. Understanding these mechanisms is highly important.

The NLP literature, the research area where sentiment analysis is discussed the most, uses the term "sentiment analysis" interchangeably with "opinion mining" [30]. In both cases, the term refers to the task of determining the document's sentiment polarity (e.g. categorical labels; sentiment ratios) or, alternatively, the more general task of identifying emotional and attitudinal 'private states' (e.g. opinion, appraisal, attitude) [31, 32]. Hopkins and King [33] and Monroe et al. [34] argue that there is a substantial difference between the goals of empirical research conducted by computer scientists on the one hand and political scientists on the other. While computer scientists focus more on extracting latent characteristics of individual documents, political scientists are more interested in characterizing collections of documents as a whole. These differences are, however, rather empirical than substantial and are based on the preferences of those conducting the research. There is no reason why these approaches cannot be combined or swapped [33, 14, 35]. This paper does exactly that.

When it comes to actual applications focused on the political domain, sentiment analysis can most often be found in research focused on classification [17, 36, 37, 38, 39] and dictionary-based sentiment extraction [40, 41, 42, 43, 14]. The first stream of research uses different ML algorithms to develop models able to "classify" textual data into predefined categories (classes). These categories are either generated in an automated way based on known polarity traces, such as yes/no votes assigned to MPs' speech acts [44, 17], or are produced using traditional manual annotation with ground-truth labels [41, 12]. A majority of these applications fall under the umbrella of supervised learning, using a wide range of algorithms, from logistic regression to naive Bayes, decision trees, nearest neighbor, or boosting. In recent years, many applications in computer science have tilted significantly towards strategies using neural networks, ranging from 'vanilla' feed-forward networks to more complex architectures such as transformers pre-trained on raw non-annotated data [45]. In political science, however, dictionary-based approaches still dominate. They traditionally focus on counting words with known sentiment affinity in raw text and generalizing their frequencies over the unit of analysis. Although sentiment dictionaries are deemed less accurate and may produce relatively crude estimates, their usage in the political and social sciences is quite popular [42, 46, 19, 14]. We see that as an opportunity for substantial improvement.
The following sections present a new dataset for training a domain-specific sentiment classifier, which builds on a first-of-its-kind domain-specific large language model, additionally pre-trained on 1.72 billion domain-specific words from the proceedings of 27 European parliaments. In a series of experiments, we then demonstrate how robust the approach is in various settings, proving its reliability in real-life applications.

## 3 Dataset construction

### Focus on sentences

The datasets we compile and then use for training different classification models focus on sentence-level data and utilize a sentence-centric approach for capturing sentiment polarity in text. The focus on sentences as the basic level of analysis goes against mainstream research strategies in the social sciences, which prefer either longer pieces of text (e.g. utterances or 'speech segments', or whole documents [38, 47]) or generally more coherent messages of a shorter nature [11, 10]. However, this approach creates limitations when it comes to political debates in national parliaments, where speeches range from very short comments counting only a handful of sentences to long monologues with thousands of words. Moreover, as a longer text may contain a multitude of sentiments, any annotation attempt must generalize them, introducing a complex coder bias that is embedded in any subsequent analysis. The sentence-centric approach attempts to refocus attention on individual sentences capturing attitudes, emotions, and sentiment positions, and to use them as lower-level indices of sentiment polarity in a more complex political narrative. Although sentences cannot capture complex meanings as paragraphs or whole documents do, they usually carry coherent ideas with relevant sentiment affinity. This approach stems from a tradition of content analyses used by such projects as the Comparative Manifesto Project [48], the core-sentence approach in mobilization studies [49], or claims analysis in public policy research [50]. Unlike most of the literature, which approaches sentiment analysis in political discourse as a proxy for position-taking stances or as a scaling indicator [19, 51, 14], a general sentence-level classifier has a more holistic (and narrower) aim. Rather than focusing on a specific policy or issue area, the task is to assign the correct sentiment category to sentence-level data in political discourse with the highest possible accuracy. Once a well-performing model exists, knowing whether a sentence is positive, negative, or neutral provides a myriad of possibilities for understanding the context of political concepts as well as their role in political discourse. Furthermore, sentences as lower semantic units can be aggregated to the level of paragraphs or whole documents, which is often impossible the other way around (document \(\rightarrow\) sentences). Although sentences as the basic level of analysis are less frequent in political science research, existing applications include the validation of sentiment dictionaries [12], ethos mining [52], opinion mining [53], and the detection of sentiment-carrying sentences [41].

We base our experiments on data sampled from parliamentary proceedings, which provide representative and often exhaustive evidence of political discourse in the respective countries. In this context, parliamentary debates are considered a rich source of information on the position-taking strategies of politicians and one of the best sources of political discourse in general [54].
As democracy thrives through debate, tracing it becomes essential to understanding politics, its development, and its consequences. In this context, essentially all democratic parliaments hold a debate before a bill is passed. If public, the debate becomes evidence of how members of parliament represent their voters and constituencies as well as their personal beliefs and interests [55]. With all their flaws and shortcomings, parliamentary debates are an important aspect of political representation and an irreplaceable element of democratic systems. They connect voters with their representatives and show how responsive politicians are to people's wishes [56].

### Background data

In order to compile datasets capturing political discourse, manually annotate them, and then use them for training classification models for real-world applications, we sampled sentences from seven corpora of parliamentary proceedings - Bosnia and Herzegovina [57]1, Croatia [58]2, Serbia [59]3, Czechia [60]4, Slovakia [61]5, Slovenia [60]6, and the United Kingdom [60]7. The Bosnian corpus contains speeches collected at the federal level. Both chambers are included - the House of Representatives (Predstavnicki dom / Zastupnicki dom) and the House of Peoples (Dom naroda). The corpus covers the period from 1998 to 2018 and counts 127,713 speeches. The Czech corpus covers the period 2013-2021 and counts 154,460 speeches from the Chamber of Deputies, the lower house of the parliament (Poslanecka snemovna). The Croatian corpus of parliamentary debates covers debates in the Croatian parliament (Sabor) from 2003 to 2020 and counts 481,508 speeches. The Serbian corpus contains 321,103 speeches from the National Assembly of Serbia (Skupstina) over the period 1997 to 2020. The Slovenian corpus covers the period 2000-2022 and counts 311,354 speeches from the National Assembly (Drzavni zbor), the lower house of the parliament. The Slovak corpus contains speeches from the period 1994-2020 from the National Council of the Slovak Republic (Narodna rada) and counts 375,024 speeches. Finally, the corpus from the British Parliament covers speeches from both the House of Commons and the House of Lords from the period 2015-2021, counting 552,103 speeches.

Footnote 6: [https://www.clarin.si/repository/xmlui/handle/11356/1486](https://www.clarin.si/repository/xmlui/handle/11356/1486)

Footnote 7: [https://www.clarin.si/repository/xmlui/handle/11356/1486](https://www.clarin.si/repository/xmlui/handle/11356/1486)

### Data sampling

Speeches were segmented into sentences and words using the CLASSLA-Stanza [62, 63] and Trankit [64] pipelines, with tokenizers available for the Czech, Croatian, Serbian, Slovak, Slovene, and English languages. This step was necessary in order to extract individual sentences as the basic unit of our analysis. In the next step, we retained only sentences delivered by actual speakers, excluding moderators of the parliamentary sessions. Sentences are preserved within their country-defined pools, with the exception of Bosnia and Herzegovina, Croatia, and Serbia, which were merged together as representatives of one language family (i.e. the corpora were treated as one sampling pool). We are, from now on, referring to this pool as BCS (Bosnian-Croatian-Serbian)8. As we want to sample what can be understood as "average sentences", we further subset each sentence-based corpus to only sentences having a number of tokens between the first and third frequency quartile (i.e.
being within the interquartile range) of the original corpora. Having the set of "average sentences", we used common sentiment lexicons available for each of the languages [66, 67, 68] and applied them as seed words for sampling sentences for manual annotation. These seed words are used for stratified random sampling, which gives us 867 sentences with negative seed word(s), 867 sentences with positive seed word(s), and 866 sentences with neither positive nor negative seed words (supposedly having neutral sentiment) per sampling pool. We sample 2,600 sentences in total for manual annotation per corpus. Under this setting, the sentences inherit all the metadata of their parent documents.

Footnote 8: The main reason for keeping these three parliaments in one pool is our pilot work on annotating sentiment in parliamentary proceedings [65], which consisted of 2,600 instances coming from the three underlying parliaments.

We further sample two random datasets, one from the BCS collection of parliaments and another from the English parliament, not applying the sentiment seed list but rather aiming at a random selection of sentences, still satisfying the criterion of the sentence length falling into the interquartile range. The primary use case for these two datasets is testing various sentiment identification approaches; therefore, we wanted their sentiment distribution to follow the one occurring in the underlying parliaments. The sampling pipeline is identical to that of the seed-based datasets but without the seed-word component. For our experiments, we again sample 2,600 average-length sentences cleaned of entries from the proceedings' moderators (see above). From now on, we refer to these two datasets as the BCS and the English test sets.

### Annotation schema

The annotation schema for labelling sentence-level data was adopted from Batanovic et al. [69], who propose a six-item scale for the annotation of sentiment polarity in short texts. The schema was originally developed for and applied to SentiComments.SR, a corpus of movie comments in Serbian, and is particularly suitable for low-resourced languages. The annotation schema contains six sentiment labels [69, p. 6]:

* +1 (Positive in our dataset) for sentences that are entirely or predominantly positive
* -1 (Negative in our dataset) for sentences that are entirely or predominantly negative
* +M (M_Positive in our dataset) for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the positive sentiment in a strict binary classification
* -M (M_Negative in our dataset) for sentences that convey an ambiguous sentiment or a mixture of sentiments, but lean more towards the negative sentiment in a strict binary classification
* +NS (P_Neutral in our dataset) for sentences that only contain non-sentiment-related statements, but still lean more towards the positive sentiment in a strict binary classification
* -NS (N_Neutral in our dataset) for sentences that only contain non-sentiment-related statements, but still lean more towards the negative sentiment in a strict binary classification

The different naming convention we have applied in our dataset serves primarily practical purposes: obtaining the 3-way classification by taking into consideration only the second part of the string (if an underscore is present). Additionally, we also follow the original schema, which allowed marking text deemed sarcastic with the code "sarcasm".
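As an illustration of how this naming convention supports label reductions, the following minimal Python sketch collapses the six dataset labels into the 3-way scheme (splitting on the underscore, exactly as described above) and also maps them onto the 0-5 integer scale used for the regression experiments in Section 4; the particular negative-to-positive ordering of the integer mapping is our assumption.

```python
# Minimal sketch of the schema reductions: six labels -> 3-way labels
# via the part after the underscore, and six labels -> 0-5 scores for
# the regression experiments (the exact ordering below is assumed).
SIX_TO_SCORE = {
    "Negative": 0, "M_Negative": 1, "N_Neutral": 2,
    "P_Neutral": 3, "M_Positive": 4, "Positive": 5,
}

def to_three_way(label: str) -> str:
    """'M_Negative' -> 'Negative', 'P_Neutral' -> 'Neutral'."""
    return label.split("_")[-1] if "_" in label else label

assert to_three_way("M_Positive") == "Positive"
assert to_three_way("N_Neutral") == "Neutral"
assert SIX_TO_SCORE["P_Neutral"] == 3
```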
The benefit of the whole annotation logic is that it was designed with versatility in mind, allowing the sentiment label set to be reduced in subsequent processing if needed. This includes various reductions considering polarity categorization, subjective/objective categorization, a change in the number of categories, or sarcasm detection. This is important for the various empirical tests we perform in the following sections.

### Data annotation

Data were annotated in multiple waves. Each seed-based dataset was annotated by two annotators, both being native speakers of or having an excellent command of the annotated language. Annotation was done via a custom-built, locally hosted online app with consistent logging, allowing the annotation process to be monitored systematically. We used the _prodigy_ software with a custom recipe for that [70]. Each annotator underwent approximately ten hours of training before the actual annotation. The annotation was done iteratively in several rounds with automated oversight of the coding process in order to minimize any systematic disagreement. Trained annotators were able to annotate up to 75 sentences per hour on average, resulting in 35 man-hours per annotator per annotation round (feedback and reconciliation not included). When it comes to the BCS and English test sets, the data were annotated by only one highly trained annotator. Annotation of both test sets follows the same quality control procedures as discussed for the seed-based datasets. Despite a relatively smooth annotation process across the datasets, the inter-annotator agreement (IAA) measured using Krippendorff's alpha (KA) did not reach high values. This is consistent across datasets, supporting the argument that sentiment perception is a highly subjective matter [71, 72], and despite the effort to eliminate hard disagreements, the results often reflect a hard call the annotators had to make. Monitoring and reconciliation of the disagreements further showed that most of the disagreements are not substantially wrong and can be considered relevant under the provided context (i.e. short text snippets). A summary of Krippendorff's alpha and agreement rates (accuracy) across datasets and annotation schemas is presented in Table 1.9

Footnote 9: We do not report KA for the BCS and English test sets here as only one annotator performed the annotations.

The final distributions of the six- and three-class labels after reconciliation across datasets are presented in Table 2 and Table 3, respectively. The reconciliation was done by the original annotators after the main annotation round was finished. As the reconciliation was done by mutual agreement, the whole process was administered online to avoid any immediate peer pressure. Annotators were asked to mark annotations for which they had no problem agreeing with their colleague. This eliminated disagreements which can be considered easy. Annotators then met in person or online and discussed the instances on which they could not agree in the first round. Each dataset contained a handful of hard disagreements that were not possible to reconcile without an external reviewer (one of the authors of this paper). The presented distributions show that the negative class is often the most populous category. This observation, however, does not entirely hold true for the Slovene dataset and the English test set. We theorize that this might indicate potential nuances in the political nature of different parliaments and the existence of different patterns in their political culture.
Neutral sentiment appears to be more dominant there.

\begin{table} \begin{tabular}{|l|r|r||r|r|} \hline Dataset & ACC (6 classes) & KA (6 classes) & ACC (3 classes) & KA (3 classes) \\ \hline BCS & 62.0\% & 0.502 & 77.7\% & 0.639 \\ CZ & 68.1\% & 0.531 & 77.4\% & 0.612 \\ SK & 63.4\% & 0.506 & 75.4\% & 0.607 \\ SL & 64.1\% & 0.502 & 73.7\% & 0.531 \\ EN & 66.0\% & 0.543 & 78.4\% & 0.656 \\ \hline \end{tabular} \end{table} Table 1: Krippendorff’s alpha (KA) and agreement rates (ACC) across datasets for the 6-class and 3-class annotation schemas.

### Dataset encoding

The final dataset is encoded as a JSONL document, with each line in the dataset representing a JSON object. The dataset is encoded in seven files, five files representing the 2,600 training instances per (macro-)language and two files representing our two BCS and English test sets. Each of the training datasets contains the following metadata:

* the sentence that is annotated
* the country of origin of the sentence
* the annotation of annotator1 with one of the labels from the annotation schema presented in Section 3.4
* the annotation of annotator2 following the same annotation schema
* the annotation given during reconciliation of hard disagreements
* the three-way label (positive, negative, neutral), where the +NS and -NS labels are mapped to the neutral class
* the document_id the sentence comes from
* the sentence_id of the sentence
* the term inside which the speech was given
* the date the speech was given
* the name, party, gender, birth_year of the speaker
* the split the instance has been assigned to (the training set containing 2,054 instances, the development set 180 instances, and the testing set 366 instances)
* the ruling status of the party when the speech was given (opposition or coalition)

The test sets differ minimally in their structure, containing only the annotator1 attribute and missing the annotator2 and reconciliation attributes. Furthermore, there is no split attribute, as the purpose of these datasets is testing various algorithms, while the training datasets can also be used for language-specific experiments, therefore requiring the train-dev-test split. The dataset is made available through the CLARIN.SI repository at [http://hdl.handle.net/11356/1585](http://hdl.handle.net/11356/1585) and is available under the CC-BY-SA 4.0 license.

\begin{table} \begin{tabular}{|l|r|r|r|r|r|r|} \hline Dataset & Negative & Mixed negative & Negative neutral & Positive neutral & Mixed positive & Positive \\ \hline all & 7018 & 1214 & 2486 & 4205 & 965 & 2312 \\ BCS & 1084 & 230 & 289 & 484 & 155 & 358 \\ CZ & 1324 & 74 & 317 & 549 & 76 & 260 \\ SK & 1123 & 130 & 337 & 558 & 130 & 322 \\ SL & 966 & 44 & 659 & 750 & 43 & 138 \\ EN & 1058 & 211 & 267 & 413 & 168 & 483 \\ BCS-test & 875 & 272 & 320 & 686 & 168 & 279 \\ EN-test & 588 & 253 & 297 & 765 & 225 & 472 \\ \hline \end{tabular} \end{table} Table 2: Distribution of the six-class labels across datasets.

\begin{table} \begin{tabular}{|l|r|r|r|} \hline Dataset & Negative & Neutral & Positive \\ \hline all & 8232 & 6691 & 3277 \\ BCS & 1314 & 773 & 513 \\ CZ & 1398 & 866 & 336 \\ SK & 1253 & 895 & 452 \\ SL & 1010 & 1409 & 181 \\ EN & 1269 & 680 & 651 \\ BCS-test & 1147 & 1006 & 447 \\ EN-test & 841 & 1062 & 697 \\ \hline \end{tabular} \end{table} Table 3: Distribution of the three-class labels across datasets.
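For readers who want to work with the released files, a minimal loading sketch is given below. The file name is hypothetical, and the exact JSON key names should be verified against the released data; the fields simply follow the list above.

```python
import json
from collections import defaultdict

# Minimal sketch for loading one ParlaSent training file. The file name
# is hypothetical and the exact JSON key names should be checked
# against the released data.
splits = defaultdict(list)
with open("parlasent_bcs.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        splits[record["split"]].append(record)

for name, items in splits.items():
    # Expected sizes per the text: train 2054, dev 180, test 366.
    print(name, len(items))
```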
## 4 Experiments

In this section, we present our experiments, through which we aim to answer the following three research questions:

Q1: Does our newly released XLM-R-parla model, additionally pre-trained on 1.72 billion words of parliamentary proceedings, model the sentiment phenomenon better than the original XLM-R models?

Q2: How well does our model work on unseen languages?

Q3: If one wants to process data from the parliaments that are covered in the ParlaSent (or any other) dataset, is it advisable to train a language-specific (and parliament-specific) model, or rather to train a multilingual model on the whole ParlaSent dataset?

To make the most of our rather complex 6-level annotation schema, we set up all our experiments as regression tasks, with the six levels mapped onto integer values from 0 to 5. For evaluation, we primarily use the \(R^{2}\) score, which quantifies the proportion of the prediction variance that can be explained through the gold labels, due to its sensitivity to differences in system performance. We also report the mean absolute error (MAE) as a simple-to-understand metric in terms of an average error per instance; knowing that we are predicting values on a scale from 0 to 5, the maximum MAE is thereby 5, and an acceptable error for most use cases lies somewhere below 1.

### The XLM-R-parla model

In this subsection we answer our first question - whether our newly released model XLM-R-parla10 performs better than the vanilla XLM-R models [73].

Footnote 10: [https://huggingface.co/classla/xlm-r-parla](https://huggingface.co/classla/xlm-r-parla)

The XLM-R-parla model is based on the XLM-R-large model11, as our preliminary experiments have shown that XLM-R-large models outperform XLM-R-base models on this task. The XLM-R-parla model was additionally pre-trained for only 8 thousand steps with a batch size of 1024 sequences of 512 tokens of text. The model was pre-trained on a merge of the ParlaMint 3.0 dataset and the EuroParl dataset12, together covering 30 languages, with 1,717,113,737 words of running text. It is important to mention that our pre-training dataset contains all the languages present in the ParlaSent dataset. The exact number of words per language in the pre-training dataset can be inspected in the Appendix.

Footnote 11: [https://huggingface.co/xlm-roberta-large](https://huggingface.co/xlm-roberta-large)

Footnote 12: [https://www.statmt.org/europarl/](https://www.statmt.org/europarl/)

Our hypothesis for this question is that the additional adaptation of the XLM-R-large model to texts as they occur in parliamentary proceedings will significantly improve our predictions of sentiment in parliamentary proceedings. We fine-tune the XLM-R-base, the XLM-R-large, and the XLM-R-parla model with the same hyperparameter settings, which have proven in preliminary experiments to perform well for our task: a learning rate of 8e-6, a batch size of 32, and 4 epochs over our whole training dataset (2,600 instances for each of the five parliaments, 13,000 instances in total). We test each model separately on the BCS and the English test set, each comprising 2,600 instances. We compare these three models with a dummy baseline that always returns the mean value of the training data. We perform five runs for each set-up and report the mean result.
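The fine-tuning set-up just described can be sketched with the Hugging Face Transformers library as follows. This is a minimal illustration: the three-sentence toy dataset stands in for the actual ParlaSent training data, and its example sentences and scores are invented, not taken from the corpus.

```python
import torch
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

MODEL = "classla/xlm-r-parla"   # the released XLM-R-parla checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# A single-output regression head predicts the 0-5 sentiment score.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=1, problem_type="regression")

# Invented stand-in for the ParlaSent data: (sentence, score in 0-5).
examples = [("This bill is a disaster for our citizens.", 0.0),
            ("The committee will reconvene on Tuesday.", 2.0),
            ("I warmly congratulate the minister on this success.", 5.0)]

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, pairs):
        self.enc = tokenizer([s for s, _ in pairs], truncation=True,
                             padding=True, return_tensors="pt")
        self.labels = torch.tensor([y for _, y in pairs])
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# Hyperparameters as reported in the text: lr 8e-6, batch 32, 4 epochs.
args = TrainingArguments(output_dir="xlm-r-parla-sent",
                         learning_rate=8e-6,
                         per_device_train_batch_size=32,
                         num_train_epochs=4,
                         report_to="none")
Trainer(model=model, args=args,
        train_dataset=ToyDataset(examples)).train()
```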
The results in Table 4 show that the mean dummy, as expected, gives an \(R^{2}\) value of around 0, while the MAE is around 1.5, which means that if we always predicted the mean value of our training data, we would on average be "only" 1.5 points away from the correct value. This result represents the baseline that any reasonable system should improve on. The XLM-R-base model does exactly that, lowering the MAE to between 0.81 and 0.87, depending on the test set.

\begin{table} \begin{tabular}{|l|c c|c c|} \hline & \multicolumn{2}{c|}{\(R^{2}\)} & \multicolumn{2}{c|}{\(MAE\)} \\ test set & bcs & en & bcs & en \\ \hline Mean dummy & -0.012 & -0.165 & 1.522 & 1.645 \\ XLM-R-base & 0.500 & 0.561 & 0.868 & 0.808 \\ XLM-R-large & 0.605 & 0.653 & 0.706 & 0.694 \\ XLM-R-parla & 0.615 & 0.672 & 0.705 & 0.675 \\ \hline \end{tabular} \end{table} Table 4: Results for the first research question, comparing the additionally pre-trained XLM-R-parla model with the vanilla XLM-R models and the dummy mean baseline.

The XLM-R-large model, consistent with our preliminary experiments, drastically improves over the base model, which shows that the task at hand is a rather complex one: the extra capacity delivers around 0.1 points better results in \(R^{2}\), a substantial improvement. The MAE score, much less sensitive to changes in the quality of predictions, still shows an error lower on average by 0.1 to 0.15 points. Once we compare the original XLM-R-large model with the additionally pre-trained XLM-R-parla model, we can observe that the additional pre-training has paid off, with minor but consistent improvements on all metrics. As expected, all systems perform noticeably better on the English test set, due to much more English data being seen during pre-training than Bosnian, Croatian, or Serbian data. We can conclude with an answer to the first research question - the additional pre-training of a multilingual model on parliamentary data does improve the performance on our task.

### Performance on unseen languages

Here, we answer our second question - how does our best-performing model, XLM-R-parla, perform on a language that it has not seen during fine-tuning? Our initial hypothesis is that whether the model was fine-tuned on the testing language would have only a minor impact. We perform two ablation experiments, first skipping the BCS data from the training dataset and then skipping the English data, and we evaluate both models on the BCS and the English test sets. Therefore, in the two additional experiments, we do not train on 13,000 but on 10,400 instances. We keep the same hyperparameter values as in the initial XLM-R-parla experiment described in the previous subsection.

In Table 5, reporting the results of these experiments, to our surprise, we cannot observe any obvious pattern regarding the performance on the language that has been removed from the training data. In the BCS case, the model performs even better on the BCS test set on the \(R^{2}\) evaluation metric and worse on the English test set. If we look at the MAE metric, the results are slightly more expected, with the model performing worse on both test sets, the drop still being more significant on the English test set. The inconsistency between the two metrics on the BCS test set is very likely due to \(R^{2}\) penalizing outliers more harshly than MAE does.
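A tiny numeric illustration (with invented predictions, unrelated to our actual results) makes this asymmetry between the two metrics concrete: corrupting just two predictions barely moves the MAE but sharply lowers \(R^{2}\), because the squared residuals entering \(R^{2}\) amplify large errors.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Toy illustration of why a few outliers hurt R^2 much more than MAE:
# all numbers below are invented for the demonstration.
gold = np.array([0, 1, 2, 3, 4, 5] * 10, dtype=float)
pred_clean = gold + 0.3                   # small uniform error
pred_outlier = pred_clean.copy()
pred_outlier[:2] = [5.0, 5.0]             # two badly missed items

for name, pred in [("clean", pred_clean), ("2 outliers", pred_outlier)]:
    print(f"{name:11s} MAE={mean_absolute_error(gold, pred):.3f} "
          f"R2={r2_score(gold, pred):.3f}")
# MAE moves from 0.30 to about 0.44, while R^2 drops from about
# 0.97 to about 0.74.
```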
In the experiment where the English training data is removed, the results are a little more consistent, with performance on both test sets being similarly worse, the English test set taking an extra hit on the MAE metric but not on the \(R^{2}\) metric. Overall, we cannot observe a hit on the performance of the models when the testing language is removed from the training data beyond what is observed on the other test set. Therefore, we can conclude that the performance drops are due to less training data, not due to the target language not being present in the training data.

\begin{table} \begin{tabular}{|l|c c|c c|} \hline & \multicolumn{2}{c|}{\(R^{2}\)} & \multicolumn{2}{c|}{MAE} \\ \cline{2-5} test set & bcs & en & bcs & en \\ \hline ParlaSent & 0.615 & 0.672 & 0.705 & 0.675 \\ ParlaSent \(\backslash\{BCS\}\) & 0.630 & 0.659 & 0.727 & 0.704 \\ ParlaSent \(\backslash\{EN\}\) & 0.596 & 0.655 & 0.728 & 0.756 \\ \hline \end{tabular} \end{table} Table 5: Experiments on removing the BCS and the English data from the training data, evaluating on the BCS and English test data, to check for performance on unseen languages.

### Monolingual vs. multilingual training

In this subsection, we answer our third research question - whether target-language performance is better if the model is trained on that language only, or if it is trained on all five parliaments. We hypothesize that the results might be evened out. On the monolingual side, there is a drastic similarity between the training and the test data, not just due to the same language being used, but also due to the data coming from the same parliament, each parliament surely having many specific features a system might use to predict sentiment. On the multilingual side, the argument relies on five times more training data than in the monolingual setting. In this set of experiments, we train and evaluate only on the training datasets, which have a train-dev-test split, as already reported in Section 3.6. In the case of the monolingual setting, we train the model only on the 2,054 training instances for the specific parliament, while in the case of the multilingual setting, we train on five times that amount of data, i.e., 10,270 instances. We always evaluate on the test portion of the training dataset of the target parliament. We keep our hyperparameters the same as before, with the difference that the monolingual models are fine-tuned for ten epochs due to less data being available for fine-tuning.

The results in Table 6 show that only for Czech does the monolingual model perform similarly to the multilingual one, while in all remaining parliaments there is a consistent benefit from training on all available languages. Given these results, we can conclude that if one wanted to annotate sentiment in a specific parliament for which training data are available, better results might still be obtained with additional data written in different languages and coming from different parliaments. This result also shows that our annotation guidelines were detailed enough for the annotators in the different languages to apply comparable annotation criteria.

## 5 Conclusion

In this paper, we have presented a new dataset consisting of sentences coming from seven different parliaments, manually annotated with a six-level schema for sentiment. This is the first such dataset available for parliamentary proceedings data. We show the inter-annotator agreement to be reasonably high for such an endeavor.
We share 2,600 instances per parliamentary group, with the Bosnian, Croatian, and Serbian parliaments forming the BCS parliamentary group, and the remaining four parliaments being those of Czechia, the United Kingdom, Slovakia, and Slovenia. Aside from these five training datasets, we also share two additional test sets, one from the BCS group and another from the United Kingdom. The data are shared via the CLARIN.SI repository13.

Footnote 13: [http://hdl.handle.net/11356/1868](http://hdl.handle.net/11356/1868)

In our experiments, we answer three main research questions. The first question relates to whether additional pre-training of a transformer model on raw parliamentary data would improve the performance on the task, which proves to be correct; we therefore also present the new XLM-R-parla model14, especially suited for parliamentary text processing. Whether the model is more potent for the processing of political texts in general will have to be tested in follow-up work.

Footnote 14: [https://huggingface.co/classla/xlm-r-parla](https://huggingface.co/classla/xlm-r-parla)

The second question we tackle is how well we can expect the final, fine-tuned model, named XLM-R-ParlaSent15, to perform on unseen languages and parliaments. We show that the model is very robust to unseen languages and parliaments, with no or only a minor drop in performance. The limitation of this insight is that the languages and parliaments we check this presumption on are linguistically and traditionally rather closely related to the remaining languages and parliaments in the training data, so caution is advised if more distant languages or parliaments were to be annotated with the XLM-R-ParlaSent model.

Footnote 15: [https://huggingface.co/classla/xlm-r-parlasent](https://huggingface.co/classla/xlm-r-parlasent)

The third question relates to whether a model performs better on a specific language and parliament if it is trained on that parliament's data only, or whether the additional, four times larger dataset coming from different languages and parliaments is more beneficial. We show that the multilingual, multi-parliamentary approach performs better, which is a direct signal that our annotations are not only of high quality within individual parliaments, as measured via the inter-annotator agreement, but are also consistent across parliaments and languages. The limitation of this insight is that each of the approaches still carries a significant error rate, each approach quite likely has a specific error bias, and depending on the downstream application of the automatic annotations, a monolingual, single-parliament training approach might be more beneficial in some use cases.

\begin{table} \begin{tabular}{|l|c c|c c|} \hline & \multicolumn{2}{c|}{\(R^{2}\)} & \multicolumn{2}{c|}{MAE} \\ language & mono & multi & mono & multi \\ \hline bcs & 0.699 & 0.737 & 0.644 & 0.572 \\ cz & 0.564 & 0.560 & 0.706 & 0.665 \\ en & 0.707 & 0.741 & 0.652 & 0.599 \\ sk & 0.646 & 0.681 & 0.665 & 0.593 \\ sl & 0.512 & 0.520 & 0.708 & 0.667 \\ \hline \end{tabular} \end{table} Table 6: Experiments comparing performance when training on the target language vs. training on all available languages.

Our obvious next step is the annotation of all the languages in the ParlaMint corpus [60] with sentiment, which will open the door to a series of additional insights from this already very rich dataset. The downstream research will also shed light on the limitations of our sentiment annotations.
Another direction of research is the benchmarking of recent large language models, such as GPT4 and LLaMa2 [74], to assess to what extent they could be useful for such an enrichment in the zero-shot or few-shot scenario. In the case of the LLaMa2 model, fine-tuning with the full ParlaSent dataset is an option as well. We consider this work to be a very important step in setting up a more robust approach to sentiment analysis of political texts beyond sentiment-lexicon approaches, one which will find many applications in downstream research on political and parliamentary communication. It is part of a more general effort to democratize the latest advancements in natural language processing and demonstrate their relevance to the humanities and social sciences.
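For completeness, a minimal sketch of applying the released XLM-R-ParlaSent checkpoint to new sentences is given below. We assume here that the checkpoint exposes a single regression output on the 0-5 scale described above; this should be verified against the model card.

```python
from transformers import pipeline

# Hedged sketch: scoring new sentences with the released fine-tuned
# model. We assume a single regression logit on the 0-5 scale;
# function_to_apply="none" returns the raw output.
regressor = pipeline("text-classification",
                     model="classla/xlm-r-parlasent",
                     function_to_apply="none")
print(regressor("I congratulate the minister on this achievement."))
```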
2307.16469
Sub-Doppler laser cooling and magnetic trapping of natural-abundance fermionic potassium
We report on reaching sub-Doppler temperatures of $^{40}$K in a single-chamber setup using a dispenser-based potassium source with natural (0.012$\%$ of $^{40}$K) isotopic composition. With gray molasses cooling on the $D_1$-line following a standard $D_2$-line magneto-optical trap, we obtain $3\times10^5$ atoms at $\sim$10~\textmu K. We reach densities high enough to measure the temperature via absorption imaging using the time-of-flight method. Directly after sub-Doppler cooling we pump atoms into the $F=7/2$ hyperfine ground state and transfer a mixture of $m_F=-3/2,-5/2$ and $-7/2$ Zeeman states into the magnetic trap. We trap $5\times10^4$ atoms with a lifetime of 0.6~s when the dispensers are heated up to maximize the atom number at a cost of deteriorated background gas pressure. When the dispensers have been off for a day and the magneto-optical trap loading rate has been increased by light induced atomic desorption we can magnetically trap $\sim$$10^3$ atoms with a lifetime of 2.8~s. The background pressure-limited lifetime of 0.6~s is a reasonable starting point for proof-of-principle experiments with atoms and/or molecules in optical tweezers as well as for sympathetic cooling with another species if transport to a secondary chamber is implemented. Our results show that unenriched potassium can be used to optimize experimental setups containing $^{40}$K in the initial stages of their construction, which can effectively extend the lifetime of enriched sources needed for proper experiments. Moreover, demonstration of sub-Doppler cooling and magnetic trapping of a relatively small number of potassium atoms might influence experiments with laser cooled radioactive isotopes of potassium.
Mateusz Bocheński, Mariusz Semczuk
2023-07-31T08:01:15Z
http://arxiv.org/abs/2307.16469v2
# Sub-Doppler laser cooling and magnetic trapping of natural-abundance fermionic potassium

###### Abstract

We report on reaching sub-Doppler temperatures of \({}^{40}\)K in a single-chamber setup using a dispenser-based potassium source with natural (0.012% of \({}^{40}\)K) isotopic composition. With gray molasses cooling on the \(D_{1}\)-line following a standard \(D_{2}\)-line magneto-optical trap, we obtain \(3\times 10^{5}\) atoms at \(\sim\)10 \(\mu\)K. We reach densities high enough to measure the temperature via absorption imaging using the time-of-flight method. Directly after sub-Doppler cooling we pump atoms into the \(F=7/2\) hyperfine ground state and transfer a mixture of \(m_{F}=-3/2,-5/2\) and \(-7/2\) Zeeman states into the magnetic trap. We trap \(5\times 10^{4}\) atoms with a lifetime of 0.6 s when the dispensers are heated up to maximize the atom number at the cost of a deteriorated background gas pressure. When the dispensers have been off for a day and the magneto-optical trap loading rate has been increased by light-induced atomic desorption, we can magnetically trap \(\sim\)10\({}^{3}\) atoms with a lifetime of 2.8 s. The background-pressure-limited lifetime of 0.6 s is a reasonable starting point for proof-of-principle experiments with atoms and/or molecules in optical tweezers as well as for sympathetic cooling with another species if transport to a secondary chamber is implemented. Our results show that unenriched potassium can be used to optimize experimental setups containing \({}^{40}\)K in the initial stages of their construction, which can effectively extend the lifetime of the enriched sources needed for the proper experiments. Moreover, the demonstration of sub-Doppler cooling and magnetic trapping of a relatively small number of potassium atoms might influence experiments with laser-cooled radioactive isotopes of potassium.

## I Introduction

Observation of the first degenerate Fermi gas in 1999 [1], only four years after the successful creation of a Bose-Einstein condensate [2], widened the scope of experimental platforms where phenomena are predominantly governed by quantum statistics [3]. The choice of fermions for laser cooling is rather limited compared to the number of available bosonic species. Among alkali atoms, only lithium and potassium have long-lived isotopes obeying Fermi statistics, \({}^{6}\)Li and \({}^{40}\)K. Laser cooling of the latter benefits from the availability of high-power CW lasers, nowadays primarily based on telecom technology [4] or tapered amplifiers [5]. Working with \({}^{40}\)K has, however, a major drawback: natural potassium contains only 0.012% of this isotope. Pioneering works that demonstrated laser cooling and trapping of several thousand \({}^{40}\)K atoms loaded from a natural source [6; 7] made it abundantly clear that an isotopically enriched source would be needed to provide a sufficiently large atom number to implement evaporative cooling. Since then, nearly all experiments using fermionic potassium rely on sources enriched to 3%-6% [8; 9]. The main issue with enriched potassium is its high price and limited availability - since the late nineties the price has gone up by more than an order of magnitude. As a result, experiments with enriched fermionic potassium do not use Zeeman slowers and almost exclusively rely on 2D MOTs [10] or double-stage MOTs [1] as a source of pre-cooled atoms.
As opposed to other alkali species, single-chamber apparatuses are rarely used with \({}^{40}\)K, even though a single-chamber design with a source located near the trapping region has enabled studies of the BEC-BCS crossover regime with \({}^{6}\)Li [11]. In recent years, some efforts have been made to bypass the need for potassium enrichment. Both a Zeeman slower [12] and a 2D MOT [13] have been demonstrated, but they have not been widely adopted by the community. In this work we demonstrate that state-of-the-art sub-Doppler cooling in a single-chamber apparatus is possible with a natural-abundance potassium source. As a proof-of-principle measurement we trap 5\(\times\)10\({}^{4}\) atoms in a magnetic trap, sufficiently many to enable sympathetic cooling by another species [12] if the magnetically trapped clouds are transferred to a second chamber with a much better vacuum. Our results also show that during the construction of a new experimental setup one can use cheap, natural potassium sources for optimization, thus effectively prolonging the useful lifetime of the enriched source.

## II Experimental setup

The experimental chamber is based on a fused silica glass cell without any antireflection coatings. Ultra-high vacuum is maintained by a 55 l/s ion pump (VacIon Plus 55 Noble Diode) supported by a titanium sublimation pump (fired no more than twice per year). The vacuum chamber is designed for laser cooling of cesium and potassium atoms; therefore, we use two dispensers (SAES Getters) for each species as atomic sources. We use potassium with a natural abundance of isotopes, that is, the content of \({}^{40}\)K is only \(\sim\)0.012%. All dispensers are located about 7 cm from the center of the magneto-optical trap (MOT). We use an LED emitting at a central wavelength of 370 nm (FWHM \(\sim\)10 nm) to increase the MOT loading rate via the light-induced atomic desorption (LIAD) process [14]. This method of providing an additional load of free atoms in the trapping region allows us to release fewer atoms from the dispensers into the setup, thus minimizing the deterioration of the vacuum quality. The base pressure (i.e., when the dispensers have not been turned on for several days) reaches 10\({}^{-11}\) mbar. The quadrupole field for the magneto-optical and the magnetic trap is created by a pair of coils in a nearly anti-Helmholtz configuration. The coils are arranged such that the axis of the strongest gradient \(B_{\rm axial}^{\prime}\) is in the horizontal plane. Throughout the paper we report the radial gradient \(B_{\rm r}^{\prime}\), because it sets the effective trapping potential in the vertical direction. The laser system is designed to maximize the available power by limiting double passes through acousto-optic modulators after the final amplification stage. In general, the design allows us to switch between potassium isotopes (\({}^{39}\)K, \({}^{40}\)K or \({}^{41}\)K) on a sub-millisecond time scale and to implement sub-Doppler cooling on the \(D_{1}\) line for all isotopes. Here, we only discuss the core elements of the laser system relevant to the current work on \({}^{40}\)K. The details of the entire laser system will be the subject of a further publication. Figure 1 shows a simplified layout of the laser system. We use two master oscillator tapered amplifiers (Toptica TA pro) stabilized to crossover transitions in \({}^{39}\)K, one on the \(D_{1}\) line and the other on the \(D_{2}\) line.
The beams from both master lasers are combined on a polarizing beam splitting cube (PBS1) and are coupled into the same single mode, polarization maintaining fiber with perpendicular polarizations. The PBS2 splits each of the \(D_{1}\) and \(D_{2}\) beams into two paths, called the cooling and repumping paths, where two double pass acousto-optic modulators DPAOM\({}_{\rm cool}\) and DPAOM\({}_{\rm rep}\) provide frequency tuning around optimized values of \(+350\) MHz and \(-370\) MHz, respectively, before seeding dedicated tapered amplifiers TA\({}_{\rm cool}\) and TA\({}_{\rm rep}\) (both are Toptica TA pro systems, of which we use only the tapered amplifiers). Mechanical shutters S\({}_{1}\) and S\({}_{2}\) are used to choose whether the \(D_{1}\) or \(D_{2}\) line light is amplified. Single-pass acousto-optic modulators SPAOM\({}_{\rm cool}\) and SPAOM\({}_{\rm rep}\) provide final frequency shifts of \(+80\) MHz and \(-80\) MHz, respectively, and work as fast optical shutters. To eliminate light leakage caused by the finite extinction of AOMs, mechanical shutters S\({}_{3}\)-S\({}_{5}\) are used. Our design allows us to reach all frequencies required for efficient cooling on both \({}^{40}\)K lines. In fermionic potassium, the splitting of the P\({}_{3/2}\) state is sufficiently large to allow efficient (compared to bosonic potassium isotopes) cooling and compression using the \(D_{2}\) line; therefore we do not need to implement a \(D_{1}\)&\(D_{2}\)-line compressed magneto-optical trap as used in experiments with \({}^{39}\)K and \({}^{41}\)K [15; 16]. The design of our laser system provides an additional feature, namely the phase coherence of the cooling and the repumping beams, which might be useful for driving Raman transitions between hyperfine states. Phase coherence has been shown to increase the cooling efficiency on the \(D_{2}\)-line for some species [17], but it is not clear if it provides any benefit in our setting since the role of phase coherence for \(D_{1}\)-line gray molasses has not been studied. The single chamber design of our vacuum system and the use of unenriched dispensers as a source of \({}^{40}\)K heavily restrict the possibility of optimizing the magneto-optical trap (MOT). In particular, we are forced to choose between maximizing the atom number and maximizing the lifetime. These two are strongly coupled, unlike in setups where a 2D MOT or a Zeeman slower is used - even if these sources rely on natural abundance potassium [12; 13]. We use three retro-reflected MOT beams, each collimated with a standard 30 mm diameter lens to a \(1/e\) diameter of about 25 mm. This size guarantees that there is no noticeable distortion of the beams caused by the glass cell walls, which are separated by 30 mm. After passing the glass cell, the diameter of the beams is decreased to \(\approx\)9 mm with a telescope to fit through 0.5" waveplates. This choice is made to avoid purchasing custom-made, 30 mm diameter waveplates, which are rather costly if they have to be achromatic (our setup is also used for laser cooling of cesium). The retro-reflected beams are slightly focused to partially compensate for reflection losses on the uncoated glass cell walls.
## III Magneto-optical trap
For magneto-optical trapping we use well established laser cooling methods where we close the \({}^{2}S_{1/2},F=9/2\rightarrow{}^{2}P_{3/2},F^{\prime}=11/2\) cooling transition by providing repumping light on the \({}^{2}S_{1/2},F=7/2\rightarrow{}^{2}P_{3/2},F^{\prime}=9/2\) transition to depopulate the ground hyperfine state with lower total angular momentum \(F\). It is worth noting that this level has higher energy than the \(F=9/2\) level, unlike in other alkali atoms. In figure 2 we present the relevant energy levels and schematically show the various detunings of the laser beams used at different stages of laser cooling on both the \(D_{1}\) and the \(D_{2}\) line. During MOT loading we use the maximum available laser power with total intensities of the cooling (repumping) beams equal to \(29I_{\rm s}\) (\(18I_{\rm s}\)), where \(I_{\rm s}\) is the saturation intensity of the \(D_{2}\) line. We have found that we can obtain the highest atom number in the MOT for a magnetic field gradient of 11 G/cm, the cooling light red-detuned by \(-3.5\Gamma\) to \(-4.5\Gamma\) and the repumper red-detuned by \(-0.5\Gamma\) to \(-4\Gamma\) (see figure 3). Here, \(\Gamma\) is the natural linewidth of the \(D_{2}\) line transitions. All measurements reported in the following paragraphs are obtained for the cooler (repumper) detunings equal to \(-4\Gamma\) (\(-1.5\Gamma\)). With optimized parameters we have compared magneto-optical trap loading curves for two cases relevant to this investigation (see figure 4).

Figure 1: Simplified scheme of the \({}^{40}\)K laser cooling system. \(D_{1}\) ML and \(D_{2}\) ML: master lasers (Toptica TA pro) stabilized to crossover transitions of the \(D_{1}\) and the \(D_{2}\) line in \({}^{39}\)K, respectively; DPAOM\({}_{\rm cool}\) and DPAOM\({}_{\rm rep}\): double pass acousto-optic modulators providing total frequency shifts of +(700-850) MHz and -(700-800) MHz, respectively; TA\({}_{\rm cool}\) and TA\({}_{\rm rep}\): tapered amplifiers (Toptica); SPAOM\({}_{\rm cool}\) and SPAOM\({}_{\rm rep}\): single pass acousto-optic modulators operating at fixed frequencies of +80 MHz and -80 MHz, respectively, which work as fast optical shutters; S1-S5: mechanical shutters.

In the first case, the number of atoms is maximized by heating up the dispensers beyond the typical temperatures we use while working with the nearly 8,000 times more abundant \({}^{39}\)K and by using light-induced atomic desorption (LIAD) during MOT loading. This reduces the quality of the vacuum (see section V) as the dispensers release significant amounts of other, more abundant, potassium isotopes that collide with trapped \({}^{40}\)K. Additionally, due to the nearby location of the cesium dispensers we also observe an increase in the partial pressure of cesium, enhanced even more by the use of LIAD. For regular operation, when working with \({}^{39}\)K or Cs or their mixture, this cross-talk is minimized by choosing a dispenser temperature which is a satisfactory compromise between the atom number and the lifetime of atoms trapped in an optical dipole trap (\(\sim\)5 s). For the study of sub-Doppler cooling, which takes on the order of 15 ms, this reduced lifetime is irrelevant; thus the efficiency of the cooling process is investigated under these conditions. The second case considers loading the magneto-optical trap using only LIAD, with the atomic sources having been turned off for more than a day, resulting in a much improved background pressure.
As expected, the trapped atom number is significantly smaller and the lifetime of trapped atoms is longer (see section V). The overall performance of sub-Doppler cooling is, however, essentially the same as in the first considered case. Under conditions optimized for the maximum atom number we achieve a loading rate of \(5\times 10^{5}\) atoms/s, trapping about \(5.5\times 10^{5}\) atoms, nearly two orders of magnitude more than in the first reported magneto-optical traps of fermionic potassium [6; 7]. For the background-pressure-optimized case the loading rate drops to \(1\times 10^{4}\) atoms/s and we can trap at best \(8.5\times 10^{4}\) atoms. Surprisingly, the sub-Doppler cooling mechanisms that are typically present in magneto-optical traps of \({}^{40}\)K [1; 19; 20; 7] do not seem to work in our setup and the steady-state temperature of the MOT is 250 \(\mu\)K, well above the Doppler limit of \(T_{\text{D}}=145\) \(\mu\)K [18]. We have not systematically investigated what could cause this difference because the figure of merit for us has been the combined efficiency of all cooling stages, not of each individual stage. As we show next, 250 \(\mu\)K is a sufficiently good starting point to proceed with further cooling.

Figure 4: Magneto-optical trap loading curves normalized to the maximum trappable atom number. With dispensers and LIAD _turned on_ (dots) the maximum number of atoms is \(N_{\text{max}}=5.5\times 10^{5}\). With dispensers _turned off_ and LIAD _turned on_ (diamonds) this number drops to \(N_{\text{max}}=8.5\times 10^{4}\).

Figure 3: Normalized number of atoms as a function of red-detuning from the cooling and from the repumping transitions. Here, \(\Gamma\) is the natural linewidth of the \(D_{2}\)-line.

Figure 2: Atomic level scheme showing the \(D_{1}\) and the \(D_{2}\) lines of \({}^{40}\)K. The vertical arrows mark the beams dedicated to the cooling methods investigated in this work. For the \(D_{2}\) line the figure shows detunings of the MOT beams from the cooling transition \(\Delta^{\text{F}}_{\text{MOT}}\) and from the repumping transition \(\Delta^{\text{R}}_{\text{MOT}}\), detunings of the compressed MOT beams (\(\Delta^{\text{C}}_{\text{CMOT}}\) and \(\Delta^{\text{R}}_{\text{CMOT}}\)), the single photon detuning of the gray molasses beams \(\Delta_{\text{D}2}\) and the detuning from the two-photon resonance \(\delta_{\text{D}2}\). For the \(D_{1}\) line we use, by analogy, the single photon detuning of the gray molasses beams \(\Delta_{\text{D}1}\) and the detuning from the two-photon resonance \(\delta_{\text{D}1}\). The colors blue and red refer to the cooling and to the repumping beam, respectively. The hyperfine shifts are based on [18].
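As an aside for readers reproducing the loading-curve analysis of figure 4: under the usual assumption of a constant loading rate \(R\) and one-body background losses with time constant \(\tau\), the trapped atom number follows \(N(t)=R\tau(1-e^{-t/\tau})\). The sketch below fits this model; the data arrays are synthetic placeholders and the model choice is our illustrative assumption, not the authors' analysis code.

```python
# Minimal sketch: fit MOT loading curves, N(t) = R*tau*(1 - exp(-t/tau)),
# where R is the loading rate and tau the loss-limited time constant.
# The arrays below are synthetic placeholders for measured fluorescence data.
import numpy as np
from scipy.optimize import curve_fit

def loading(t, R, tau):
    # Steady-state atom number is N_max = R * tau.
    return R * tau * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 10.0, 50)                      # loading times (s)
N = loading(t, 5e5, 1.1) * np.random.normal(1.0, 0.03, t.size)

(R_fit, tau_fit), _ = curve_fit(loading, t, N, p0=(1e5, 1.0))
print(f"R = {R_fit:.2e} atoms/s, N_max = {R_fit * tau_fit:.2e} atoms")
```

For scale, the numbers quoted above (a loading rate of \(5\times 10^{5}\) atoms/s saturating at \(5.5\times 10^{5}\) atoms) would correspond to \(\tau\approx 1.1\) s if this single-exponential model applies.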
## IV Sub-Doppler cooling
We compress the cloud to cool it down and increase its density using a \(D_{2}\)-line compressed MOT stage. This is obtained by changing the cooling (repumping) beam detuning to \(-1\Gamma\) (\(-5\Gamma\)) and increasing the magnetic field gradient to 18 G/cm in < 0.6 ms (set by the response time of the coil current driver). Once the magnetic gradient has reached the new value we ramp down the power of the cooling and the repumping beam in 10 ms to 6\(I_{\mathrm{s}}\) and 1.4\(I_{\mathrm{s}}\), respectively. We are not able to simultaneously ramp the frequency and the power due to the limitation of the RF source driving the acousto-optic modulators (AOMs). This might be the reason that we do not observe a significant temperature drop, ending up with a cloud at 160 \(\mu\)K. However, during this process there is no significant loss of atoms and we reach a phase space density (PSD) of \(\rho=1.6\times 10^{-7}\). We first investigate to what extent our setup allows for sub-Doppler cooling on the \(D_{2}\) line. Previous works using the \(D_{2}\) line reached temperatures below 50 \(\mu\)K in optical molasses with the cooling beam red-detuned by more than \(-4\Gamma\) [9; 10] from the \(F^{\prime}=11/2\) level or by using a coherent superposition of ground hyperfine states with red-detuned gray molasses [21]. We essentially follow the work of Bruce et al. [21], where \(D_{2}\)-line gray molasses for \({}^{40}\)K have been implemented for the first time. After the magnetic field of the compressed MOT has been switched off, we set the compensation coil currents to nullify the background magnetic field [22] and detune the repumping and the cooling light to produce a coherent superposition of ground state hyperfine levels via the excited state \(F^{\prime}=9/2\). We investigate the dependence of the final atom number and the final temperature on both the single photon detuning from the \(F^{\prime}=9/2\) level, \(\Delta_{\mathrm{D2}}\), and on the two-photon detuning \(\delta_{\mathrm{D2}}\) between the cooling and the repumping light. Here, we have used the cloud size after a fixed expansion time as a proxy for temperature to generate the plots shown in figures 5a) and b). We find that the maximum atom number and the minimum temperature can be obtained for a two-photon detuning of about \(\delta_{\mathrm{D2}}=-1\Gamma\) and a single photon detuning of about \(\Delta_{\mathrm{D2}}=-12\Gamma\), very similar to the values reported by Bruce et al. [21]. Unfortunately, we observe a nearly 80% loss of atoms with a negligible temperature drop to 140 \(\mu\)K, as determined with the time-of-flight method.

Figure 5: Plots of the cloud width (red dots) and the number of atoms (black squares) as a function of (a) red-detuning (single-photon detuning) from the \({}^{2}P_{3/2}\), \(F=9/2\) level (b) detuning from the Raman (two-photon) transition. Analogous measurements for the \(D_{1}\) line gray-optical molasses cooling as a function of (c) blue-detuning from the \({}^{2}P_{1/2}\), \(F=7/2\) level and (d) detuning from the Raman (two-photon) transition.

Most of the reported sub-Doppler cooling techniques rely on simultaneous ramping of the frequency and the laser power, which cannot be implemented with our current RF sources driving the AOMs. This limitation might be the main reason for the inefficient cooling using the \(D_{2}\) line. Another possible source of the reduced cooling efficiency may be the optimization of the cooling paths for the 770 nm light, i.e. the \(D_{1}\)-line light. MOT beams and gray molasses beams are coupled into the same optical fiber, and they share the same paths upon leaving the fiber to form a MOT. We control the polarization of the overlapped beams with half- and quarter-waveplates with a design wavelength of 767 nm (\(D_{2}\) line) but we optimize the polarization of the 770 nm light, which slightly degrades the target polarization of the MOT beams and the \(D_{2}\) line molasses. This is a conscious choice, as we have expected to use sub-Doppler cooling on the \(D_{1}\) line all along while treating the MOT as a source of pre-cooled atoms. As such, the experimental setup has been optimized to maximize the atom number in the MOT and minimize the final temperature of the cloud after \(D_{1}\)-line cooling.
From this point of view it may not be surprising that the sub-Doppler cooling mechanisms on the \(D_{2}\) line do not perform as well as reported by other groups [9; 10; 21]. We now focus our attention on the gray molasses cooling on the \(D_{1}\) line. It has been shown by several authors [15; 16; 19; 23] that this cooling method ensures highly efficient and fast cooling with negligible atom loss. In fact, it has been implemented already for all alkali atoms where sub-Doppler cooling mechanisms on the \(D_{2}\) line do not work efficiently due to the small hyperfine structure energy splitting of the \(P_{3/2}\) state. Gray molasses require good control over the background magnetic field, and it is necessary to cancel all external stray magnetic fields for the best performance. We have nullified stray fields using a method we have developed and illustrated with \({}^{39}\)K [22], which enabled us to cool that isotope to \(\sim\)8 \(\mu\)K. Gray molasses cooling starts immediately after the \(D_{2}\)-line compressed MOT. The magnetic field and the \(D_{2}\) light are turned off and the compensation of the stray magnetic fields is engaged. We block the light from the master laser (D2 ML) with a shutter, simultaneously opening a shutter that sends the \(D_{1}\) line to the tapered amplifiers. Due to the opening times of the shutters and their jitter we turn on the \(D_{1}\) line light when the shutter is fully open. These technical restrictions introduce a delay of 0.8 ms between the beginning of the \(D_{1}\) gray molasses stage and the turn off of the magnetic field. During that time the cloud drops freely and gets diluted due to its rather high temperature, but we are able to achieve nearly 100% transfer of atoms to gray molasses with light intensities of 8.7\(I_{\mathrm{s}}\) and 2.2\(I_{\mathrm{s}}\) for the cooling and the repumping beam, respectively. We proceed with optimization of the single- and two-photon detuning [see figures 5c) and 5d)] and the cooling time. We have found that during a 10 ms cooling stage, during which we simultaneously ramp down the cooling (repumping) beam intensity from 8.7\(I_{\mathrm{s}}\) to 2.9\(I_{\mathrm{s}}\) (2.2\(I_{\mathrm{s}}\) to 0.8\(I_{\mathrm{s}}\)), we are able to decrease the temperature to 10 \(\mu\)K, on par with temperatures reported by other groups [19; 20], while losing less than 20% of atoms. With nearly \(3\times 10^{5}\) atoms in a mixture of states we obtain a free-space phase space density of \(\rho=3.5\times 10^{-5}\). For the determination of the final temperature we have used the time-of-flight method using both fluorescence and absorption images (examples of both types of images taken at different expansion times are shown in figure 6). Both methods have shown very good agreement, giving essentially the same temperature of 10 \(\mu\)K. Here, the shortest expansion time after release from gray molasses is 4.3 ms, limited by the 14 ms repetition time of the mechanical shutters S1 and S2 (see figure 1), whereas the longest expansion is 9 ms, limited by our ability to reliably image diluted samples.
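For completeness, time-of-flight thermometry as used above extracts the temperature from the ballistic expansion of the cloud: the Gaussian width of a thermal cloud evolves as \(\sigma^{2}(t)=\sigma_{0}^{2}+(k_{B}T/m)t^{2}\). A minimal sketch of such a fit is given below; the width values are placeholders chosen to be roughly consistent with a 10 \(\mu\)K cloud, not measured data.

```python
# Minimal sketch: temperature from time-of-flight expansion of a thermal cloud,
# sigma(t)^2 = sigma0^2 + (kB*T/m) * t^2. Widths are placeholder values.
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import k as kB, atomic_mass

m_40K = 39.96 * atomic_mass  # mass of a 40K atom in kg

def sigma_sq(t, sigma0_sq, T):
    return sigma0_sq + (kB * T / m_40K) * t**2

t = np.array([4.3e-3, 5.5e-3, 6.7e-3, 7.9e-3, 9.0e-3])          # expansion times (s)
sigma = np.array([0.63e-3, 0.65e-3, 0.67e-3, 0.70e-3, 0.73e-3])  # cloud widths (m)

(s0_sq, T_fit), _ = curve_fit(sigma_sq, t, sigma**2, p0=(3e-7, 2e-5))
print(f"T = {T_fit * 1e6:.1f} microkelvin")
```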
## V Magnetic trapping
The high laser cooling efficiency of the sample enabled us to transfer atoms to a conservative magnetic potential. A magnetic trap has a large capture volume and enables sympathetic cooling of \({}^{40}\)K with \({}^{23}\)Na, \({}^{41}\)K or \({}^{87}\)Rb [24; 25; 12]. In such a cooling scheme, the final temperature of the fermions is set by the ability to evaporate the coolant (e.g. \({}^{23}\)Na, \({}^{41}\)K or \({}^{87}\)Rb) and one can enter the degenerate regime without notable atom loss of \({}^{40}\)K. To transfer atoms from gray molasses to the magnetic trap we perform hyperfine pumping of the atomic population to the \({}^{2}S_{1/2},F=7/2\) hyperfine state by switching off the repumping light, then after 500 \(\mu\)s switching off the cooling light and turning on the quadrupole field. We use the highest gradient we can safely sustain in our setup, \(B^{\prime}_{\mathrm{r}}=57.6\) G/cm (note that the horizontal gradient is nearly \(B^{\prime}_{\mathrm{axial}}=115\) G/cm), which is sufficiently high to capture atoms distributed among the \(m_{\mathrm{F}}=-3/2,-5/2,-7/2\) states. If, after hyperfine pumping, the population were equally distributed between the Zeeman sublevels, we would expect a 37.5% transfer efficiency (3 of the 8 \(m_{F}\) sublevels of \(F=7/2\) are captured); we reach just under 50% of that value.

Figure 6: Examples of absorption (a,b) and fluorescence (c,d) images taken 4.6 ms (a,c) and 7.6 ms (b,d) after turning off the \(D_{1}\)-line gray molasses. Due to the timing of the shutters used in the laser systems the expansion time could not be shorter than 4.3 ms. The upper limit on the expansion time is only slightly larger than 7.6 ms due to the loss of optical density resulting from the small atom number.

We have investigated the same two cases we have considered for the MOT loading: dispensers turned on to maximize the trapped atom number, and dispensers turned off for a day with loading enhancement by LIAD. For the atom-number-maximized case we are able to magnetically trap about \(5\times 10^{4}\) atoms, with a lifetime of 0.6 s. This lifetime can be extended to over 2.8 s after keeping the dispensers off for a day and enhancing the loading rate with LIAD. This, however, results in a much lower number of trapped atoms, just above 3,000. The lifetime measurements are shown in figure 7. We have not implemented spin polarization as we have focused on a proof-of-principle demonstration of magnetic trapping, but there is nothing fundamental or technical that would prevent us from trapping a little over \(10^{5}\) spin-polarized atoms in our current setup under the condition of reduced lifetime. It is worth emphasizing that the reported lifetime is a lower bound on what can be achieved in our setup, as both spin polarization of the sample and/or trapping atoms in the \(F=9/2\) hyperfine state would increase the storage time. We were not able to reliably measure the temperature of the magnetically trapped cloud as it got diluted already after a short expansion time. This might indicate that the transfer from gray molasses introduced some heating, which would not be surprising given the nature of optical pumping and the fact that the gray molasses are slightly offset from the minimum of the magnetic potential. We believe, however, that it is primarily due to the small atom number, which is approaching the detection sensitivity of our imaging system. For the lifetime measurement in the magnetic trap the small atom number is less of an issue. We release atoms after a given hold time and re-trap them in the MOT, where the fluorescence signal is collected for 20 ms. The imaging time has been chosen such that loading of the MOT from the background gas during imaging is negligible.
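The lifetimes quoted here are obtained from decay curves such as those in figure 7; a minimal sketch of the standard exponential fit is shown below, assuming one-body background-gas losses and using placeholder numbers consistent with a 0.6 s lifetime.

```python
# Minimal sketch: magnetic-trap lifetime from recaptured atom number,
# assuming one-body background-gas losses, N(t) = N0 * exp(-t/tau).
import numpy as np
from scipy.optimize import curve_fit

def decay(t, N0, tau):
    return N0 * np.exp(-t / tau)

t = np.array([0.0, 0.2, 0.4, 0.8, 1.2, 1.6])                # hold times (s)
N = np.array([5.0e4, 3.6e4, 2.6e4, 1.3e4, 0.68e4, 0.35e4])  # recaptured atoms

(N0_fit, tau_fit), _ = curve_fit(decay, t, N, p0=(5e4, 1.0))
print(f"lifetime tau = {tau_fit:.2f} s")
```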
## VI Summary
We have presented sub-Doppler cooling of \({}^{40}\)K using a source with a natural composition of potassium isotopes and a single chamber apparatus. With \(3\times 10^{5}\) atoms at a temperature of 10 \(\mu\)K after the \(D_{1}\) gray molasses cooling stage, we demonstrate that even without isotopically enriched sources it is possible to achieve state-of-the-art cloud parameters in a single chamber setup. Our results open doors to using un-enriched fermionic potassium in modern experiments including quantum computing with atoms in optical tweezers and sympathetic cooling of magnetically trapped \({}^{40}\)K with another species. The observed deterioration of the vacuum quality when the atomic sources are operational can be a serious problem in many experiments involving evaporation. For proof-of-principle works in optical tweezers, including the formation of \({}^{40}\)KCs ground state molecules that we are pursuing, the vacuum quality might be a secondary issue. Optical or magnetic transport can later be used to move the sample to another chamber with a much better vacuum, as has been demonstrated both for atoms [26; 27] and for molecules [28], where the cloud was moved by 46 cm in just 50 ms. Our work might encourage precision spectroscopic measurements of stable potassium isotopes [29] in simplified experimental setups (no enriched sources, no Zeeman slowers, no 2D MOTs etc.) while still providing samples at \(\sim\)10 \(\mu\)K, thus significantly reducing the Doppler effect. Our results on sub-Doppler cooling of small numbers of atoms might be useful to \(\beta\)-decay experiments, where small numbers of laser cooled \({}^{37}\)K and \({}^{38m}\)K isotopes have been used [30]. Here, it is not only a matter of the reduction of the Doppler effect but, we believe, the demonstrated gain in phase space density that might improve the quality of measurements. We would like to acknowledge P. Arciszewski and J. Dobosz for their contribution to the development of the experimental setup used in this work. This research was funded by the Foundation for Polish Science within the HOMING programme and the National Science Centre of Poland (grant No. 2016/21/D/ST2/02003 and a postdoctoral fellowship for M.S., grant No. DEC-2015/16/S/ST2/00425).
2310.05964
Exploring Embeddings for Measuring Text Relatedness: Unveiling Sentiments and Relationships in Online Comments
After the COVID-19 pandemic caused internet usage to grow by 70%, there has been an increased number of people all across the world using social media. Applications like Twitter, Meta Threads, YouTube, and Reddit have become increasingly pervasive, leaving almost no digital space where public opinion is not expressed. This paper investigates sentiment and semantic relationships among comments across various social media platforms, as well as discusses the importance of shared opinions across these different media platforms, using word embeddings to analyze components in sentences and documents. It allows researchers, politicians, and business representatives to trace a path of shared sentiment among users across the world. This research paper presents multiple approaches that measure the relatedness of text extracted from user comments on these popular online platforms. By leveraging embeddings, which capture semantic relationships between words and help analyze sentiments across the web, we can uncover connections regarding public opinion as a whole. The study utilizes pre-existing datasets from YouTube, Reddit, Twitter, and more. We made use of popular natural language processing models like Bidirectional Encoder Representations from Transformers (BERT) to analyze sentiments and explore relationships between comment embeddings. Additionally, we aim to utilize clustering and KL-divergence to find semantic relationships within these comment embeddings across various social media platforms. Our analysis will enable a deeper understanding of the interconnectedness of online comments and will investigate the notion of the internet functioning as a large interconnected brain.
Anthony Olakangil, Cindy Wang, Justin Nguyen, Qunbo Zhou, Kaavya Jethwa, Jason Li, Aryan Narendra, Nishk Patel, Arjun Rajaram
2023-09-15T04:57:23Z
http://arxiv.org/abs/2310.05964v2
Exploring Embeddings for Measuring Text Relatedness: Unveiling Sentiments and Relationships in Online Comments

###### Abstract
After the COVID-19 pandemic caused internet usage to grow by 70%, there has been an increased number of people all across the world using social media. Applications like Twitter, Meta Threads, YouTube, and Reddit have become increasingly pervasive, leaving almost no digital space where public opinion is not expressed. This paper investigates sentiment and semantic relationships among comments across various social media platforms, as well as discusses the importance of shared opinions across these different media platforms, using word embeddings to analyze components in sentences and documents. It allows researchers, politicians, and business representatives to trace a path of shared sentiment among users across the world. This research paper presents multiple approaches that measure the relatedness of text extracted from user comments on these popular online platforms. By leveraging embeddings, which capture semantic relationships between words and help analyze sentiments across the web, we can uncover connections regarding public opinion as a whole. The study utilizes pre-existing datasets from YouTube, Reddit, Twitter, and more. We made use of popular natural language processing models like Bidirectional Encoder Representations from Transformers (BERT) to analyze sentiments and explore relationships between comment embeddings. Additionally, we aim to utilize clustering and KL-divergence to find semantic relationships within these comment embeddings across various social media platforms. Our analysis will enable a deeper understanding of the interconnectedness of online comments and will investigate the notion of the internet functioning as a large, interconnected brain.

Embedding models, sentiment analysis, semantic relationships, BERT, clustering, cosine similarity, KL-divergence, vector space, contextual embeddings, feature extraction

## I Introduction
The aftermath of the 2020 COVID-19 pandemic created a newfound surge of online platforms and user-generated content that revolutionized interaction, with comments being pivotal for reflecting opinions. Because of this, the relevance of social media increased, which sparked interest in this research. Public sentiment offers insight into the general populace's standing on certain topics. By using sentiment analysis (SA), which involves classifying text as containing positive, negative, or neutral sentiment, the overall sentiment and much more can be assessed. Natural Language Processing (NLP) has advanced in the field of text embeddings [1], which entails encoding text into vectors to capture word relationships [2]. This study uses models like BERT [3] and methods like dimensionality reduction and clustering to identify global trends. KL-divergence, another method we employed, measures semantic and sentiment convergence or divergence over time. Previous works in the field of NLP have explored methods like dimensionality reduction and Canonical Correlation Analysis to understand relationships between different vectors [4-5]. However, these approaches fall short of revealing a collective consensus or divergence of opinions across more than one user. In contrast, our research aims to extend this analysis to explore collective consensus or divergence of opinions on various topics.
Employing methods like K-means clustering and systems like ADRMine for pharmacovigilance and medical purposes also falls under the vast topic of NLP [6]. Techniques like the bag of character n-grams have also been utilized to identify relationships, such as detecting hate speech and toxic content on Twitter [7]. Embeddings have been used to detect deepfake images and videos as well [8]. Neri et al. [9] utilized 1000 Facebook posts and examined individual comments' opinions. This methodology provided insights into public opinions toward products. While these studies have provided an understanding of individual sentiments, they fall short in measuring collective sentiment accurately, as well as in using multiple platforms. This is an obstacle because certain platforms may contain a concentrated demographic of people who hold biases, meaning deriving public sentiment from just one platform will not give a comprehensive understanding of global, shared opinion. Our research, on the other hand, aims to analyze relationships between multiple comments and compare sentiments and semantics across multiple platforms on a larger scale. The field of opinion mining has exploited deep learning strategies to delve into the semantics of social media [10]. These approaches use models like Multi-Layered Bi-Directional Long-Short-Term Memory (MBiLSTM) [11] to achieve high accuracy when classifying comments. While these approaches are closely related to our proposal, their goals stop at classification. Other studies have introduced novel approaches to sentiment analysis, like Balahur's study of analyzing sentiment in Twitter data with minimal linguistic processing [12], as well as Wang and Li analyzing sentiment within photos [13]. However, our study focuses on analyzing sentiment strictly within text and uncovering more nuanced patterns. Additionally, studies analyzing sentiment in specific tweets [14] or utilizing the Skip-gram model, as used by Asr, Zinkov, and Jones, primarily aim to optimize a model or apply SA alone [15], rather than uncover further patterns. While the Skip-gram model has been proven to find asymmetrical relationships with high accuracy, its parameterization is elaborate and complex [16]. Our focus extends beyond optimizing word embedding algorithms, deriving an expanded spectrum of patterns through multiple models. Extracting meaningful insights from social media comments poses a challenge due to their unstructured nature. Because of this, we propose a novel study addressing key challenges, distinguishing us from prior works. By analyzing sentiment across platforms, we transcend SA limitations. This approach captures the interconnections and attitudes among online users. Moving beyond sentiment classification, our study uncovers underlying relationships via clustering and KL-divergence. This research offers a more complete understanding of sentiment dynamics. This paper is organized as follows: Section II covers tested models, their working diagrams, datasets used, and trend analysis metrics. Section III discusses accuracy metrics, visualizes the model results, and presents key takeaways. Section IV concludes the paper and discusses options for building further on this topic.

## II Methods
### Data Collection
Data for this project was sourced from YouTube, Reddit, Twitter, and Amazon. Various contexts were analyzed, exploring cross-platform internet dynamics and mitigating potential biases.
We used the "Trending YouTube Video Statistics and Comments" dataset [17], containing views, likes, category, and comments for top daily YouTube videos. Another YouTube dataset from Kaggle contained over 270,000 comments [18]. To diversify our analysis, we incorporated the Sentiment140 dataset, with 1,600,000 tweets and sentiment labels, which are useful for prediction models [19]. For a broader scope, 5 distinct Subreddit pages represented topics like news, gaming, movies, the NFL, and relationships [20]. Finally, our testing encompassed an Amazon Product Reviews dataset, a compendium of more than 568,000 consumer reviews [21]. Our combined dataset contained over 2.6 million data points. Among these datasets, there were 4 different platforms. However, except for the Amazon reviews, there were two different datasets for each platform, totaling up to 7 datasets. Fairly equal data distribution among platforms ensured an accurate, comprehensive view of patterns. In various tests, we utilized this dataset, not singular platforms. #### 2.1.1 Dataset Cleaning Comments on social media contain noise, irregularities, and format variations, necessitating rigorous data cleaning. Emojis were removed, hashtags were dropped, and URLs were replaced. All letters were set to lowercase due to their unhelpful and potentially distracting nature during SA. Initial word position affects how the model treats it, but punctuation removal distorts the original meaning. #### 2.2 Embeddings Embedding models were used to project text into a multi-dimensional space. Differing from previous research, we used more than one NLP model to garner more information, gain a more complete understanding of the interconnections of the internet, and investigate which models performed best for our specific tasks. After evaluating various embedding models, we investigated both word and sentence embeddings to better understand the text at two different levels of granularity. #### 2.2.1 BERT One of the embedding models we used in our analysis was BERT (Bidirectional Encoder Representations from Transformers), an NLP model developed by Google. BERT is unique in its use of bidirectional context analysis, with a novel approach of Masked Language Modeling (MLM) that allows BERT to predict missing words in a sentence [3]. Figure 1 shows a pipeline of BERT's embedding process. #### 2.2.2 GloVe Another embedding model tested in our experiments is the GloVe (Global Vectors) model, an unsupervised learning algorithm for developing word embeddings created by Stanford University. It derives semantic relationships using both local and global statistics. #### 2.2.3 AlBERT A Lite BERT, named ALBERT, utilizes architecture similar to BERT. However, we discovered it performs much better than BERT due to its capability to generate superior SA accuracies. It is also less computationally intensive, utilizing cross-layer parameter sharing. #### 2.2.4 RoBERTa Created by Facebook AI, RoBERTa (a Robustly Optimized BERT Training Approach), utilizes a similar ar architecture to BERT. When initially pre-trained, researchers conducted a more extensive hyper-parameter optimization search than what was done with BERT. We looked into 3 checkpoints of RoBERTa including sentiment-RoBERTa-large-english (SiEBERT), Sn-Xlm-RoBERTa-Base-Snli-Mnli-Anli-Xmli (XLM-RoBERTa), and Ko-sRoBERTa-Multitask, and found varying results. 
#### 2.2.5 Instructor-xl
Instructor-xl is an embedding model that generates text embeddings for specific domains and tasks by using task-related text instructions. It extracts relevant information from its pre-trained data to create detailed embeddings based on domain-specific jargon, all without requiring fine-tuning or hyper-parameter adjustments. However, when creating clusters, trading quality for convenience proved a poor bargain: the model was computationally expensive, and the clusters it generated were of poor quality.

#### 2.2.6 All Mpnet Base
All Mpnet Base is another embedding model that maps both sentences and paragraphs into a vector space. All Mpnet Base can determine sentence similarity, which aligns with our goals of finding relationships and patterns between comments across various social media platforms.

#### 2.2.7 ELMo
This word embedding model, similar to GloVe, is called ELMo (Embeddings from Language Models). It uses a deep, bi-directional LSTM model to create word representations, which allows ELMo to generate different embeddings for the same word used in different contexts.

#### 2.2.8 FastText
The last model we tested was the FastText model, chosen because it generated embeddings much quicker than other models. However, we found the model could not classify well, with relatively low accuracies for SA.

### Relationship Measurements
To determine the relationship across comments, we utilized many techniques, including standard cosine similarity. Once the words had been embedded, the following equation could be used to determine the similarity between texts, where a value of 1 indicates strong similarity and -1 indicates opposite meanings. \[\cos\theta=\frac{\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|\cdot\|\mathbf{B}\|}=\frac{\sum_{i=1}^{n}A_{i}\cdot B_{i}}{\sqrt{\sum_{i=1}^{n}A_{i}^{2}}\cdot\sqrt{\sum_{i=1}^{n}B_{i}^{2}}} \tag{1}\] Equation 1 demonstrates how to determine the cosine similarity between two vectors, \(\mathbf{A}\) and \(\mathbf{B}\); \(n\) is the length of the vectors and \(i\) is the index variable used to sum over the \(n\) elements of the vectors.
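A minimal numerical sketch of Equation 1 with two made-up embedding vectors (real comment embeddings have hundreds of dimensions):

```python
# Minimal sketch of Equation 1: cosine similarity between two embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([0.2, 0.7, -0.1])  # toy embedding vectors
b = np.array([0.25, 0.6, 0.0])
print(cosine_similarity(a, b))  # close to 1 for semantically similar texts
```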
Other methods we used to measure relations include clustering, for which we used K-means, Principal Component Analysis (PCA), weighted averages, and KL-divergence. PCA is a valuable technique for extracting features. Because of our large dataset, PCA was useful for retaining only the most important features in a vector, which can expedite training times and reduce noise. As seen in Figure 2, the plateau represents a point of diminishing returns. Clustering was utilized to place each comment in a specific category or "cluster". To determine how many clusters to create, we used the elbow method to find the optimal value of K. The y-axis, "Inertia", indicates the sum of the squared distances of data to an assigned cluster centroid. The x-axis indicates the number of clusters, as shown in Figure 3. The point where the graph starts to plateau or form an "elbow" indicates the most optimal number of n_clusters for the data. We measured the quality of the clusters with the silhouette scoring metric, where a score of -1 means objects are closer to neighboring clusters than to the centroid of their assigned cluster, 0 means there are overlapping clusters, and a score close to +1 indicates high-quality, distinct clusters. Finally, we measured the relative entropy, or KL-divergence, of two probability distributions. A divergence of 0 means the two distributions are identical, while larger values indicate increasingly dissimilar distributions. Experiments involving KL-divergence revealed themes in terms of sentiment and semantics across specific time periods.

Fig. 1: BERT Model. Source: Adapted from [22]

Fig. 2: Principal Component Analysis (PCA) Graph for Feature Extraction, a Preprocessing Method

The formula for KL-divergence is shown in Equation 2, where \(P(i)\) and \(Q(i)\) are the probabilities of the \(i\)th category according to the distributions \(P\) and \(Q\), respectively. \[KL\left(P\,\|\,Q\right)=\sum_{i}P(i)\log\left(P(i)/Q(i)\right) \tag{2}\]
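A minimal sketch of Equation 2 for two categorical distributions, such as normalized sentiment or word-frequency histograms from two time periods (the toy probabilities are illustrative):

```python
# Minimal sketch of Equation 2: KL-divergence between categorical distributions.
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    p = p / p.sum()  # normalize so each array is a probability distribution
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

p = np.array([0.5, 0.3, 0.2])  # toy category probabilities, period 1
q = np.array([0.2, 0.3, 0.5])  # toy category probabilities, period 2
print(kl_divergence(p, q))     # 0 iff the two distributions are identical
```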
## III Results
### Initial Embedding Model Testing
We tested a variety of embedding models to determine which would best fit our needs, eventually narrowing our choice down to one. Most of these required no fine-tuning. We were able to compare the performance of the various models across multiple different tasks such as clustering, KL-divergence, and sentiment analysis. All tests were run with the mixed-platform dataset.

#### 3.1.1 RoBERTa
We investigated three different RoBERTa-based models, finding two had quite similar results. The first model was Ko_RoBERTa_Multitask, a Hugging Face model based on RoBERTa's architecture. We tested this model on multiple different tasks, including clustering and sentiment analysis. In SA, we found its accuracy to be 74.28% when tested on the mixed-platform dataset. When clustering, the model achieved the lowest silhouette score of all the models, 0.027. The next RoBERTa-based model was called XLM-RoBERTa. This model achieved 77.36% accuracy when tested for sentiment analysis and a 0.14 silhouette score for its clusters. Figure 4 is a visualization of K-means clustering of embeddings using the XLM-RoBERTa model. The clusters lacked any type of structure or definition. The final RoBERTa-based model tested was sentiment-RoBERTa-large-english (SiEBERT). The model performed optimally with a 99.37% accuracy when tested on the mixed dataset for SA, as well as a 0.83 silhouette score for clustering.

#### 3.1.2 Instructor-xl
Multiple tests were run on Instructor-xl to find trends, but the model proved of little use in comparison to the others. Despite its convenience with the domain feature, the quality of the embeddings it generated was subpar. Primarily aiming to leverage its clustering feature, we achieved a silhouette score of 0.066. Instructor-xl struggled with complex language and thus created many overlapping clusters.

#### 3.1.3 ALBERT
We trained this model on multiple datasets. Given that its vector embeddings were more detailed than Instructor-xl's, it was able to generate better clusters, with a silhouette score of 0.24. However, this was still relatively low compared to SiEBERT.

#### 3.1.4 FastText
We trained Facebook's FastText model on the mixed-platform dataset, achieving only 63.29% accuracy. This is likely due to its outdated context measurement and insufficient pre-training. Embedding quality directly affects SA accuracy, and here it was inadequate. Because of this, we decided not to pursue it any further in other experiments, like clustering.

#### 3.1.5 Comparison of Results
The final model selection was SiEBERT. After numerous other tests, SiEBERT proved to have the best performance across all methods, namely categorical KL-divergence, K-means clustering, and sentiment classification.

Fig. 3: Elbow Method Graph for Ko_RoBERTa_Multitask

Fig. 4: XLM-RoBERTa K-means Clustering Graph

#### 3.2 Further Analysis with SiEBERT
#### 3.2.1 SA
Once SA was run on multiple individual-platform datasets, as well as the mixed dataset containing content from 4 different platforms, the general sentiment was determined for each to help better understand the clustering assignments. In each instance, the weighted average of sentiment classifications was taken, with an average of -1 being fully negative and +1 being fully positive. We found everything to be largely balanced, as most only tipped slightly towards a negative sentiment. SiEBERT's sentiment prediction distributions were approximately 40% positive and 60% negative within the mixed dataset. This shows there is an almost equal distribution of sentiment in the data, with an edge toward negative sentiment. The following table contains the weighted averages found in different datasets.

#### 3.2.2 Clustering
The Elbow Method was used to conveniently find the optimal number of clusters, rather than manual hyper-parameter optimization. The number of clusters generated reveals how the comments varied, but given that only 4-6 clusters were needed for our mixed dataset, it was understood that semantics and the overall attitude within each cluster were strongly correlated. The main experiment run using this model was clustering random comments from a mixed-platform dataset, generating clusters only if the comments met a certain similarity threshold, 0.8. This ensured the comments within a cluster revolved around the same object, entity, or theme. By running these tests, the magnitude of shared sentiment can be better understood. These tests were successful, generating clusters with a consistent silhouette score of 0.83. As seen in Figure 5, the clusters generated were far more distinct than in previous models. Because there were multiple, well-separated clusters, it was made clear the internet may not be highly interconnected. This may be true within each cluster, as thoughts and semantics are related, but as a whole, viewpoints strongly differ regarding similar topics.
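A minimal sketch of the clustering pipeline described above (PCA for feature reduction, elbow-style inertia inspection, K-means, and silhouette scoring); random vectors stand in for the real comment embeddings, and the specific parameter values are illustrative:

```python
# Minimal sketch: PCA + K-means + silhouette scoring on comment embeddings.
# Random vectors stand in for real embeddings produced by an NLP model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 768))       # placeholder comment embeddings

reduced = PCA(n_components=50).fit_transform(embeddings)

# Elbow method: inspect inertia as a function of the number of clusters.
inertias = {k: KMeans(n_clusters=k, n_init=10).fit(reduced).inertia_
            for k in range(2, 10)}

labels = KMeans(n_clusters=5, n_init=10).fit_predict(reduced)
print(silhouette_score(reduced, labels))       # +1: distinct; 0: overlapping
```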
#### 3.2.3 KL Divergence
The second main test we ran with SiEBERT was categorical KL-divergence. Using an Amazon review dataset with more than 500,000 reviews, we looked at 2 time periods, 2004-2008 and 2009-2012, and a divergence of 0.87 was determined. This showed that the way reviews were written, the sentiment displayed by the text, and the selection of words drastically changed over time, as a large KL-divergence indicates very different probability distributions. Given this information, we concluded the internet is not very consistent and is easily influenced.

#### 3.2.4 Additional Analysis
Various other tests were run for further data analysis, like calculating the general sentiment. This can be used in favor of politicians and businessmen. Using a dataset from Twitter, we found the overall sentiment towards the Prime Minister of India, Narendra Modi, was -0.2, where an average of -1 meant everything was classified as negative. However, acknowledging that the possible usage of bots would skew results, this sentiment may not be entirely accurate, as bots can also impact the measured opinion to a degree. Still, based on these results, there is not an overwhelming favor to one side or the other, and the opinions expressed are roughly equally divided.

## IV Conclusion
This paper explores the significance of word embeddings in analyzing components of major platforms. While evaluating various embedding models, we narrowed down our model selection to RoBERTa-based models, which showed the most promising results with an average SA accuracy of 85%. Most notably, SiEBERT excelled in SA and classified with an accuracy of 99.37%. Internet opinions appear evenly split, the evidence being the sentiment classification distributions. The use of KL-divergence highlighted the semantic deviation in Amazon reviews over 12 years, indicating large changes in the way user reviews were written and expressed over time. Clustering online comments with SiEBERT obtained a silhouette score of 0.83; SiEBERT revealed distinct clusters but also diverse internet viewpoints.

Fig. 5: SiEBERT Clustering Graph

Several other experiments were conducted, specifically focusing on finding attitudes toward public figures, like the Prime Minister of India, Narendra Modi. Our findings revealed a slightly negative public opinion towards Modi as of 2015, which, if used in further studies with newer data, could provide valuable information for politicians on how they are perceived on various platforms. However, the usage of bots and automated responses may have skewed the results. This notion applies to all of our experiments, where bots could most definitely impact the results found. Our experiments on our data (although a relatively small representation of the internet) all point to the notion that the internet is not that interconnected after all, and the data analyzed had a relatively equal balance of positive and negative sentiment. In the future, an option for improvement of this research involves creating new data with a way to filter out bots and pre-generated responses, mitigating their influence. This would help to find the genuine opinions of the people. Possible topics for exploration include extending the principles to explore sentiment variations across global regions, which could be used to analyze political rifts and tensions. Additionally, expanding to more niche social media platforms would allow for a broader look at the internet as a whole. Incorporating data from platforms catering to specific demographics could reveal biases and nuanced patterns. This includes region-exclusive and age-targeted platforms. Expanding this project's focus, we could analyze specific comment sections of individual videos, studying divergence or convergence trends over time. An intriguing aspect to explore would be to determine if top comments shape the tone of subsequent comments over time. Furthermore, studies could delve deeper into SA within specific industries, particularly marketing and politics. This would open doors for more practical applications of SA findings.

### _Acknowledgements_
We would like to thank all those who have assisted us throughout the research process, especially the BLAST AI research program. Finally, we would like to express our gratitude to the researchers whose previous work has served as an inspiration and foundation for this study. We would also like to acknowledge Kaggle and the authors of the Kaggle datasets we used.
2309.13915
Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds
Policy gradient methods equipped with deep neural networks have achieved great success in solving high-dimensional reinforcement learning (RL) problems. However, current analyses cannot explain why they are resistant to the curse of dimensionality. In this work, we study the sample complexity of the neural policy mirror descent (NPMD) algorithm with deep convolutional neural networks (CNN). Motivated by the empirical observation that many high-dimensional environments have state spaces possessing low-dimensional structures, such as those taking images as states, we consider the state space to be a $d$-dimensional manifold embedded in the $D$-dimensional Euclidean space with intrinsic dimension $d\ll D$. We show that in each iteration of NPMD, both the value function and the policy can be well approximated by CNNs. The approximation errors are controlled by the size of the networks, and the smoothness of the previous networks can be inherited. As a result, by properly choosing the network size and hyperparameters, NPMD can find an $\epsilon$-optimal policy with $\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})$ samples in expectation, where $\alpha\in(0,1]$ indicates the smoothness of environment. Compared to previous work, our result exhibits that NPMD can leverage the low-dimensional structure of state space to escape from the curse of dimensionality, explaining the efficacy of deep policy gradient algorithms.
Zhenghao Xu, Xiang Ji, Minshuo Chen, Mengdi Wang, Tuo Zhao
2023-09-25T07:31:22Z
http://arxiv.org/abs/2309.13915v2
Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds

###### Abstract
Policy-based algorithms equipped with deep neural networks have achieved great success in solving high-dimensional policy optimization problems in reinforcement learning. However, current analyses cannot explain why they are resistant to the curse of dimensionality. In this work, we study the sample complexity of the neural policy mirror descent (NPMD) algorithm with convolutional neural networks (CNN) as function approximators. Motivated by the empirical observation that many high-dimensional environments have state spaces possessing low-dimensional structures, such as those taking images as states, we consider the state space to be a \(d\)-dimensional manifold embedded in the \(D\)-dimensional Euclidean space with intrinsic dimension \(d\ll D\). We show that in each iteration of NPMD, both the value function and the policy can be well approximated by CNNs. The approximation errors are controlled by the size of the networks, and the smoothness of the previous networks can be inherited. As a result, by properly choosing the network size and hyperparameters, NPMD can find an \(\epsilon\)-optimal policy with \(\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})\) samples in expectation, where \(\alpha\in(0,1]\) indicates the smoothness of the environment. Compared to previous work, our result exhibits that NPMD can leverage the low-dimensional structure of the state space to escape from the curse of dimensionality, providing an explanation for the efficacy of deep policy-based algorithms.

## 1 Introduction
Deep Reinforcement Learning (DRL) is a popular approach for solving complex decision-making problems in various domains. DRL methods, especially policy-based ones including DDPG (Lillicrap et al., 2015), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017), are able to handle high-dimensional state spaces efficiently by leveraging function approximation with neural networks. For instance, in Atari games (Brockman et al., 2016), the states are images of size \(210\times 160\) with RGB color channels, resulting in a continuous state space of dimension \(100,800\), to which tabular algorithms such as policy iteration (Puterman, 1994) and policy mirror descent (PMD, Lan 2023) are not applicable. Surprisingly, such a high dimension does not seem to significantly impact the efficacy of the aforementioned DRL algorithms. Despite the empirical success of these DRL methods in high dimensions, there currently exist no satisfactory theoretical results that can explain the reason behind it. Most of the existing works on function approximation in RL focus on the linear function class (Agarwal et al., 2021; Alfano and Rebeschini, 2022; Yuan et al., 2023). They assume the value function and the policy can be well approximated by linear functions of features representing states and actions (Jin et al., 2020), which is restrictive and requires feature engineering. One way to relax such a linearity assumption is to consider the reproducing kernel Hilbert space (RKHS), which allows nonlinear function approximation (Agarwal et al., 2020; Yang et al., 2020) through random features (Rahimi and Recht, 2007). However, the commonly used reproducing kernels such as the Gaussian radial basis function and the randomized ReLU kernel suffer from the curse of dimensionality in sample complexity without additional smoothness assumptions (Bach, 2017; Yehudai and Shamir, 2019; Hsu et al., 2021).
Moreover, some researchers study neural network approximation in the neural tangent kernel (NTK) regime, which is equivalent to an RKHS (Jacot et al., 2018; Liu et al., 2019; Wang et al., 2019). As a consequence, these results inherit the curse of dimensionality from general RKHS. Some other works investigate neural network approximation from a non-parametric perspective (Fan et al., 2020; Nguyen-Tang et al., 2022), but they consider value-based methods where only the value function is approximated and also suffer from the curse of dimensionality. There are alternative lines of work on policy optimization with general function approximation, but they either assume the functions can be well approximated without any further verification (Xie et al., 2021; Lan, 2022; Alfano et al., 2023), or require some strong assumptions such as three-times differentiability (Yin et al., 2022). One possible explanation for the empirical effectiveness of DRL algorithms is the adaptivity of neural networks to the intrinsic low-dimensional structure of the state space. In the Atari games example, the images share common textures and are rendered using just a small number of internal states, such as the type, position, and angle of each object; thus the intrinsic dimension of the state space is in fact very small compared to the data dimension of 100,800. However, the extent of this adaptivity is yet to be explored in the DRL literature. To bridge this gap between theory and practice, we propose to investigate neural policy optimization within environments possessing low-dimensional state space structures. Specifically, we consider the infinite-horizon discounted Markov decision process (MDP) with continuous state space \(\mathcal{S}\), finite action space \(\mathcal{A}\), and discount factor \(\gamma\). We focus on the sample complexity of the neural policy mirror descent (NPMD) method. NPMD is based on the actor-critic framework (Konda and Tsitsiklis, 1999; Grondman et al., 2012) where both the policy (actor) and the value function (critic) are approximated by neural networks. It can be viewed as an extension of the PMD method (Lan, 2023) with neural network approximation. NPMD-type methods including TRPO (Schulman et al., 2015) and PPO (Schulman et al., 2017) are widely used in applications like game AI (Berner et al., 2019) and fine-tuning large language models (Ziegler et al., 2019; Ouyang et al., 2022). Moreover, instead of working on a general Euclidean space, we assume the state space to be a \(d\)-dimensional manifold embedded in the \(D\)-dimensional Euclidean space where \(d\ll D\). We summarize our main contributions as follows:

1. We first investigate the universal approximation of the convolutional neural network (CNN), which is a popular architecture for image data. We show that under the Lipschitz MDP condition (Assumption 4), CNN can well approximate both the value function and the policy (Theorem 2, Corollary 1). Compared to the results in Liu et al. (2019) and Yuan et al. (2023), our analysis decouples the regularity conditions from algorithmic specifications. For example, in Liu et al. (2019), the value functions are assumed to be in a network width-dependent set that approximates the NTK-induced RKHS, while our regularity assumptions are not based on the network architecture in advance. In Yuan et al. (2023), the approximation error is highly related to the design of the feature map.
Based on CNN function approximation, we then derive an \(\widetilde{O}\big{(}|\mathcal{A}|^{\frac{d}{2\alpha}+2}(1-\gamma)^{-\frac{4d}{\alpha}-10}\epsilon^{-\frac{d}{\alpha}-2}\big{)}\) sample complexity bound for NPMD with CNN approximation to find a policy whose value function is at most \(\epsilon\) away from the global optimum in expectation (Corollary 2). Here, \(\alpha\in(0,1]\) is the exponent of the Lipschitz condition and \(\widetilde{O}(\cdot)\) hides the logarithmic terms and some coefficients related to distribution mismatch and concentrability (see Assumptions 2 and 3). Compared to the results in Fan et al. (2020) and Nguyen-Tang et al. (2022), the curse of dimensionality (exponential dependence on \(D\)) is avoided by exploiting the intrinsic \(d\)-dimensional structure. To the best of our knowledge, this is the first sample complexity result for policy gradient methods with deep neural network function approximation. Table 1 compares our result with existing ones.

| Algorithm | Approximation | Regularity | Complexity | Remark |
| --- | --- | --- | --- | --- |
| NPG (P) (Yuan et al., 2023) | Linear | Linear (D) | \(\widetilde{O}(\epsilon^{-2})\) | strong regularity |
| NPG (P) (Wang et al., 2019) | NTK (2-layer ReLU) | RKHS (D) | \(\widetilde{O}(\epsilon^{-4})\) | strong regularity |
| NPPO (P) (Liu et al., 2019) | NTK | RKHS (D) | \(\widetilde{O}(\epsilon^{-4})\) | strong regularity |
| FQI (V) (Fan et al., 2020) | FNN (Deep ReLU) | Hölder (I) | \(\widetilde{O}(\epsilon^{-\frac{D}{\alpha}-2})\) | curse of dimension |
| FQI (V) (Nguyen-Tang et al., 2022) | FNN | Besov (I) | \(\widetilde{O}(\epsilon^{-\frac{2D}{\alpha}-2})\) | curse of dimension |
| NPMD (P) (this paper) | CNN (Deep ReLU) | Lipschitz (I) | \(\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})\) | \(d\ll D\) |

Table 1: Comparison with existing results. The algorithms are classified as value-based (V) or policy-based (P). NPG refers to the natural policy gradient (Kakade, 2001). FQI is analogous to deep Q-learning (Mnih et al., 2013). NPPO and NPMD are analogous to the PPO algorithm (Schulman et al., 2017). The regularity assumptions are made on the value functions to be approximated, and they can be algorithm-dependent (D) or algorithm-independent (I). The complexity is the expected number of samples required to find an \(\epsilon\)-optimal policy, whose value function is at most \(\epsilon\) away from the global optimum in expectation.

The rest of this paper is organized as follows. In Section 2, we introduce the background and preliminaries, including MDP, CNN, and Riemannian manifold. In Section 3, we present the NPMD algorithm with a detailed formulation. We provide our main results in Section 4, and we make some concluding remarks in Section 5. Proof details are provided in the appendix. Some preliminary results for this paper have been presented in Ji et al. (2022), which focuses on policy evaluation only; here we extend the scope to policy optimization.

### Related Work

Our work builds on previous studies of policy gradient methods with function approximation as well as deep supervised learning on manifolds. **Policy gradient methods.** The policy gradient method (Williams, 1992; Sutton et al., 1999) was first developed under the compatible function approximation framework.
The natural policy gradient (NPG) method (Kakade, 2001) extends the policy gradient method by incorporating the geometry of the parameter space to improve convergence properties. Trust region policy optimization (TRPO, Schulman et al., 2015) and proximal policy optimization (PPO, Schulman et al., 2017) are modern variants of policy gradient methods with neural network function approximation that use constraints or penalties to prevent aggressive updates, resulting in more stable and efficient learning. Other notable methods include the deep deterministic policy gradient (DDPG, Lillicrap et al., 2015), asynchronous advantage actor-critic (A3C, Mnih et al., 2016), and mirror descent policy optimization (MDPO, Tomar et al., 2020) algorithms. These modern methods are often used to handle high-dimensional state spaces and have been shown to achieve state-of-the-art results in a variety of RL domains. For example, the PPO algorithm and its variants are used in training some of the most advanced artificial intelligence systems, such as OpenAI Five (Berner et al., 2019) and GPT-4 (OpenAI, 2023). From a theoretical perspective, policy gradient methods such as NPG and PPO can be unified under the generalized PMD framework (Geist et al., 2019; Shani et al., 2020; Xiao, 2022; Lan, 2023). **Linear function approximation.** The majority of existing research on function approximation considers the linear function class (Agarwal et al., 2021; Alfano and Rebeschini, 2022; Yuan et al., 2023), which is so far the only known option for the compatible function approximation framework (Sutton et al., 1999). However, these linear function approximation methods are restrictive. Only in simple environments, such as linear MDPs (Jin et al., 2020), can high approximation quality be guaranteed, and this necessitates carefully designed features. Regrettably, the task of crafting such features is either infeasible or demands substantial effort from domain experts, and any misspecification of features could lead to exponential error (Du et al., 2020). **Reproducing kernel approach.** The reproducing kernel Hilbert space (RKHS) has been adopted to relax the limitation of the linear function class and to enable more expressive nonlinear function approximation (Agarwal et al., 2020; Yang et al., 2020). To achieve efficient computation, random features are employed (Rahimi and Recht, 2007). Nevertheless, the RKHS suffers from the curse of dimensionality, which hinders its performance on high-dimensional problems. **Neural tangent kernel.** One approach to investigating the function approximation capabilities of neural networks is through the use of the neural tangent kernel (NTK, Jacot et al. 2018; Liu et al. 2019; Wang et al. 2019; Yang et al. 2020). The NTK approach can be viewed as training a neural network with gradient descent under a specific regime, and as the width of the neural network approaches infinity, it converges to an RKHS. As a consequence, like other RKHS approaches, the NTK approach suffers from the curse of dimensionality, limiting its performance on high-dimensional problems. Additionally, some literature has pointed out that the NTK is susceptible to the kernel degeneracy problem (Chen and Xu, 2020; Huang et al., 2020), which may impact its overall learnability. **Non-parametric neural network approximation.** The non-parametric approach has been adopted to study the sample complexity of neural function approximation in RL under mild smoothness assumptions, such as in Fan et al. (2020) and Nguyen-Tang et al. (2022).
These analyses are mainly focused on value-based methods and are not applicable to policy gradient methods due to the lack of smoothness in neural policies. **Deep supervised learning on manifolds.** Parallel to DRL, existing work on deep supervised learning extensively studies the adaptivity of neural networks to the intrinsic low-dimensional data manifold embedded in high-dimensional ambient space, and how this adaptivity helps neural networks escape from the curse of dimensionality. In deep supervised learning, it has been shown that the sample complexity's exponential dependence on the ambient dimension \(D\) can be replaced by a dependence on the manifold dimension \(d\) (Chen et al., 2019; Schmidt-Hieber, 2019; Liu et al., 2021). These analyses focus on fitting a single target function whose smoothness is predetermined by the nature of the learning task, while in our setting, the target functions are policies and their associated value functions, whose smoothness can get worse in each iteration.

### Notation

For \(n\in\mathbb{N}\), \([n]\coloneqq\{i\mid 1\leq i\leq n\}\). For \(a\in\mathbb{R}\), \(\lceil a\rceil\) denotes the smallest integer no less than \(a\). For \(a,b\in\mathbb{R}\), \(a\lor b\coloneqq\max(a,b)\). For a vector, \(\left\lVert\cdot\right\rVert_{p}\) denotes the \(p\)-norm for any \(1\leq p\leq+\infty\). For a matrix, \(\left\lVert\cdot\right\rVert_{0}\) denotes the number of nonzero entries and \(\left\lVert\cdot\right\rVert_{\infty}\) denotes the maximum magnitude of entries. For a finite set, \(\left\lvert\cdot\right\rvert\) denotes its number of elements. For a function \(f\colon\mathcal{X}\to\mathbb{R}\), \(\left\lVert f\right\rVert_{\infty}\) denotes the maximal value of \(\left\lvert f\right\rvert\) over \(\mathcal{X}\). Given a distribution \(\rho\) on \(\mathcal{X}\), we use \(f(\rho)\coloneqq\mathbb{E}_{x\sim\rho}\big{[}f(x)\big{]}\) to denote the expected value of \(f(x)\) where \(x\sim\rho\). Given distributions \(\mu\) and \(\nu\) on \(\mathcal{X}\), the total variation distance is defined as \(d_{\mathrm{TV}}(\mu,\nu)\coloneqq\sup_{A\in\Sigma}|\mu(A)-\nu(A)|\), where \(\Sigma\) is the collection of measurable sets on \(\mathcal{X}\). When \(\mu\) is absolutely continuous with respect to \(\nu\), the Pearson \(\chi^{2}\)-divergence is defined as \(\chi^{2}(\mu,\nu)\coloneqq\mathbb{E}_{\nu}[(\frac{\mathrm{d}\mu}{\mathrm{d}\nu}-1)^{2}]\), where \(\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\) denotes the Radon-Nikodym derivative. Let \(\mathcal{A}\) be a finite set. We denote \(P^{|\mathcal{A}|}\coloneqq\{(p_{a})_{a\in\mathcal{A}}\mid p_{a}\in P\}\) as the Cartesian product of \(P\)'s indexed by \(\mathcal{A}\), \(\mathbf{1}\coloneqq(1)_{a\in\mathcal{A}}\in\mathbb{R}^{|\mathcal{A}|}\) as the vector with all entries being \(1\), and \(\Delta_{\mathcal{A}}\coloneqq\big{\{}p\in\mathbb{R}^{|\mathcal{A}|}\mid\sum_{a\in\mathcal{A}}p_{a}=1,p_{a}\geq 0\big{\}}\) as the probability simplex over \(\mathcal{A}\), and we define the inner product \(\langle\cdot,\cdot\rangle:\mathbb{R}^{|\mathcal{A}|}\times\mathbb{R}^{|\mathcal{A}|}\to\mathbb{R}\) as \(\langle p,q\rangle\coloneqq\sum_{a\in\mathcal{A}}p_{a}q_{a}\).
For a map \(\pi\colon\mathcal{S}\to\Delta_{\mathcal{A}}\), we use \(h^{\pi}(s)\coloneqq\langle\log\pi(s),\pi(s)\rangle\) to denote the negative entropy of \(\pi\) at \(s\in\mathcal{S}\), where \(\log(\cdot)\) is applied entrywise, and we denote the Kullback-Leibler (KL) divergence between two distributions \(\pi^{\prime}(s)\) and \(\pi(s)\) by \(D_{\pi}^{\pi^{\prime}}(s)\coloneqq\langle\log\pi^{\prime}(s)-\log\pi(s),\pi^{\prime}(s)\rangle\geq 0\).

## 2 Background

We introduce the problem setting and briefly review the Markov decision process, Riemannian manifold, and convolutional neural networks.

### Markov Decision Process

We consider an infinite-horizon discounted Markov decision process (MDP) denoted as \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},c,\gamma)\), where \(\mathcal{S}\subseteq\mathbb{R}^{D}\) is a continuous state space in \(\mathbb{R}^{D}\), \(\mathcal{A}\) is a finite action space, \(\mathcal{P}\) is the transition kernel that describes the next state distribution \(s^{\prime}\sim\mathcal{P}(\cdot|s,a)\) at state \(s\in\mathcal{S}\) when action \(a\in\mathcal{A}\) is taken, \(c\colon\mathcal{S}\times\mathcal{A}\to[0,C]\) is a cost function bounded by some constant \(C>0\), and \(\gamma\in(0,1)\) is a discount factor. A _stochastic policy_ \(\pi\colon\mathcal{S}\to\Delta_{\mathcal{A}}\) describes the behavior of an agent. For any state \(s\in\mathcal{S}\), \(\pi(\cdot|s)\in\Delta_{\mathcal{A}}\) gives a conditional probability distribution over the action space \(\mathcal{A}\), where \(\pi(a|s)\) is the probability of taking action \(a\) at state \(s\). Given a policy \(\pi\), the expected cost starting from state \(s\) is given by the _state value function_ \[V^{\pi}(s)=\mathbb{E}_{\begin{subarray}{c}a_{t}\sim\pi(\cdot|s_{t}),\\ s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\end{subarray}}\bigg{[}\sum_{t=0}^{\infty}\gamma^{t}c(s_{t},a_{t})\biggm{|}s_{0}=s\bigg{]}.\] The goal of policy optimization is to learn an optimal policy \(\pi^{\star}\) by solving a stochastic optimization problem, where the objective function is the expected value function for a given initial state distribution \(\rho\) (note that the optimal policy \(\pi^{\star}\) does not depend on the choice of \(\rho\)): \[V^{\star}(\rho)\coloneqq V^{\pi^{\star}}(\rho)=\min_{\pi}\ \mathbb{E}_{s\sim\rho}\big{[}V^{\pi}(s)\big{]}. \tag{1}\] A policy \(\pi\) is called \(\epsilon\)_-optimal_ if \[V^{\pi}(\rho)-V^{\star}(\rho)\leq\epsilon.\] In our _generative model_ setting, the algorithm cannot directly access the transition kernel \(\mathcal{P}\) and the cost function \(c\). Instead, the algorithm can provide any state-action pair to the environment and get \(s^{\prime}\sim\mathcal{P}(\cdot|s,a)\) and \(c_{s,a}=c(s,a)\) by invoking a sample oracle. We also assume a sample oracle that can output \(s\sim\rho\). The (expected) number of sample oracle calls required to obtain an \(\epsilon\)-optimal policy is referred to as the _sample complexity_ of the algorithm. The state value function is closely related to the _state-action value function_, which is the expected cost starting from state \(s\) and taking action \(a\): \[Q^{\pi}(s,a)=\mathbb{E}_{\begin{subarray}{c}s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t}),\\ a_{t+1}\sim\pi(\cdot|s_{t+1})\end{subarray}}\biggl{[}\sum_{t=0}^{\infty}\gamma^{t}c(s_{t},a_{t})\biggm{|}s_{0}=s,a_{0}=a\biggr{]}.\] By definition, the value functions are bounded: \[0\leq V^{\pi}(s)\leq\frac{C}{1-\gamma},\quad 0\leq Q^{\pi}(s,a)\leq\frac{C}{1-\gamma}.
\tag{2}\] The value functions satisfy the following relations: \[V^{\pi}(s)=\langle Q^{\pi}(s,\cdot),\pi(\cdot|s)\rangle=\mathbb{E}_{a\sim\pi(\cdot|s)}[Q^{\pi}(s,a)], \tag{3}\] \[Q^{\pi}(s,a)=c(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim\mathcal{P}(\cdot|s,a)}[V^{\pi}(s^{\prime})]. \tag{4}\] For the convenience of analysis, we define recursively \[\mathcal{P}_{0}^{\pi}=\rho,\quad\mathcal{P}_{t+1}^{\pi}=\mathbb{E}_{s\sim\mathcal{P}_{t}^{\pi},a\sim\pi(\cdot|s)}[\mathcal{P}(\cdot|s,a)], \tag{5}\] and define the _state visitation distribution_ and the _state-action visitation distribution_ respectively: \[\nu_{\rho}^{\pi}=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathcal{P}_{t}^{\pi}, \tag{6}\] \[\overline{\nu}_{\rho}^{\pi}(s,a)=\nu_{\rho}^{\pi}(s)\times\pi(a|s),\ \forall s\in\mathcal{S},a\in\mathcal{A}. \tag{7}\] The prefactor \(1-\gamma\) in (6) makes \(\nu_{\rho}^{\pi}\) a probability distribution. The visitation distributions reflect the frequency of visiting state \(s\) or state-action pair \((s,a)\) along the trajectories starting from \(s_{0}\sim\rho\) and taking actions according to policy \(\pi\). It follows immediately from the definition that the state visitation distribution is lower bounded by the initial distribution (in terms of the Radon-Nikodym derivative) with factor \(1-\gamma\): \[\frac{\mathrm{d}\nu_{\rho}^{\pi}}{\mathrm{d}\rho}=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\frac{\mathrm{d}\mathcal{P}_{t}^{\pi}}{\mathrm{d}\rho}\geq(1-\gamma)\frac{\mathrm{d}\mathcal{P}_{0}^{\pi}}{\mathrm{d}\rho}=1-\gamma. \tag{8}\] We can rewrite the value functions using the visitation distributions: \[V^{\pi}(\rho)=\sum_{t=0}^{\infty}\gamma^{t}\int_{\mathcal{S}}\sum_{a\in\mathcal{A}}c(s,a)\pi(a|s)\,\mathrm{d}\mathcal{P}_{t}^{\pi}(s)=\frac{1}{1-\gamma}\mathbb{E}_{(s,a)\sim\overline{\nu}_{\rho}^{\pi}}[c(s,a)], \tag{9}\] \[Q^{\pi}(s,a)=c(s,a)+\gamma V^{\pi}(\mathcal{P}(\cdot|s,a))=c(s,a)+\frac{\gamma}{1-\gamma}\mathbb{E}_{(s^{\prime},a^{\prime})\sim\overline{\nu}_{\mathcal{P}(\cdot|s,a)}^{\pi}}[c(s^{\prime},a^{\prime})], \tag{10}\] where (10) follows from (9) and (4).

### Riemannian Manifold

We consider the state space \(\mathcal{S}\) to be a \(d\)-dimensional Riemannian manifold isometrically embedded in \(\mathbb{R}^{D}\). A _chart_ for \(\mathcal{S}\) is a pair \((U,\phi)\) such that \(U\subset\mathcal{S}\) is open and \(\phi\colon U\to\mathbb{R}^{d}\) is a homeomorphism, i.e., \(\phi\) is a continuous bijection with a continuous inverse. Two charts \((U,\phi)\) and \((V,\psi)\) are called \(C^{k}\) _compatible_ if and only if \[\phi\circ\psi^{-1}\colon\psi(U\cap V)\to\phi(U\cap V)\quad\text{and}\quad\psi\circ\phi^{-1}\colon\phi(U\cap V)\to\psi(U\cap V)\] are both \(C^{k}\) functions (\(k\) times continuously differentiable). A \(C^{k}\) _atlas_ of \(\mathcal{S}\) is a collection of \(C^{k}\) compatible charts \(\{(U_{i},\phi_{i})\}_{i\in I}\) such that \(\bigcup_{i\in I}U_{i}=\mathcal{S}\). An atlas of \(\mathcal{S}\) contains an open cover of \(\mathcal{S}\) and a mapping from each open set of the cover to \(\mathbb{R}^{d}\). **Definition 1** (Smooth manifold).: _A manifold \(\mathcal{S}\) is smooth if it has a \(C^{\infty}\) atlas._ We introduce the _reach_ (Federer, 1959; Niyogi et al., 2008) of a manifold to characterize the curvature of \(\mathcal{S}\).
**Definition 2** (Reach).: _The medial axis of \(\mathcal{S}\) is defined as \(\overline{\mathcal{T}}(\mathcal{S})\), which is the closure of_ \[\mathcal{T}(\mathcal{S})=\{x\in\mathbb{R}^{D}\mid\exists x_{1}\neq x_{2}\in \mathcal{S},\left\|x-x_{1}\right\|_{2}=\left\|x-x_{2}\right\|_{2}=\inf_{y\in \mathcal{S}}\left\|x-y\right\|_{2}\}.\] _The reach \(\omega\) of \(\mathcal{S}\) is the minimum distance between \(\mathcal{S}\) and \(\overline{\mathcal{T}}(\mathcal{S})\), that is,_ \[\omega=\inf_{x\in\overline{\mathcal{T}}(\mathcal{S}),y\in\mathcal{S}}\left\|x -y\right\|_{2}.\] Roughly speaking, reach measures how fast a manifold "bends". A manifold with a large reach "bends" relatively slowly. On the contrary, a small \(\omega\) signifies more complicated local geometric structures, which are possibly hard to fully capture. ### Convolutional Neural Networks We consider one-sided stride-one convolutional neural networks (CNNs) with the rectified linear unit (ReLU) activation function \(\operatorname{ReLU}(z)=\max(z,0)\). Specifically, a CNN we consider consists of a padding layer, several convolutional blocks, and finally a fully connected output layer. Given an input vector \(x\in\mathbb{R}^{D}\), the network first applies a padding operator \(P:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D\times C}\) for some integer \(C\geq 1\) such that \[Z=P(x)=\begin{bmatrix}x&0&\cdots&0\end{bmatrix}\in\mathbb{R}^{D\times C}.\] Then the matrix \(Z\) is passed through \(M\) convolutional blocks. We will denote the input matrix to the \(m\)-th block as \(Z_{m}\) and its output as \(Z_{m+1}\) (so that \(Z_{1}=Z\)). We now define convolution as illustrated in Figure 1. Let \(\mathcal{W}=(\mathcal{W}_{j,i,l})_{j,i,l}\in\mathbb{R}^{C^{\prime}\times I \times C}\) be a filter where \(C^{\prime}\) is the output channel size, \(I\) is the filter size and \(C\) is the input channel size. For \(Z\in\mathbb{R}^{D\times C}\), the convolution of \(Z\) with \(\mathcal{W}\), denoted with \(\mathcal{W}*Z\), results in \(Y\in\mathbb{R}^{D\times C^{\prime}}\) with \[Y_{k,j}=\sum_{i=1}^{I}\sum_{l=1}^{C}\mathcal{W}_{j,i,l}Z_{k+i-1,l},\] where we set \(Z_{k+i-1,l}=0\) for \(k+i-1>D\). In the \(m\)-th convolutional block, let \(\mathcal{W}_{m}=\{\mathcal{W}_{m}^{(1)},\ldots,\mathcal{W}_{m}^{(L_{m})}\}\) be a collection of filters and \(\mathcal{B}_{m}=\{\mathcal{B}_{m}^{(1)},\ldots,\mathcal{B}_{m}^{(L_{m})}\}\) be a collection of biases of proper sizes. The \(m\)-th block maps its input matrix \(Z_{m}\in\mathbb{R}^{D\times C}\) to \(Z_{m+1}\in\mathbb{R}^{D\times C}\) by \[Z_{m+1}=\text{ReLU}\left(\mathcal{W}_{m}^{(L_{m})}*\cdots*\text{ReLU}\left( \mathcal{W}_{m}^{(1)}*Z_{m}+\mathcal{B}_{m}^{(1)}\right)\cdots+\mathcal{B}_{m }^{(L_{m})}\right) \tag{11}\] with ReLU applied entrywise. For notational simplicity, we denote this series of operations in the \(m\)-th block with a single operator from \(\mathbb{R}^{D\times C}\) to \(\mathbb{R}^{D\times C}\) with \(\text{Conv}_{\mathcal{W}_{m},\mathcal{B}_{m}}\), so (11) can be abbreviated as \[Z_{m+1}=\text{Conv}_{\mathcal{W}_{m},\mathcal{B}_{m}}(Z_{m}).\] Overall, we denote the mapping from input \(x\) to the output of the \(M\)-th convolutional block as \[G(x)= (\text{Conv}_{\mathcal{W}_{M},\mathcal{B}_{M}})\circ\cdots\circ( \text{Conv}_{\mathcal{W}_{1},\mathcal{B}_{1}})\circ P(x). 
\tag{12}\] Given (12), a CNN applies an additional fully connected layer to \(G\) and outputs \[f(x)=W\otimes G(x)+b,\] where \(W\in\mathbb{R}^{D\times C}\) and \(b\in\mathbb{R}\) are a weight matrix and a bias, respectively, and \(\otimes\) denotes the sum of entrywise products, that is, \(W\otimes G(x)=\sum_{i,j}W_{i,j}[G(x)]_{i,j}\). Thus, we define a class of CNNs of the same architecture as \[\mathcal{F}(M,L,J,I,R_{1},R_{2})=\big{\{}f\mid f(x)=W\otimes G(x)+b,\ \|W\|_{\infty}\vee|b|\leq R_{2},\text{ where }G(x)\text{ is as in (12) with }M\text{ blocks};\ \mathcal{W}_{m}^{(l)}\in\mathbb{R}^{C_{m}^{(l+1)}\times I_{m}^{(l)}\times C_{m}^{(l)}},\ \mathcal{B}_{m}^{(l)}\in\mathbb{R}^{D\times C_{m}^{(l+1)}},\text{ where }C_{m}^{(l)}\leq J,\ I_{m}^{(l)}\leq I,\ \forall l\in[L],m\in[M];\ \max_{m,l}\|\mathcal{W}_{m}^{(l)}\|_{\infty}\vee\|\mathcal{B}_{m}^{(l)}\|_{\infty}\leq R_{1}\big{\}}. \tag{13}\]

## 3 Neural Policy Mirror Descent

In this section, we present our neural policy mirror descent (NPMD) algorithm. It is an extension of the policy mirror descent (PMD) method with a _critic network_ \(Q_{w}\) parameterized by \(w\) to approximate the state-action value function, and an _actor network_ \(f_{\theta}\) parameterized by \(\theta\) to determine the policy. Both networks belong to a neural network function class \(\mathcal{F}\), which we will specify later in Section 4. The NPMD algorithm starts from a uniform policy \(\pi_{0}\). At the \(k\)-th iteration (indexed from \(0\)), the policy \(\pi_{k}\) is determined by the actor network \(f_{\theta_{k}}\) along with a hyperparameter \(\lambda_{k}\). The NPMD algorithm first performs a critic update, training the critic network \(Q_{w_{k}}\) to fit the state-action value function of the current policy. Then, the NPMD algorithm performs an actor update, indirectly obtaining an improved policy \(\pi_{k+1}\) by updating the actor network to \(f_{\theta_{k+1}}\).

### Critic Update

For the critic update at the \(k\)-th iteration, the goal is to approximate the exact state-action value function \(Q^{\pi_{k}}\) with the critic network \(Q_{w_{k}}\). The component of \(Q_{w_{k}}\) corresponding to each action \(a\in\mathcal{A}\) is a neural network \(Q_{w_{k}}(\cdot,a)\in\mathcal{F}\) parameterized by \(w_{k,a}\), which takes \(s\in\mathcal{S}\) as input and outputs a scalar. For simplicity, we let all \(|\mathcal{A}|\) networks share the same architecture and denote \(w_{k}\coloneqq(w_{k,a})_{a\in\mathcal{A}}\in\mathcal{W}_{k}\). We define the critic loss as \[\mathcal{L}_{\text{critic}}(w_{k};\pi_{k})=\mathbb{E}_{s\sim\nu_{\rho}^{\pi_{k}}}\|Q_{w_{k}}(s,\cdot)-Q^{\pi_{k}}(s,\cdot)\|_{2}^{2}, \tag{14}\] where \(Q_{w_{k}}(s,\cdot)\) and \(Q^{\pi_{k}}(s,\cdot)\) are \(|\mathcal{A}|\)-dimensional vectors and \(\nu_{\rho}^{\pi_{k}}\) is the state visitation distribution defined in (6). Directly minimizing the critic loss (14) is difficult since \(Q^{\pi_{k}}\) is unknown in advance. Instead, we sample \(N\) points \(\{s_{i}\}_{i=1}^{N}\) independently from the distribution \(\nu_{\rho}^{\pi_{k}}\) and use the empirical risk on these samples to approximate (14). For notation simplicity, we omit the iteration index \(k\) of the samples.
The empirical risk \(\widehat{\mathcal{L}}_{\text{critic}}\) is defined as \[\widehat{\mathcal{L}}_{\text{critic}}(w_{k};\Xi_{k})=\frac{1}{N}\sum_{i=1}^{N} \sum_{a\in\mathcal{A}}\bigg{|}Q_{w_{k}}(s_{i},a)-c(s_{i},a)-\frac{\gamma}{1- \gamma}c(s^{\prime}_{i,a},a^{\prime}_{i,a})\bigg{|}^{2}, \tag{15}\] where \(N\) is sufficiently large, each pair \((s^{\prime}_{i,a},a^{\prime}_{i,a})\) is sampled from distribution \(\overline{\nu}_{\mathcal{P}(\cdot|s_{i},a)}^{\pi_{k}}\), and \(\Xi_{k}\) denotes the collection of samples. We let \(w_{k}\) be the solution to the empirical risk minimization (ERM) problem, namely \[w_{k}=\operatorname*{argmin}_{w\in\mathcal{W}_{k}}\widehat{\mathcal{L}}_{ \text{critic}}(w;\Xi_{k}). \tag{16}\] ### Actor Update For the actor update, the goal is to learn an improved policy. If no function approximation is considered, an ideal PMD update is given by Lan (2023): \[\pi_{k+1}^{\star}(s)=\operatorname*{argmin}_{\pi(\cdot|s)\in \Delta_{\mathcal{A}}}\langle Q_{w_{k}}(s,\cdot)+\tau_{k}\nabla h^{\pi_{k}}(s, \cdot),\pi(\cdot|s)\rangle+\frac{1}{\eta_{k}}D_{\pi_{k}}^{\pi}(s),\ \forall s\in \mathcal{S}, \tag{17}\] where \(h^{\pi_{k}}(s)\) is the entropy regularizer, the gradient is taken with respect to \(\pi_{k}(\cdot|s)\), \(D_{\pi_{k}}^{\pi}(s)\) is the Kullback-Leibler (KL) divergence between \(\pi\) and \(\pi_{k}\), \(\tau_{k}\) is a decreasing regularization parameter, and \(\eta_{k}\) is the step size. With neural function approximation, we train a neural policy \(\pi_{k+1}\) to approximate the ideal policy \(\pi_{k+1}^{\star}\). For any \(k\geq 0\), the neural policy \(\pi_{k}\) takes the form \[\pi_{k}(a|s)=\frac{\exp\bigl{(}\lambda_{k}^{-1}f_{\theta_{k}}(s,a)\bigr{)}}{ \sum_{a^{\prime}\in\mathcal{A}}\exp\bigl{(}\lambda_{k}^{-1}f_{\theta_{k}}(s,a ^{\prime})\bigr{)}}, \tag{18}\] where \(\theta_{k}\coloneqq(\theta_{k,a})_{a\in\mathcal{A}}\) is the collection of neural network parameters and \(\lambda_{k}>0\) is a temperature parameter (will be discussed later). For any \(a\in\mathcal{A}\), \(f_{\theta_{k}}(\cdot,a)\in\mathcal{F}\) is a neural network parameterized by \(\theta_{k,a}\), which takes \(s\in\mathcal{S}\) as input and outputs a scalar. Again, we let all \(|\mathcal{A}|\) neural networks share the same parameter space and denote \(\theta_{k}\coloneqq(\theta_{k,a})_{a\in\mathcal{A}}\in\Theta_{k}\). With definition (18), the ideal PMD update (17) admits a closed-form solution. **Lemma 1**.: _The exact solution of (17) with neural policy \(\pi_{k}\) defined as (18) is given by_ \[\pi_{k+1}^{\star}(a|s)=\frac{\exp\bigl{(}g_{k+1}^{\star}(s,a)\bigr{)}}{\sum_{a ^{\prime}\in\mathcal{A}}\exp\bigl{(}g_{k+1}^{\star}(s,a^{\prime})\bigr{)}}, \tag{19}\] _where \(g_{k+1}^{\star}=(1-\eta_{k}\tau_{k})\lambda_{k}^{-1}f_{\theta_{k}}-\eta_{k}Q_ {w_{k}}\)._ The proof of Lemma 1 is given in Appendix B.1. In view of Lemma 1, approximating \(\pi_{k+1}^{\star}\) with \(\pi_{k+1}\) is equivalent to approximating \(g_{k+1}^{*}\) with the scaled actor network \(\lambda_{k+1}^{-1}f_{\theta_{k+1}}\). We define the actor loss to be minimized as \[\mathcal{L}_{\text{actor}}(\theta_{k+1};\theta_{k},w_{k})\] \[= \mathbb{E}_{s\sim\nu_{\rho}^{\pi_{k}}}\big{\|}\lambda_{k+1}^{-1}f _{\theta_{k+1}}(s,\cdot)-(1-\eta_{k}\tau_{k})\lambda_{k}^{-1}f_{\theta_{k}}(s, \cdot)+\eta_{k}Q_{w_{k}}(s,\cdot)\big{\|}_{2}^{2}, \tag{20}\] where \(\lambda_{k}\) is the current temperature, \(\lambda_{k+1}\) is the next temperature, \(\tau_{k}\) is the regularization factor, and \(\eta_{k}\) is the step size. 
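To make the two regression subproblems concrete, the following minimal Python sketch assembles one NPMD iteration out of the quantities above. It is an illustration rather than Algorithm 1 itself: the helper `nn_fit` is a hypothetical stand-in (1-nearest-neighbor regression) for the CNN empirical risk minimization in (16) and in the actor ERM given below, and `q_targets` abbreviates the single-sample estimates of \(Q^{\pi_{k}}\) appearing in (15).

```python
import numpy as np

def nn_fit(xs, ys):
    """Hypothetical stand-in for the ERM fits: 1-nearest-neighbor regression
    in place of training a CNN from the class F; any regressor can be swapped in."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    def predict(q):
        q = np.atleast_2d(q)
        idx = np.argmin(((xs[None, :, :] - q[:, None, :]) ** 2).sum(-1), axis=1)
        return ys[idx]
    return predict

def npmd_iteration(f_actor, lam_k, lam_k1, eta_k, tau_k, states, q_targets):
    """One NPMD iteration: `states` are draws from nu_rho^{pi_k}, and
    `q_targets` are single-sample estimates of Q^{pi_k}(s_i, .) as in (15)."""
    q_critic = nn_fit(states, q_targets)  # critic update (16)
    # Scaled actor target from Lemma 1:
    # lam_{k+1} g*_{k+1} = (1 - eta_k*tau_k)(lam_{k+1}/lam_k) f_{theta_k} - eta_k*lam_{k+1} Q_{w_k}
    target = ((1.0 - eta_k * tau_k) * (lam_k1 / lam_k) * f_actor(states)
              - eta_k * lam_k1 * q_critic(states))
    f_actor_next = nn_fit(states, target)  # actor update, cf. the ERM below
    return f_actor_next, q_critic

def policy_probs(f_actor, lam, s):
    """Temperature-controlled softmax policy (18) at a single state s."""
    logits = f_actor(s)[0] / lam
    logits = logits - logits.max()  # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

Starting from \(\theta_{0}=0\), the initial actor corresponds to `f_actor` returning all zeros, which makes \(\pi_{0}\) the uniform policy.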
For notation simplicity, we omit the hyperparameters \(\tau_{k}\), \(\eta_{k}\), \(\lambda_{k+1}\) and \(\lambda_{k}\) in \(\mathcal{L}_{\text{actor}}\). Similar to the critic update, instead of minimizing (20) directly, we minimize the empirical risk defined as \[\widehat{\mathcal{L}}_{\text{actor}}(\theta_{k+1};\theta_{k},w_{k},\Xi_{k})=\frac{1}{N}\sum_{i=1}^{N}\sum_{a\in\mathcal{A}}\big{|}\lambda_{k+1}^{-1}f_{\theta_{k+1}}(s_{i},a)-(1-\eta_{k}\tau_{k})\lambda_{k}^{-1}f_{\theta_{k}}(s_{i},a)+\eta_{k}Q_{w_{k}}(s_{i},a)\big{|}^{2}, \tag{21}\] where \(\Xi_{k}\) contains the same sampled states \(\{s_{i}\}_{i=1}^{N}\) from \(\nu_{\rho}^{\pi_{k}}\) as used in the critic update. The improved actor parameter \(\theta_{k+1}\) is given by the solution to the ERM problem: \[\theta_{k+1}=\operatorname*{argmin}_{\theta\in\Theta_{k+1}}\widehat{\mathcal{L}}_{\text{actor}}(\theta;\theta_{k},w_{k},\Xi_{k}). \tag{22}\] When the sample size \(N\) is sufficiently large, we have \(\lambda_{k+1}^{-1}f_{\theta_{k+1}}\approx g_{k+1}^{\star}\) and hence \(\pi_{k+1}\approx\pi_{k+1}^{\star}\). **Remark 1**.: _The temperature parameter \(\lambda_{k}\) is introduced mainly for technical reasons. For any infinite-horizon discounted MDP, there always exists a deterministic optimal policy \(\pi^{\star}\) (Puterman, 1994), while the neural policy \(\pi_{k}\) adopted to approximate \(\pi^{\star}\) is fully stochastic in the sense that \(\pi_{k}(a|s)>0\) for any \((s,a)\in\mathcal{S}\times\mathcal{A}\). Using the temperature parameter \(\lambda_{k}\) allows us to control the spikiness of \(\pi_{k}\). As \(\lambda_{k}\) approaches zero, \(\pi_{k}\) concentrates on the action with the maximal value of \(f_{\theta_{k}}(s,\cdot)\), which brings the stochastic policy \(\pi_{k}\) closer to the deterministic policy \(\pi^{\star}\)._ **Remark 2**.: _Our algorithm uses the neural policy \(\pi_{k+1}\) to approximate the ideal policy \(\pi_{k+1}^{\star}\) at each iteration. This allows us to keep the most up-to-date policy with only one actor network. If no actor network were used to approximate \(\pi_{k+1}^{\star}\), we could still obtain an implicitly defined policy by iterating (17). However, this would require storing all \(k+1\) critic networks to compute \(\pi_{k+1}^{\star}\), which is not scalable._ We summarize NPMD in Algorithm 1. Note that Algorithm 1 requires samples from the visitation distributions. We provide a sampling algorithm in Appendix A.

## 4 Main Results

In this section, we present our main results on the sample complexity of Algorithm 1. As mentioned in Section 1, we focus on RL environments with low-dimensional structures, for which we make the following smooth manifold assumption on the state space. **Assumption 1** (State space manifold).: _The state space \(\mathcal{S}\) is a \(d\)-dimensional compact Riemannian manifold isometrically embedded in \(\mathbb{R}^{D}\) where \(d\ll D\). There exists \(B>0\) such that \(\left\lVert x\right\rVert_{\infty}\leq B\) for any \(x\in\mathcal{S}\). The surface area of \(\mathcal{S}\) is \(\mathrm{Area}(\mathcal{S})<\infty\), and the reach of \(\mathcal{S}\) is \(\omega>0\)._ We first derive the iteration complexity with well-approximated value functions and neural policies, then derive the number of samples needed to meet the requirement for approximation. Combining the results together, we establish the overall sample complexity for Algorithm 1.

### Iteration Complexity

We make the following assumptions on the initial and visitation distributions for the iteration complexity.
**Assumption 2** (Full support).: _The initial distribution \(\rho\) has full support on \(\mathcal{S}\), that is, for any measurable subset \(S\subseteq\mathcal{S}\), \(\rho(S)>0\)._ Assumption 2 requires that every state can be visited during sampling. In view of (8), as long as \(\rho\) has full support, the visitation distribution also has full support, even if the policy itself is deterministic. We measure the distribution mismatch between the optimal visitation distribution \(\nu_{\rho}^{\pi^{\star}}\) and the initial distribution \(\rho\) by a mismatch coefficient denoted as \(\kappa\): \[\kappa\coloneqq\left\lVert\frac{\mathrm{d}\nu_{\rho}^{\pi^{\star}}}{\mathrm{d}\rho}\right\rVert_{\infty}. \tag{23}\] Under Assumption 2, the Radon-Nikodym derivative \(\frac{\mathrm{d}\nu_{\rho}^{\pi^{\star}}}{\mathrm{d}\rho}\) is well-defined. We assume \(\kappa<\infty\). Accordingly, we define the shifted discount factor as \[\gamma_{\rho}\coloneqq 1-(1-\gamma)/\kappa. \tag{24}\] **Assumption 3** (Concentrability).: _There exists \(C_{\nu}<\infty\) such that for all \(k\geq 0\) iterations of Algorithm 1,_ \[\chi^{2}(\nu_{\rho}^{\pi},\nu_{\rho}^{\pi_{k}})+1\leq C_{\nu},\] _where \(\pi\) takes \(\pi_{k+1}\) or \(\pi^{\star}\), and \(\chi^{2}(\nu_{\rho}^{\pi},\nu_{\rho}^{\pi_{k}})=\mathbb{E}_{\nu_{\rho}^{\pi_{k}}}[(\frac{\mathrm{d}\nu_{\rho}^{\pi}}{\mathrm{d}\nu_{\rho}^{\pi_{k}}}-1)^{2}]\) is the \(\chi^{2}\)-divergence._ Assumption 3 requires the concentrability of the visitation distributions. The distance between visitation distributions is measured by the \(\chi^{2}\)-divergence, which is well-defined under Assumption 2 since absolute continuity holds for fully supported distributions. This type of concentrability assumption is commonly adopted in the RL literature (Agarwal et al., 2021; Yuan et al., 2023) and is tighter than the absolute density ratio \(\left\|\frac{\mathrm{d}\nu_{\rho}^{\pi}}{\mathrm{d}\nu_{\rho}^{\pi_{k}}}\right\|_{\infty}\). Assumptions 2 and 3 together form the _optimism_ in RL, that is, the initial distribution is not too far away from the optimal visitation distribution in terms of \(\chi^{2}\)-divergence, and the policy \(\pi_{k}\) at each iteration is sufficiently exploratory to find the optimal policy. With Assumptions 2 and 3, we have the following one-step improvement lemma for Algorithm 1. The proof is provided in Appendix C.2. **Lemma 2**.: _Suppose Assumptions 2 and 3 hold. If \(1-\eta_{k}\tau_{k}\geq 0\), then Algorithm 1 yields_ \[\big{(}V^{\pi_{k+1}}(\rho)-V^{\star}(\rho)\big{)}+\tfrac{1}{\kappa\eta_{k}}\mathbb{E}_{s\sim\nu_{\rho}^{\pi^{\star}}}\big{[}D_{\pi_{k+1}}^{\pi^{\star}}(s)\big{]}\\ \leq\gamma_{\rho}\big{(}V^{\pi_{k}}(\rho)-V^{\star}(\rho)\big{)}+\tfrac{1-\eta_{k}\tau_{k}}{\kappa\eta_{k}}\mathbb{E}_{s\sim\nu_{\rho}^{\pi^{\star}}}\big{[}D_{\pi_{k}}^{\pi^{\star}}(s)\big{]}+\tfrac{2\tau_{k}\log|\mathcal{A}|}{\kappa(1-\gamma_{\rho})}\\ +\tfrac{4\sqrt{C_{\nu}}}{\kappa(1-\gamma_{\rho})}\Big{(}\sqrt{\mathcal{L}_{\mathrm{critic}}(w_{k};\pi_{k})}+\tfrac{1}{\eta_{k}}\sqrt{\mathcal{L}_{\mathrm{actor}}(\theta_{k+1};\theta_{k},w_{k})}\Big{)}.\] Lemma 2 demonstrates that the optimality gap in the value function decreases at rate \(\gamma_{\rho}\) up to approximation errors introduced by the critic and actor updates, plus an additional regularization term. When these errors are properly controlled, we can establish the iteration complexity of Algorithm 1. **Theorem 1**.: _Suppose Assumptions 2 and 3 hold.
If \(\eta_{k}=\frac{1-\gamma_{\rho}}{\tau_{k}}\) and \(\tau_{k}=C\gamma_{\rho}^{k+1}\) for all \(k\geq 0\), and the critic loss and the actor loss satisfy, respectively,_ \[\mathbb{E}\big{[}\mathcal{L}_{\mathrm{critic}}(w_{k};\pi_{k})\big{]}\leq C^{2}\gamma_{\rho}^{2(k+1)},\quad\mathbb{E}\big{[}\mathcal{L}_{\mathrm{actor}}(\theta_{k+1};\theta_{k},w_{k})\big{]}\leq(1-\gamma_{\rho})^{2},\] _then after \(k\geq 1\) iterations, the expected optimality gap of \(\pi_{k}\) given by Algorithm 1 is_ \[\mathbb{E}\big{[}V^{\pi_{k}}(\rho)-V^{\star}(\rho)\big{]}\leq\gamma_{\rho}^{k}\big{(}C_{1}+C_{2}(k+1)\big{)}\cdot\frac{C}{1-\gamma},\] _where \(C_{1}=1+\log|\mathcal{A}|\), \(C_{2}=8\sqrt{C_{\nu}}+2\log|\mathcal{A}|\)._ _Moreover, for any \(\epsilon>0\), the number of iterations required for \(\mathbb{E}\big{[}V^{\pi_{k}}(\rho)-V^{\star}(\rho)\big{]}\leq\epsilon\) is_ \[\widetilde{O}\left(\log_{\frac{1}{\gamma_{\rho}}}\left(\frac{C\left(\sqrt{C_{\nu}}+\log|\mathcal{A}|\right)}{\kappa(1-\gamma_{\rho})^{2}\epsilon}\right)\right).\] The proof of Theorem 1 is provided in Appendix C.3. The regularization term is controlled by exponentially decreasing the regularization parameter \(\tau_{k}\) and using an exponentially increasing step size \(\eta_{k}\) accordingly. The critic loss should be exponentially decreasing, while the actor loss should be small for any consecutive temperatures \(\lambda_{k}\) and \(\lambda_{k+1}\). We show in the following subsections that these requirements can be met by properly designing the CNN architecture and using sufficiently many training samples. As long as the conditions are satisfied, Theorem 1 guarantees that Algorithm 1 finds an \(\epsilon\)-optimal policy (in expectation) after \(\widetilde{O}(\log\frac{1}{\epsilon})\) iterations.

### Function Approximation on Lipschitz MDP with CNN

The iteration complexity in Theorem 1 is valid if the state-action value function \(Q^{\pi_{k}}\) and the policy \(\pi_{k+1}^{\star}\) are well approximated by the critic and actor networks at each iteration. However, we have not yet specified the CNN architecture that can meet these requirements. In this section, we study function approximation on a _Lipschitz MDP_, which possesses a Lipschitz transition kernel \(\mathcal{P}\) and a Lipschitz cost function \(c\). Here, the Lipschitzness is defined with respect to the _geodesic distance_ on the state space \(\mathcal{S}\), which is a \(d\)-dimensional Riemannian manifold (Assumption 1). Recall the definition of geodesic distance. **Definition 3** (Geodesic distance).: _The geodesic distance between two points \(x,y\in\mathcal{S}\) is defined as_ \[d_{\mathcal{S}}(x,y)\coloneqq\inf_{\Gamma:\;[0,1]\to\mathcal{S}}\int_{0}^{1}\left\|\Gamma^{\prime}(t)\right\|_{2}\mathrm{d}t\quad\text{s.t.}\quad\Gamma(0)=x,\ \Gamma(1)=y,\ \Gamma\text{ is piecewise }C^{1}. \tag{25}\] One can show the existence of a solution to the minimization problem (25) under mild conditions and that \(d_{\mathcal{S}}(\cdot,\cdot)\) is indeed a distance. More references can be found in Do Carmo and Flaherty Francis (1992). With the geodesic distance, we define Lipschitz functions on the Riemannian manifold \(\mathcal{S}\). **Definition 4** (Lipschitz function).: _Let \(L\geq 0\) and \(\alpha\in(0,1]\) be constants. A function \(f\colon\mathcal{S}\to\mathbb{R}\) is called \((L,\alpha)\)-Lipschitz if for any \(x,y\in\mathcal{S}\),_ \[|f(x)-f(y)|\leq L\cdot d_{\mathcal{S}}^{\alpha}(x,y).\] For any fixed \(\alpha\), the Lipschitz constant \(L\) in Definition 4 measures the smoothness of the function.
A function is considered smooth if it possesses a small Lipschitz constant, whereas a non-smooth function will exhibit a large Lipschitz constant. Throughout the remainder of this paper, when we mention Lipschitzness, we specifically mean the property of being Lipschitz continuous with a moderate constant. **Remark 3**.: _The geodesic distance (Definition 3) in Definition 4 is a global distance rather than a local one. This makes our definition of Lipschitz functions different from those based on local Euclidean distance and a partition of unity as in Chen et al. (2019) and Liu et al. (2021). The two ways of defining Lipschitzness have some technical differences, but they agree with each other in our setting up to constant factors._ _When the atlas \(\{(U_{i},\phi_{i})\}_{i\in I}\) consists of local projections onto tangent spaces as in Chen et al. (2019), the local Euclidean distance between two points in the same open set \(U_{i}\) is no greater than their Euclidean distance in \(\mathbb{R}^{D}\), which is in turn no greater than their geodesic distance; hence the Lipschitzness defined with local distances implies Definition 4._ _On the other hand, when the curvature of the manifold \(\mathcal{S}\) is not too large compared to the radius of the open set \(U_{i}\), Definition 4 also implies Lipschitzness in the Euclidean sense on each local coordinate \(\phi_{i}(U_{i})\subset[0,1]^{d}\) (Lemma 12). Here, we adopt the global definition for simplicity._ We now formally define the Lipschitz MDP condition, which ensures the Lipschitzness of the state-action value function \(Q^{\pi}\) for any policy \(\pi\). **Assumption 4** (Lipschitz MDP).: _There exist constants \(L_{\mathcal{P}},L_{c}\geq 0\) and \(\alpha\in(0,1]\) such that for any tuple \((s,s^{\prime},a)\in\mathcal{S}\times\mathcal{S}\times\mathcal{A}\), the cost function \(c(\cdot,a)\colon\mathcal{S}\to\mathbb{R}\) is \((L_{c},\alpha)\)-Lipschitz and the transition kernel \(\mathcal{P}\) satisfies_ \[d_{\mathrm{TV}}(\mathcal{P}(\cdot|s,a),\mathcal{P}(\cdot|s^{\prime},a))\leq L_{\mathcal{P}}\cdot d_{\mathcal{S}}^{\alpha}(s,s^{\prime}),\] _where \(d_{\mathrm{TV}}(\cdot,\cdot)\) is the total variation distance._ Assumption 4 requires that when two states are close to each other, taking the same action admits similar transition distributions and corresponding costs. This assumption holds for many spatially smooth environments, especially those driven by physical simulations such as MuJoCo (Todorov et al., 2012) and classic control environments (Brockman et al., 2016). Under Assumption 4, we show in Lemma 3 that the state-action value function \(Q^{\pi}\) is Lipschitz regardless of the evaluated policy \(\pi\). The proof of Lemma 3 is provided in Appendix E.3. **Lemma 3**.: _If Assumption 4 holds, then for any policy \(\pi\) and any action \(a\in\mathcal{A}\), the state-action value function \(Q^{\pi}(\cdot,a)\colon\mathcal{S}\to\mathbb{R}\) is \((L_{Q},\alpha)\)-Lipschitz with \(L_{Q}=L_{c}+\frac{\gamma C}{1-\gamma}L_{\mathcal{P}}\) being the Lipschitz constant. That is, for any policy \(\pi\) and any tuple \((s,s^{\prime},a)\in\mathcal{S}\times\mathcal{S}\times\mathcal{A}\), we have_ \[|Q^{\pi}(s,a)-Q^{\pi}(s^{\prime},a)|\leq L_{Q}\cdot d_{\mathcal{S}}^{\alpha}(s,s^{\prime}).\] The Lipschitz constant \(L_{Q}=L_{c}+\frac{\gamma C}{1-\gamma}L_{\mathcal{P}}\) scales linearly with the magnitude of the cost function \(c\), as \(L_{c}\) does.
In view of this property, we define a normalized Lipschitz constant which is invariant to the scaling of the cost: \[\overline{L}_{Q}\coloneqq(1-\gamma)L_{c}/C+\gamma L_{\mathcal{P}}. \tag{26}\] This normalized Lipschitz constant is a convex combination of \(L_{c}/C\) and \(L_{\mathcal{P}}\), so it will not exceed the larger one for any \(\gamma\). We make a few remarks on the Lipschitz condition. **Remark 4**.: _Assumption 4 is a sufficient condition for the Lipschitzness of \(Q^{\pi_{k}}\), regardless of the smoothness of \(\pi_{k}\). The Lipschitzness of the target function is a minimal requirement for approximation theory and it is almost essential even for simple regression problems. However, even if Assumption 4 does not hold, \(Q^{\pi_{k}}\) being Lipschitz is still possible. An extreme example is when state space \(\mathcal{S}=\mathbb{S}^{1}\) is a circle and the transition is a fixed rotation whichever action is taken. In this case, the transition kernel is not Lipschitz since the total variation distance between transitions is always \(1\). Meanwhile, \(Q^{\pi_{k}}\) is Lipschitz for any \(\pi_{k}\) provided that \(c\) is Lipschitz. Similar arguments can be found in Fan et al. (2020) and Nguyen-Tang et al. (2022) for non-smooth MDP having smooth Bellman operator, but they implicitly involve the smoothness of neural policy \(\pi_{k}\)._ **Remark 5**.: _In practice, many environments adhere to the Lipschitz condition, with only an extremely small portion of states being exceptions. For example, in the Box2D Car Racing environment (Brockman et al., 2016), the cost remains constant for each frame until a tile is reached, for which the agent will be given a huge reward. Even though this type of partially smooth environment does not fulfill the Lipschitz condition on a global scale, it is reasonable to expect the existence of an environment that is globally smooth and satisfies Assumption 4. Such a globally smooth environment could serve as a regularization of the original non-smooth environment, which inevitably introduces bias to the problem. When this bias is negligible compared to the extent of smoothness, we can study the smooth approximation under Assumption 4 without loss of generality._ With Lemma 3 established, we show that a CNN of the form (13) can uniformly approximate \(Q^{\pi_{k}}(\cdot,a)\) for any \(a\in\mathcal{A}\). The approximation error depends on the specified CNN architecture. **Theorem 2** (Critic approximation).: _Suppose Assumptions 1 and 4 hold. For any integers \(I\in[2,D]\) and \(\widetilde{M},\widetilde{J}>0\), we let_ \[M =O(\widetilde{M}),\ L=O(\log{(\widetilde{M}\widetilde{J})}+D+\log D ),\ J=O(D\widetilde{J}),\] \[R_{1} =(8ID)^{-1}\widetilde{M}^{-\frac{1}{L}}=O(1),\ \log R_{2}=O(\log^{2}( \widetilde{M}\widetilde{J})+D\log{(\widetilde{M}\widetilde{J})}),\] _where \(O(\cdot)\) hides a constant depending on \(\log L_{Q}\), \(\log\frac{C}{1-\gamma}\), \(d\), \(\alpha\), \(\omega\), \(B\), and the surface area \(\mathrm{Area}(\mathcal{S})\). Then for any policy \(\pi\) and any action \(a\in\mathcal{A}\), there exists a CNN \(Q_{w}(\cdot,a)\in\mathcal{F}(M,L,J,I,R_{1},R_{2})\) such _that_ \[\left\|Q_{w}(\cdot,a)-Q^{\pi}(\cdot,a)\right\|_{\infty}\leq\frac{C}{1-\gamma}( \overline{L}_{Q}+1)(\widetilde{M}\widetilde{J})^{-\frac{\alpha}{d}}.\] _Here, \(L_{Q}\) and \(\overline{L}_{Q}\) are defined as in Lemma 3 and (26)._ We provide a proof overview for Theorem 2 in Appendix E.1 and the detailed proof in Appendix E.2. 
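As a concrete reference for the network class \(\mathcal{F}(M,L,J,I,R_{1},R_{2})\) in Theorem 2, the sketch below is a direct, unoptimized NumPy transcription of the Section 2.3 definitions: the padding operator \(P\), the one-sided stride-one convolution, the ReLU blocks in (11)-(12), and the fully connected output layer. Enforcing the size and norm constraints of (13) is left to the caller.

```python
import numpy as np

def conv(W, Z):
    """One-sided stride-one convolution from Section 2.3:
    Y[k, j] = sum_{i, l} W[j, i, l] * Z[k+i-1, l], where rows of Z with
    index beyond D are treated as zero."""
    Cp, I, C = W.shape
    D = Z.shape[0]
    Z_pad = np.vstack([Z, np.zeros((I - 1, C))])  # zero rows for k+i-1 > D
    Y = np.zeros((D, Cp))
    for k in range(D):
        Y[k] = np.tensordot(W, Z_pad[k:k + I], axes=([1, 2], [0, 1]))
    return Y

def cnn_forward(x, blocks, W_out, b_out):
    """Evaluate a CNN of the form (13): padding, M ReLU convolutional
    blocks as in (11), and the output layer f(x) = W (x) G(x) + b.
    `blocks` is a list of (filters, biases) pairs, one pair per block."""
    C = blocks[0][0][0].shape[2]  # input channel size of the very first filter
    Z = np.zeros((len(x), C))
    Z[:, 0] = x  # padding operator P(x) = [x 0 ... 0]
    for filters, biases in blocks:
        for W, B in zip(filters, biases):
            Z = np.maximum(conv(W, Z) + B, 0.0)  # ReLU applied entrywise
    return float(np.sum(W_out * Z) + b_out)  # sum of entrywise products, plus bias
```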
Compared to our preliminary work (Theorem 1 in Ji et al. (2022)) that deals with a larger class of Besov functions by using cardinal B-spline approximation as a crucial step, we simplify the proof for Lipschitz functions, where first-order spline approximation is sufficient. To bound the approximation error by \(\epsilon\), we require the number of parameters in \(Q_{w}(\cdot,a)\) to be \(O(MLJ^{2}I)=\widetilde{O}(D^{3}I\epsilon^{-\frac{d}{\alpha}})\). Note that the exponent over \(\epsilon\) is the intrinsic dimension \(d\) rather than the data dimension \(D\), and the hidden terms in \(\widetilde{O}(\cdot)\) also have no exponential dependence on \(D\). This implies that CNN approximation does not suffer from the curse of dimensionality when the data support has a low-dimensional manifold structure. Next, we consider function approximation for the policy \(\pi_{k+1}^{\star}\) in the actor update. As mentioned in Section 3.2, our goal is to learn a deterministic optimal policy, which is equivalent to a discrete mapping from \(\mathcal{S}\) to \(\mathcal{A}\). Such a mapping is not continuous given the discrete nature of \(\mathcal{A}\), so it is difficult to approximate directly. Instead, we iteratively update a temperature-controlled neural policy \(\pi_{k}\) in the form of (18) to approximate the deterministic optimal policy \(\pi^{\star}\). Although \(\pi_{k}\) is a stochastic policy by construction, it can approximate the deterministic policy that chooses the action \(a\in\mathcal{A}\) with the maximal value of \(f_{\theta_{k}}(s,a)\) by using a sufficiently small temperature \(\lambda_{k}>0\). Therefore, as long as the actor network \(f_{\theta_{k}}(s,\cdot)\) is learned to admit a maximizer \(a\in\operatorname{supp}(\pi^{\star}(\cdot|s))\) for any state \(s\in\mathcal{S}\), it can serve as a good approximation of the optimal policy \(\pi^{\star}\). To learn such an actor network, we iteratively train a new actor network \(f_{\theta_{k+1}}\) based on the current critic and actor networks \(Q_{w_{k}}\) and \(f_{\theta_{k}}\). According to Lemma 1, the target function for the next actor network \(f_{\theta_{k+1}}\) is given by \[\lambda_{k+1}g_{k+1}^{\star}=(1-\eta_{k}\tau_{k})\lambda_{k}^{-1}\lambda_{k+1}f_{\theta_{k}}-\eta_{k}\lambda_{k+1}Q_{w_{k}},\] which is a weighted sum of the current critic and actor networks. This approximation target is not Lipschitz, since \(Q_{w_{k}}\) is just an approximation of the Lipschitz function \(Q^{\pi_{k}}\), not a Lipschitz function itself. Consequently, Theorem 2 cannot be directly transferred to actor approximation. To address the issue, we introduce the _approximately Lipschitz_ condition to describe the smoothness inherited from approximating a Lipschitz function. **Definition 5** (Approximate Lipschitzness).: _Let \(L,\epsilon\geq 0\) and \(\alpha\in(0,1]\) be constants. A function \(f\colon\mathcal{S}\to\mathbb{R}\) is called \((L,\alpha,\epsilon)\)-approximately Lipschitz if for any \(x,y\in\mathcal{S}\),_ \[|f(x)-f(y)|\leq L\cdot d_{\mathcal{S}}^{\alpha}(x,y)+2\epsilon.\] _Here, \(L\) is called the Lipschitz constant, and \(\epsilon\) is called the proximity constant. When \(\epsilon=0\), \(f\) is \((L,\alpha)\)-Lipschitz as defined in Definition 4._ Definition 5 relaxes the Lipschitz condition by allowing a proximity constant \(\epsilon\). When \(\epsilon=0\), the condition reduces to Lipschitz continuity, and the Lipschitz constant is exactly the one in Definition 4.
When \(\epsilon\geq\left\lVert f\right\rVert_{\infty}\), the condition is vacuously true for all \(L\geq 0\). When \(\epsilon\) is somewhere between \(0\) and \(\left\lVert f\right\rVert_{\infty}\), a larger \(\epsilon\) allows a potentially smaller Lipschitz constant \(L\). As Lemma 4 shows, approximate Lipschitzness is a property of any uniform approximation of a Lipschitz function. **Lemma 4**.: _If \(\overline{f}_{0}\colon\mathcal{S}\to\mathbb{R}\) is \((L,\alpha)\)-Lipschitz and \(f\colon\mathcal{S}\to\mathbb{R}\) satisfies \(\left\lVert f-\overline{f}_{0}\right\rVert_{\infty}\leq\epsilon\) for some \(\epsilon>0\), then \(f\) is \((L,\alpha,\epsilon)\)-approximately Lipschitz._ Proof.: For any \(x,y\in\mathcal{S}\), \[\left\lvert f(x)-f(y)\right\rvert =\left\lvert\overline{f}_{0}(x)-\overline{f}_{0}(y)+f(x)-\overline{f}_{0}(x)-f(y)+\overline{f}_{0}(y)\right\rvert\] \[\leq\left\lvert\overline{f}_{0}(x)-\overline{f}_{0}(y)\right\rvert+\left\lvert f(x)-\overline{f}_{0}(x)\right\rvert+\left\lvert-f(y)+\overline{f}_{0}(y)\right\rvert\] \[\leq L\cdot d_{\mathcal{S}}^{\alpha}(x,y)+2\epsilon.\] The first inequality comes from the triangle inequality. The second inequality follows from the \((L,\alpha)\)-Lipschitz continuity of \(\overline{f}_{0}\) and from \(\left\lVert f-\overline{f}_{0}\right\rVert_{\infty}\leq\epsilon\). It follows immediately from Theorem 2 and Lemma 4 that there exists an \((L_{Q},\alpha,\epsilon_{Q})\)-approximately Lipschitz \(Q_{w_{k}}\) that can uniformly approximate \(Q^{\pi_{k}}\) up to \(\epsilon_{Q}\) error. Therefore, we can impose (approximately) Lipschitz restrictions on the CNN function class without damaging its approximation power for the state-action value function. To be more precise, for any CNN class \(\mathcal{F}=\mathcal{F}(M,L,J,I,R_{1},R_{2})\), we define its Lipschitz-restricted version \(\mathcal{F}_{\text{Lip}}(A,L_{f},\alpha,\epsilon_{f})\) as \[\mathcal{F}_{\text{Lip}}(A,L_{f},\alpha,\epsilon_{f})=\{f\in\mathcal{F}\mid\left\lVert f\right\rVert_{\infty}\leq A,\ f\text{ is }(L_{f},\alpha,\epsilon_{f})\text{-approximately Lipschitz}\}. \tag{27}\] Moreover, we denote the parameter spaces of the Lipschitz-restricted critic and actor network classes as \(\mathcal{W}_{\text{Lip}}\) and \(\Theta_{\text{Lip}}\) respectively: \[\mathcal{W}_{\text{Lip}}(A,L_{Q},\alpha,\epsilon_{Q})=\big{\{}w\mid Q_{w}(\cdot,a)\in\mathcal{F}_{\text{Lip}}(A,L_{Q},\alpha,\epsilon_{Q}),\ \forall a\in\mathcal{A}\big{\}}, \tag{28}\] \[\Theta_{\text{Lip}}(A,L_{f},\alpha,\epsilon_{f})=\big{\{}\theta\mid f_{\theta}(\cdot,a)\in\mathcal{F}_{\text{Lip}}(A,L_{f},\alpha,\epsilon_{f}),\ \forall a\in\mathcal{A}\big{\}}. \tag{29}\] Then by setting \(\mathcal{W}_{k}=\mathcal{W}_{\text{Lip}}\) and \(\Theta_{k}=\Theta_{\text{Lip}}\) in Algorithm 1, we ensure that the \(k\)-th critic network \(Q_{w_{k}}\) and actor network \(f_{\theta_{k}}\) are both approximately Lipschitz, and such a restriction on \(\mathcal{W}_{k}\) does not affect the approximation power of \(Q_{w_{k}}\) for \(Q^{\pi_{k}}\). In addition, the target function \(\lambda_{k+1}g_{k+1}^{*}\) for the next actor is also approximately Lipschitz, since it is a weighted sum of two approximately Lipschitz functions. By carefully selecting the temperature parameters to match the configuration of \(\eta_{k}\) and \(\tau_{k}\) in Theorem 1, the approximate Lipschitzness of the target functions for the actor updates in all iterations can be uniformly controlled.
**Lemma 5**.: _For \(k\geq 0\), we let \(\eta_{k}=\frac{1-\gamma_{\rho}}{\tau_{k}}\), \(\tau_{k}=C\gamma_{\rho}^{k+1}\), \(\lambda_{k}=\frac{C\gamma_{\rho}^{k}}{1-\gamma_{\rho}}\), \(\mathcal{W}_{k}=\mathcal{W}_{\mathrm{Lip}}(\frac{C}{1-\gamma},L_{Q},\alpha,\epsilon_{Q})\) and \(\Theta_{k}=\Theta_{\mathrm{Lip}}(\frac{C}{(1-\gamma_{\rho}^{2})(1-\gamma)},\frac{L_{Q}}{1-\gamma_{\rho}^{2}},\alpha,\frac{\epsilon_{Q}}{1-\gamma_{\rho}^{2}})\) with some \(\epsilon_{Q}\geq 0\). Then the target actor \(\lambda_{k+1}g_{k+1}^{*}\) defined in Lemma 1 is \((\frac{L_{Q}}{1-\gamma_{\rho}^{2}},\alpha,\frac{\epsilon_{Q}}{1-\gamma_{\rho}^{2}})\)-approximately Lipschitz and is uniformly bounded by \(\frac{C}{(1-\gamma_{\rho}^{2})(1-\gamma)}\)._ Proof.: By Lemma 1 and our choice of \(\eta_{k}\), \(\tau_{k}\) and \(\lambda_{k}\), the target function of the \(k\)-th actor update is \[\lambda_{k+1}g_{k+1}^{*}=(1-\eta_{k}\tau_{k})\lambda_{k+1}\lambda_{k}^{-1}f_{\theta_{k}}-\eta_{k}\lambda_{k+1}Q_{w_{k}}=\gamma_{\rho}^{2}f_{\theta_{k}}-Q_{w_{k}}.\] We initialize \(\theta_{0}=0\) in Algorithm 1, thus \(\theta_{0}\in\Theta_{\mathrm{Lip}}(0,0,\alpha,0)\subseteq\Theta_{0}\), and for every \(k\geq 0\) the ERM solutions satisfy \(w_{k}\in\mathcal{W}_{k}\) and \(\theta_{k}\in\Theta_{k}\) by construction. It is easy to verify that if \(f\colon\mathcal{S}\to\mathbb{R}\) is \((L_{f},\alpha,\epsilon_{f})\)-approximately Lipschitz, \(g\colon\mathcal{S}\to\mathbb{R}\) is \((L_{g},\alpha,\epsilon_{g})\)-approximately Lipschitz, and \(c\in\mathbb{R}\), then \(f+g\) is \((L_{f}+L_{g},\alpha,\epsilon_{f}+\epsilon_{g})\)-approximately Lipschitz, and \(c\cdot f\) is \((|c|L_{f},\alpha,|c|\epsilon_{f})\)-approximately Lipschitz. Combining these facts with our choice of \(\mathcal{W}_{k}\) and \(\Theta_{k}\), we have that \(\lambda_{k+1}g_{k+1}^{*}\) is \((\frac{L_{Q}}{1-\gamma_{\rho}^{2}},\alpha,\frac{\epsilon_{Q}}{1-\gamma_{\rho}^{2}})\)-approximately Lipschitz and is uniformly bounded by \(\frac{C}{(1-\gamma_{\rho}^{2})(1-\gamma)}\). Lemma 5 shows the target actor is approximately Lipschitz in each iteration once the Lipschitz-restricted classes \(\mathcal{W}_{k}\) and \(\Theta_{k}\) are in place. It remains to derive the approximation error for the actor update with the Lipschitz-restricted class \(\Theta_{k+1}=\Theta_{\mathrm{Lip}}\). We first show in Theorem 3 that any bounded and approximately Lipschitz function on \(\mathcal{S}\) can be well approximated by a CNN with enough parameters, and this CNN is also bounded and approximately Lipschitz. **Theorem 3**.: _Suppose Assumption 1 holds and the target function \(f_{0}\colon\mathcal{S}\to\mathbb{R}\) is bounded and \((L_{f},\alpha,\epsilon_{f})\)-approximately Lipschitz. For any integers \(I\in[2,D]\) and \(\widetilde{M},\widetilde{J}>0\), we let_ \[M =O(\widetilde{M}),\ L=O(\log(\widetilde{M}\widetilde{J})+D+\log D),\ J=O(D\widetilde{J}),\] \[R_{1} =(8ID)^{-1}\widetilde{M}^{-\frac{1}{L}}=O(1),\ \log R_{2}=O(\log^{2}(\widetilde{M}\widetilde{J})+D\log{(\widetilde{M}\widetilde{J})}),\] _where \(O(\cdot)\) hides a constant depending on \(\log L_{f}\), \(\log\|f_{0}\|_{\infty}\), \(\alpha\), \(\omega\), \(B\), and the surface area \(\mathrm{Area}(\mathcal{S})\)._
Then there exists a CNN \(f\in\mathcal{F}(M,L,J,I,R_{1},R_{2})\) such that_ \[\left\|f-f_{0}\right\|_{\infty}\leq(L_{f}+\left\|f_{0}\right\|_{\infty})(\widetilde{M}\widetilde{J})^{-\frac{\alpha}{d}}+2\epsilon_{f}.\] _Moreover, \(f\) is \((L_{f},\alpha,\widehat{\epsilon}_{f})\)-approximately Lipschitz with \(\widehat{\epsilon}_{f}=(L_{f}+\left\|f_{0}\right\|_{\infty})(\widetilde{M}\widetilde{J})^{-\frac{\alpha}{d}}\) and is uniformly bounded by \(\|f_{0}\|_{\infty}\)._ The proof of Theorem 3 is provided in Appendix E.4. Theorem 3 shows the existence of an approximately Lipschitz CNN that is close in the \(L^{\infty}\) norm to any approximately Lipschitz target function on \(\mathcal{S}\). As a corollary, we obtain the approximation error for the actor update. **Corollary 1** (Actor approximation).: _Suppose Assumptions 1 and 4 hold. For any integers \(I\in[2,D]\) and \(\widetilde{M},\widetilde{J}>0\), we let_ \[M =O(\widetilde{M}),\ L=O(\log{(\widetilde{M}\widetilde{J})}+D+\log D),\ J=O(D\widetilde{J}),\] \[R_{1} =(8ID)^{-1}\widetilde{M}^{-\frac{1}{L}}=O(1),\ \log R_{2}=O(\log^{2}(\widetilde{M}\widetilde{J})+D\log{(\widetilde{M}\widetilde{J})}),\] _where \(O(\cdot)\) hides a constant depending on \(\log L_{Q}\), \(\log\frac{C}{1-\gamma}\), \(d\), \(\alpha\), \(\omega\), \(B\), and the surface area \(\mathrm{Area}(\mathcal{S})\). If \(\eta_{k}\), \(\tau_{k}\), \(\lambda_{k}\), \(\mathcal{W}_{k}\) and \(\Theta_{k}\) are as specified in Lemma 5 for all \(k\geq 0\), and \(\epsilon_{Q}=\frac{C}{1-\gamma}(\overline{L}_{Q}+1)(\widetilde{M}\widetilde{J})^{-\frac{\alpha}{d}}\), then for any \(w_{k}\in\mathcal{W}_{k}\) and \(\theta_{k}\in\Theta_{k}\), there exists \(\theta\in\Theta_{k+1}\) such that_ \[\left\|f_{\theta}(\cdot,a)-\lambda_{k+1}g_{k+1}^{\star}(\cdot,a)\right\|_{\infty}\leq\frac{3\epsilon_{Q}}{1-\gamma_{\rho}^{2}}.\] _Here, \(g_{k+1}^{\star}\), \(L_{Q}\) and \(\overline{L}_{Q}\) are defined as in Lemmas 1 and 3 and (26)._ We note that the requirement on the proximity constant \(\epsilon_{Q}\) in Corollary 1 is the same as the approximation error in Theorem 2. This alignment maintains the consistency of the Lipschitz constraints imposed on \(\mathcal{W}_{k}\), \(\Theta_{k}\), and \(\Theta_{k+1}\). In this case, the actor approximation error is comparable to the critic approximation error, and they both depend on the CNN architecture. Therefore, a large CNN class \(\mathcal{F}=\mathcal{F}(M,L,J,I,R_{1},R_{2})\) guarantees the existence of good approximations to both the state-action value function and the policy.

### Sample Complexity

We have demonstrated that CNN approximation can be applied to both the state-action value function and the policy. To make sure that the solutions to the ERM subproblems (16) and (22) indeed provide good approximations for \(Q^{\pi_{k}}\) and \(\pi_{k+1}^{\star}\), the number of samples must be sufficient. In this section, we derive the sample complexity for Algorithm 1. To be more precise, we consider the expected number of oracle accesses to the transition kernel \(\mathcal{P}\) and the cost function \(c\) for Algorithm 1 to find a policy \(\pi_{K}\) that satisfies \(\mathbb{E}\big{[}V^{\pi_{K}}(\rho)-V^{\star}(\rho)\big{]}\leq\epsilon\). We keep using the same notation for the CNN class \(\mathcal{F}=\mathcal{F}(M,L,J,I,R_{1},R_{2})\) and its Lipschitz-restricted version \(\mathcal{F}_{\mathrm{Lip}}\) as defined in (27), as well as the parameter spaces \(\mathcal{W}_{\mathrm{Lip}}\) and \(\Theta_{\mathrm{Lip}}\) as denoted in (28) and (29).
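Before stating the sample-size requirements, we recall how samples are drawn: each ERM subproblem uses states from the visitation distribution \(\nu_{\rho}^{\pi_{k}}\) in (6). One standard sampler, consistent with (6) though not necessarily identical to the algorithm in Appendix A, stops a rollout at a geometrically distributed time. A minimal sketch follows, where `rho`, `step`, and `pi` are placeholder callables for the initial-state oracle, the generative-model oracle, and the policy.

```python
import numpy as np

def sample_visitation(rho, step, pi, gamma, rng):
    """Draw one state s ~ nu_rho^pi. With G ~ Geometric(1 - gamma) on {1, 2, ...}
    and T = G - 1, we have P(T = t) = (1 - gamma) * gamma^t, so the state reached
    after T rollout steps is distributed exactly as the mixture in (6)."""
    T = rng.geometric(1.0 - gamma) - 1
    s = rho()  # s_0 ~ rho, via the initial-state oracle
    for _ in range(T):
        p = pi(s)  # action probabilities pi(.|s)
        a = rng.choice(len(p), p=p)
        s = step(s, a)  # generative-model oracle: s' ~ P(.|s, a)
    return s  # costs gamma / (1 - gamma) transition queries in expectation
```

The pairs \((s^{\prime}_{i,a},a^{\prime}_{i,a})\sim\overline{\nu}^{\pi_{k}}_{\mathcal{P}(\cdot|s_{i},a)}\) in (15) can be drawn the same way, starting the rollout from \(s^{\prime}\sim\mathcal{P}(\cdot|s_{i},a)\) and sampling the final action from \(\pi_{k}\).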
The following theorem characterizes the number of samples \(N\) sufficient for an accurate critic update at the \(k\)-th iteration.

**Theorem 4** (Critic sample size).: _Suppose Assumptions 1 and 4 hold. For \(k\geq 0\), we let \(\eta_{k}\) and \(\tau_{k}\) be the same as in Theorem 1 and \(\mathcal{W}_{k}=\mathcal{W}_{\mathrm{Lip}}(\frac{C}{1-\gamma},L_{Q},\alpha,\epsilon_{Q})\) with_

\[M=O(N^{\frac{d}{d+2\alpha}}),\ L=O(\log N+D+\log D),\ J=O(D),\ I\in[2,D],\ R_{1}=O(1),\]
\[\log R_{2}=O(\log^{2}N+D\log N),\ \epsilon_{Q}=(L_{Q}^{2}+C^{2}/(1-\gamma)^{2})D^{\frac{3\alpha}{2\alpha+d}}N^{-\frac{\alpha}{2\alpha+d}}.\]

_If we take sample size \(N=\widetilde{O}\big(\frac{\sqrt{|\mathcal{A}|}}{1-\gamma}\gamma_{\rho}^{-(K+1)}\big)^{\frac{d}{\alpha}+2}\), then \(\mathbb{E}\big[\mathcal{L}_{\mathrm{critic}}(w_{k};\pi_{k})\big]\leq C^{2}\gamma_{\rho}^{2(k+1)}\) holds for all \(k\leq K\) in Algorithm 1._

_Moreover, if Assumptions 2 and 3 hold, then for any \(\epsilon>0\), it suffices to let_

\[N=\widetilde{O}\left(\frac{\kappa C\left(\sqrt{C_{\nu}}+\log|\mathcal{A}|\right)\sqrt{|\mathcal{A}|}}{(1-\gamma)^{3}\epsilon}\right)^{\frac{d}{\alpha}+2}\]

_so that \(\mathbb{E}\big[\mathcal{L}_{\mathrm{critic}}(w_{k};\pi_{k})\big]\leq C^{2}\gamma_{\rho}^{2(k+1)}\) for all \(k\leq K\), where \(K\) is the iteration number given in Theorem 1 that guarantees \(\mathbb{E}\big[V^{\pi_{K}}(\rho)-V^{\star}(\rho)\big]\leq\epsilon\). Here, \(O(\cdot)\) and \(\widetilde{O}(\cdot)\) hide the constant depending on \(D^{\frac{6\alpha}{2\alpha+d}}\), \(\overline{L}_{Q}\), \(\log L_{Q}\), \(\log\frac{C}{1-\gamma}\), \(d\), \(\alpha\), \(\omega\), \(B\), and the surface area \(\mathrm{Area}(\mathcal{S})\)._

The proof of Theorem 4 is provided in Appendix D.1. As shown in Theorem 4, as the iteration count \(K\) increases, the required number of samples \(N\) grows geometrically at rate \(\widetilde{O}(\gamma_{\rho}^{-\frac{d}{\alpha}-2})\) per iteration. By Theorem 1, the total number of iterations is \(\widetilde{O}(\log\frac{1}{\epsilon})\), so this growth does not continue for long. As a result, the number of samples for the last iteration is \(\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})\), and the overall sample complexity for critic updates is of the same order up to logarithmic terms. Moreover, the exponent over \(\epsilon\) is again the intrinsic dimension \(d\) instead of the data dimension \(D\), which implies avoidance of the curse of dimensionality.

We now turn to derive a similar bound for actor updates. As shown in Lemma 5, with adequately chosen temperature parameters, we can restrict the actor network class with constant parameters for all iterations, and the resulting target actor will have the same approximate Lipschitzness guarantee as the subsequent actor network. Hence we have the following Theorem 5 characterizing the sufficient sample size for accurate actor updates.

**Theorem 5** (Actor sample size).: _Suppose Assumptions 1 and 4 hold._
_For \(k\geq 0\), we let \(\eta_{k}\) and \(\tau_{k}\) be the same as in Theorem 1, \(\lambda_{k}=\frac{C\gamma_{\rho}^{k}}{1-\gamma_{\rho}}\), \(\mathcal{W}_{k}=\mathcal{W}_{\mathrm{Lip}}(\frac{C}{1-\gamma},L_{Q},\alpha,\epsilon_{Q})\) and \(\Theta_{k}=\Theta_{\mathrm{Lip}}(\frac{C}{(1-\gamma_{\rho}^{2})(1-\gamma)},\frac{L_{Q}}{1-\gamma_{\rho}^{2}},\alpha,\frac{\epsilon_{Q}}{1-\gamma_{\rho}^{2}})\) with_

\[M=O(N^{\frac{d}{d+2\alpha}}),\ L=O(\log N+D+\log D),\ J=O(D),\ I\in[2,D],\ R_{1}=O(1),\]
\[\log R_{2}=O(\log^{2}N+D\log N),\ \epsilon_{Q}=(L_{Q}^{2}+C^{2}/(1-\gamma)^{2})D^{\frac{3\alpha}{2\alpha+d}}N^{-\frac{\alpha}{2\alpha+d}}.\]

_If we take sample size \(N=\widetilde{O}\Big(\frac{\sqrt{|\mathcal{A}|}\gamma_{\rho}^{-(K+1)}}{(1-\gamma_{\rho}^{2})(1-\gamma)}\Big)^{\frac{d}{\alpha}+2}\), then \(\mathbb{E}\big[\mathcal{L}_{\mathrm{actor}}(\theta_{k+1};\theta_{k},w_{k})\big]\leq(1-\gamma_{\rho})^{2}\) holds for all \(k\leq K\) in Algorithm 1._

_Moreover, if Assumptions 2 and 3 hold, then for any \(\epsilon>0\), it suffices to let_

\[N=\widetilde{O}\left(\frac{\kappa^{2}C\left(\sqrt{C_{\nu}}+\log|\mathcal{A}|\right)\sqrt{|\mathcal{A}|}}{(1-\gamma)^{4}\epsilon}\right)^{\frac{d}{\alpha}+2}\]

_so that \(\mathbb{E}\big[\mathcal{L}_{\mathrm{actor}}(\theta_{k+1};\theta_{k},w_{k})\big]\leq(1-\gamma_{\rho})^{2}\) for all \(k\leq K\), where \(K\) is the iteration number given in Theorem 1 that guarantees \(\mathbb{E}\big[V^{\pi_{K}}(\rho)-V^{\star}(\rho)\big]\leq\epsilon\). Here, \(O(\cdot)\) and \(\widetilde{O}(\cdot)\) hide the constant depending on \(D^{\frac{6\alpha}{2\alpha+d}}\), \(\overline{L}_{Q}\), \(\log L_{Q}\), \(\log\frac{C}{1-\gamma}\), \(d\), \(\alpha\), \(\omega\), \(B\), and the surface area \(\mathrm{Area}(\mathcal{S})\)._

The proof of Theorem 5 is provided in Appendix D.2, which is similar to the proof of Theorem 4. Compared to Theorem 4, Theorem 5 requires a \(\widetilde{O}((1-\gamma_{\rho}^{2})^{-\frac{d}{\alpha}-2})\) times larger sample size because the target actor in each iteration has a worse approximate Lipschitzness than the target critic. Nevertheless, we can align the sample size to the larger one for actor updates so that both the actor and the critic will be accurate. Combining the results together, we establish the overall sample complexity for Algorithm 1.

**Corollary 2** (Overall sample complexity).: _Suppose Assumptions 1 to 4 hold._
_For \(k\geq 0\), we let \(\eta_{k}\) and \(\tau_{k}\) be the same as in Theorem 1, \(\lambda_{k}=\frac{C\gamma_{\rho}^{k}}{1-\gamma_{\rho}}\), \(\mathcal{W}_{k}=\mathcal{W}_{\mathrm{Lip}}(\frac{C}{1-\gamma},L_{Q},\alpha,\epsilon_{Q})\) and \(\Theta_{k}=\Theta_{\mathrm{Lip}}(\frac{C}{(1-\gamma_{\rho}^{2})(1-\gamma)},\frac{L_{Q}}{1-\gamma_{\rho}^{2}},\alpha,\frac{\epsilon_{Q}}{1-\gamma_{\rho}^{2}})\) with_

\[M=O(N^{\frac{d}{d+2\alpha}}),\ L=O(\log N+D+\log D),\ J=O(D),\ I\in[2,D],\ R_{1}=O(1),\]
\[\log R_{2}=O(\log^{2}N+D\log N),\ \epsilon_{Q}=(L_{Q}^{2}+C^{2}/(1-\gamma)^{2})D^{\frac{3\alpha}{2\alpha+d}}N^{-\frac{\alpha}{2\alpha+d}}.\]

_Then for \(\epsilon>0\), it suffices to set \(N=\widetilde{O}\Big(\frac{\kappa^{2}C\big(\sqrt{C_{\nu}}+\log|\mathcal{A}|\big)\sqrt{|\mathcal{A}|}}{(1-\gamma)^{4}\epsilon}\Big)^{\frac{d}{\alpha}+2}\), and the expected number of oracle calls for Algorithm 1 to find a \(\pi_{K}\) satisfying \(\mathbb{E}\big[V^{\pi_{K}}(\rho)-V^{\star}(\rho)\big]\leq\epsilon\) is_

\[\widetilde{O}\Big(\kappa^{\frac{2d}{\alpha}+5}C_{\nu}^{\frac{d}{2\alpha}+1}|\mathcal{A}|^{\frac{d}{2\alpha}+2}(1-\gamma)^{-\frac{4d}{\alpha}-10}C^{\frac{d}{\alpha}+2}\epsilon^{-\frac{d}{\alpha}-2}\Big).\]

_Here, \(O(\cdot)\) and \(\widetilde{O}(\cdot)\) hide the constant depending on \(D^{\frac{6\alpha}{2\alpha+d}}\), \(\overline{L}_{Q}\), \(\log L_{Q}\), \(\log\frac{C}{1-\gamma}\), \(d\), \(\alpha\), \(\omega\), \(B\), and the surface area \(\mathrm{Area}(\mathcal{S})\)._

Proof.: Let \(K\) be the iteration number in Theorem 1. By Theorems 4 and 5, our specification of \(N\) ensures that \(\mathbb{E}\big[\mathcal{L}_{\mathrm{critic}}(w_{k};\pi_{k})\big]\leq C^{2}\gamma_{\rho}^{2(k+1)}\) and \(\mathbb{E}\big[\mathcal{L}_{\mathrm{actor}}(\theta_{k+1};\theta_{k},w_{k})\big]\leq(1-\gamma_{\rho})^{2}\) for all \(k\leq K\). Note that we have \(|\mathcal{A}|\) actions in total, and by Lemma 6, each sample requires \(O(\frac{1}{1-\gamma})\) oracle calls. As a result, the overall sample complexity is

\[O\Big(\frac{KN|\mathcal{A}|}{1-\gamma}\Big)=\widetilde{O}\Big(\frac{1}{\log\frac{1}{\gamma_{\rho}}}\kappa^{\frac{2d}{\alpha}+4}C_{\nu}^{\frac{d}{2\alpha}+1}|\mathcal{A}|^{\frac{d}{2\alpha}+2}C^{\frac{d}{\alpha}+2}(1-\gamma)^{-\frac{4d}{\alpha}-9}\epsilon^{-\frac{d}{\alpha}-2}\Big)\leq\widetilde{O}\Big(\kappa^{\frac{2d}{\alpha}+5}C_{\nu}^{\frac{d}{2\alpha}+1}|\mathcal{A}|^{\frac{d}{2\alpha}+2}C^{\frac{d}{\alpha}+2}(1-\gamma)^{-\frac{4d}{\alpha}-10}\epsilon^{-\frac{d}{\alpha}-2}\Big),\]

where the inequality uses \(\frac{1}{\log\frac{1}{\gamma_{\rho}}}\leq\frac{1}{1-\gamma_{\rho}}=\frac{\kappa}{1-\gamma}\).

Corollary 2 characterizes the expected number of oracle accesses to the environment for finding an \(\epsilon\)-optimal (in the sense of the expected value function) policy \(\pi_{K}\). The resulting sample complexity \(\widetilde{O}\Big(\kappa^{\frac{2d}{\alpha}+5}C_{\nu}^{\frac{d}{2\alpha}+1}|\mathcal{A}|^{\frac{d}{2\alpha}+2}(1-\gamma)^{-\frac{4d}{\alpha}-10}C^{\frac{d}{\alpha}+2}\epsilon^{-\frac{d}{\alpha}-2}\Big)\) has no exponential dependence on the data dimension \(D\). In our assumption, \(d\ll D\), thus the sample complexity does not suffer from the curse of dimensionality.
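To make the scaling in Corollary 2 concrete, the following minimal sketch evaluates the dominant term of the oracle-call bound; all hidden constants and logarithmic factors are dropped, so only ratios between evaluations are meaningful, and the parameter values are purely illustrative:

```python
def oracle_calls(eps, d, alpha, gamma, kappa, C, C_nu, n_actions):
    """Dominant scaling of the Corollary 2 bound (constants and logs dropped)."""
    return (kappa ** (2 * d / alpha + 5)
            * C_nu ** (d / (2 * alpha) + 1)
            * n_actions ** (d / (2 * alpha) + 2)
            * (1 - gamma) ** (-4 * d / alpha - 10)
            * C ** (d / alpha + 2)
            * eps ** (-d / alpha - 2))

# Halving the accuracy target multiplies the cost by 2**(d/alpha + 2), which
# depends on the intrinsic dimension d, not the ambient dimension D.
kwargs = dict(d=4, alpha=1.0, gamma=0.9, kappa=2.0, C=1.0, C_nu=4.0, n_actions=8)
print(oracle_calls(0.05, **kwargs) / oracle_calls(0.1, **kwargs))  # 2**6 = 64
```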
## Conclusion and Discussion

We have derived the overall sample complexity for NPMD (Algorithm 1) on Lipschitz MDP with intrinsically low-dimensional state space \(\mathcal{S}\). Our result gives a concrete characterization of the expected number of samples required for an \(\epsilon\)-optimal policy under mild regularity assumptions and shows no curse of dimensionality. We make a few remarks about this result.

**Tightness in \(\epsilon\).** The sample complexity \(\widetilde{O}(\epsilon^{-\frac{d}{\alpha}-2})\) can be interpreted as two parts. The first part \(\widetilde{O}(\epsilon^{-2})\) comes from iterations of the NPMD algorithm and is optimal up to logarithmic terms. It matches the complexity of PMD in the tabular case (Lan, 2023) and NPG on linear MDP (Yuan et al., 2023). The second part \(\widetilde{O}(\epsilon^{-\frac{d}{\alpha}})\) comes from function approximation on the \(d\)-dimensional state space manifold. Intuitively, it matches the number of states \(|\mathcal{S}_{\text{dis}}|\) that appears in the complexity of tabular PMD if we discretize the continuous state space into a finite set of points \(\mathcal{S}_{\text{dis}}\subset\mathcal{S}\).4 This part scales with the intrinsic dimension \(d\) of \(\mathcal{S}\) and can be interpreted as the result of neural networks adapting to the low-dimensional state space geometry. It is yet to be examined whether the overall complexity is tight in \(\epsilon\).

Footnote 4: We note that directly performing discretization is difficult when \(\mathcal{S}\) has a complicated geometric structure.

**Dependence on the cost function scale.** The sample complexity reasonably depends on \(C^{-1}\epsilon\), which can be viewed as the relative error. Therefore, as long as \(\epsilon\) scales with the cost function, the term \(C^{\frac{d}{\alpha}+2}\epsilon^{-\frac{d}{\alpha}-2}\) in the complexity bound remains the same. Although the scaling of the cost function can still affect the hidden constant depending on \(\log L_{Q}=\log\left(L_{c}+\frac{\gamma C}{1-\gamma}L_{\mathcal{P}}\right)\) and \(\log\frac{C}{1-\gamma}\), it will not have a major impact since the dominating term is the normalized Lipschitz constant \(\overline{L}_{Q}\), which is invariant to the scaling of the cost function.

**Distribution mismatch and concentrability.** The distribution mismatch coefficient \(\kappa\) in (23) and the concentrability coefficient \(C_{\nu}\) in Assumption 3 have been widely used in the analysis of PMD-type methods in both the tabular setting (Xiao, 2022) and linear function approximation (Agarwal et al., 2021; Alfano and Rebeschini, 2022; Yuan et al., 2023). The mismatch coefficient \(\kappa\) occurs when the initial distribution \(\rho\) is different from the optimal visitation distribution \(\nu_{\rho}^{\pi^{\star}}\). When Assumption 2 does not hold, that is, \(\rho\) has no full support, \(\kappa\) can possibly be infinity, leading to a vacuous complexity bound. This mismatch coefficient is unavoidable even for the analysis of tabular PMD (Xiao, 2022) and can affect the iteration complexity through \(\gamma_{\rho}\) as shown in Theorem 1. The concentrability coefficient \(C_{\nu}\) comes from the change of error measures. In the \(k\)-th iteration of NPMD, we cannot directly sample states from \(\nu_{\rho}^{\pi_{k+1}}\) or \(\nu_{\rho}^{\pi^{\star}}\) because the corresponding policies \(\pi_{k+1}\) and \(\pi^{\star}\) are not available yet. Instead, we sample states from \(\nu_{\rho}^{\pi_{k}}\) and solve the ERM subproblems with these samples.
As a result, the actor and critic errors are naturally measured in \(\nu_{\rho}^{\pi_{k}}\), and \(C_{\nu}\) comes in when measuring the errors in \(\nu_{\rho}^{\pi_{k+1}}\) and \(\nu_{\rho}^{\pi^{\star}}\), which are related to the performance difference between consecutive policies. The concentrability coefficient is unavoidable when we have function approximation errors measured in the \(L^{2}\) norm. To remove \(C_{\nu}\), one has to consider the exact PMD case where there is no function approximation at all, or derive \(L^{\infty}\) error bounds for critic and actor updates, both of which are intractable for continuous state spaces. As suggested in Yuan et al. (2023) for finite state space, both coefficients can potentially be improved if we decouple the sampling distribution in Algorithm 1 and the evaluation distribution in the policy optimization problem (1). Indeed, one can replace \(\rho\) in (1) by another distribution \(\rho^{\prime}\), and replace the initial distribution in Algorithm 2 by \(\rho\). In this case, we can choose \(\rho^{\prime}\) with full support to satisfy Assumption 2, and \(\kappa\) becomes \(\kappa^{\prime}=\left\|\frac{\mathrm{d}\nu_{\rho^{\prime}}^{\pi^{\star}}}{\mathrm{d}\rho^{\prime}}\right\|_{\infty}\). The optimality gap can be translated as

\[V^{\pi_{k}}(\rho)-V^{\star}(\rho)\leq\left\|\frac{\mathrm{d}\rho}{\mathrm{d}\rho^{\prime}}\right\|_{\infty}(V^{\pi_{k}}(\rho^{\prime})-V^{\star}(\rho^{\prime})),\]

which holds because \(V^{\pi_{k}}(s)-V^{\star}(s)\geq 0\) for every state, so changing the measure from \(\rho\) to \(\rho^{\prime}\) inflates the gap by at most a factor of \(\|\mathrm{d}\rho/\mathrm{d}\rho^{\prime}\|_{\infty}\). Similarly, Assumption 3 can be replaced by

\[\chi^{2}(\nu_{\rho^{\prime}}^{\pi},\nu_{\rho}^{\pi_{k}})+1\leq C_{\nu}^{\prime}\]

for some \(C_{\nu}^{\prime}\). However, we note that the pathological behavior of distribution mismatch and concentrability in the continuous state space is not always removable, as it requires the Radon-Nikodym derivatives to exist, and it is difficult to ensure \(\kappa^{\prime}<\kappa\) and \(C_{\nu}^{\prime}<C_{\nu}\) for a better complexity. To the best of our knowledge, it remains an open problem to study function approximation policy gradient methods in continuous state space without Assumptions 2 and 3.

**Computational concerns.** In each iteration of Algorithm 1, there are two ERM subproblems with approximately Lipschitz constraints, which are assumed to be solvable. However, solving such problems is non-trivial due to the non-convex objectives and the approximately Lipschitz constraints. In practice, one can use gradient-based methods such as gradient descent (GD) and its variants to minimize the objectives, but little is known about their theoretical convergence behavior. Recently there has been some work studying the GD dynamics in the NTK or mean-field regime (Jacot et al., 2018; Song et al., 2018), but the gap remains as they cannot fully explain the global convergence behavior of GD-like methods, and their results cannot adapt to the low-dimensional manifold structure. Meanwhile, there is no existing result on optimization with approximate Lipschitzness constraints. One can apply Lipschitz regularization methods, such as spectral regularization (Yoshida and Miyato, 2017; Gogianu et al., 2021), gradient regularization (Gulrajani et al., 2017), projected gradient descent for Lipschitz constant constraints (Gouk et al., 2021), and adversarial training (Miyato et al., 2018). Most of these techniques are heuristic, so they cannot exactly control the Lipschitzness of networks. Nevertheless, they could result in approximately Lipschitz networks as it is a relaxed condition.
We leave the study of approximately Lipschitz-constrained optimization of neural networks for future work.

**Overparameterization.** In modern deep learning practice, there is a propensity to employ overparameterized models that have more parameters than the number of data points. Our current analysis is based on the classical bias-variance trade-off argument and cannot handle the overparameterized case. Recently, Zhang and Wang (2022) have established deep non-parametric regression results that apply to overparameterized models, but their work does not exploit the low-dimensional structure. It is an interesting future direction to examine whether their work can be extended to the manifold setting and fit into our analysis. We expect it to further close the theory-practice gap in DRL.

**Comparison with value-based methods.** The sample complexity we derive for NPMD matches the bound for the value-based FQI method in Fan et al. (2020) when \(d=D\), while our result is significantly better when the intrinsic dimension \(d\ll D\). This shows that policy-based methods can achieve as good performance as value-based methods in theory. From a technical perspective, value-based methods only have smooth value functions (under the Bellman closedness assumption (Fan et al., 2020; Nguyen-Tang et al., 2022)) to approximate. On the other hand, policy-based methods require repeatedly approximating new policies, whose Lipschitz constants will accumulate. We address this issue by introducing the notion of approximate Lipschitzness, imposing approximately Lipschitz constraints on the neural networks, and establishing an approximation theory for them. Our analysis framework can be applied to more general scenarios where there is iterative refitting of neural networks.

**Beyond Lipschitz MDP.** In this paper, we work on Lipschitz MDP (Assumption 4). In practice, the MDP can be either smoother or not as smooth as a Lipschitz MDP. For the former case, one can have a Hölder smooth MDP with higher exponent, namely \(\alpha>1\), and expect a better sample complexity. If we only consider policy evaluation and value-based algorithms, where the target value function is smooth, then this is possible as suggested by the results from deep supervised learning (Chen et al., 2022). However, for policy-based methods, it is unclear whether neural networks that approximate Hölder functions can have smoothness beyond approximate Lipschitzness. It is a future direction to examine the sample complexity of policy-based methods in smooth MDP. For the latter case, one can consider extending the Lipschitz condition to the more general Sobolev or Besov conditions to deal with spatial inhomogeneity in smoothness. Also, as mentioned in Remarks 4 and 5, one can use a smooth approximation of the non-smooth MDP as a surrogate in this case.

## Acknowledgement

We thank Yan Li for the discussion in the early stage of this work.
2309.07960
Energy balance SED modelling can be effective at high redshifts regardless of UV-FIR offsets
Recent works have suggested that energy balance spectral energy distribution (SED) fitting codes may be of limited use for studying high-redshift galaxies for which the observed ultraviolet and far-infrared emission are offset (spatially `decoupled'). It has been proposed that such offsets could lead energy balance codes to miscalculate the overall energetics, preventing them from recovering such galaxies' true properties. In this work, we test how well the SED fitting code Magphys can recover the stellar mass, star formation rate (SFR), specific SFR, dust mass and luminosity by fitting 6,706 synthetic SEDs generated from four zoom-in simulations of dusty, high-redshift galaxies from the FIRE project via dust continuum radiative transfer. Comparing our panchromatic results (using wavelengths 0.4-500$\mu$m, and spanning $1<z<8$) with fits based on either the starlight ($\lambda_\mathrm{eff} \le 2.2\,\mu$m) or dust ($\ge 100\,\mu$m) alone, we highlight the power of considering the full range of multi-wavelength data alongside an energy balance criterion. Overall, we obtain acceptable fits for 83 per cent of the synthetic SEDs, though the success rate falls rapidly beyond $z \approx 4$, in part due to the sparser sampling of the priors at earlier times since SFHs must be physically plausible (i.e. shorter than the age of the Universe). We use the ground truth from the simulations to show that when the quality of fit is acceptable, the fidelity of Magphys estimates is independent of the degree of UV/FIR offset, with performance very similar to that previously reported for local galaxies.
P. Haskell, D. J. B. Smith, R. K. Cochrane, C. C. Hayward, D. Anglés-Alcázar
2023-09-14T18:00:06Z
http://arxiv.org/abs/2309.07960v1
# Energy balance SED modelling can be effective at high redshifts regardless of UV-FIR offsets

###### Abstract

Recent works have suggested that energy balance spectral energy distribution (SED) fitting codes may be of limited use for studying high-redshift galaxies for which the observed ultraviolet and far-infrared emission are offset (spatially 'decoupled'). It has been proposed that such offsets could lead energy balance codes to miscalculate the overall energetics, preventing them from recovering such galaxies' true properties. In this work, we test how well the SED fitting code Magphys can recover the stellar mass, star formation rate (SFR), specific SFR, dust mass and luminosity by fitting 6,706 synthetic SEDs generated from four zoom-in simulations of dusty, high-redshift galaxies from the FIRE project via dust continuum radiative transfer. Comparing our panchromatic results (using wavelengths 0.4-500 \(\mu\)m, and spanning \(1<z<8\)) with fits based on either the starlight (\(\lambda_{\rm eff}\leq 2.2\,\mu\)m) or dust (\(\geq 100\)\(\mu\)m) alone, we highlight the power of considering the full range of multi-wavelength data alongside an energy balance criterion. Overall, we obtain acceptable fits for 83 per cent of the synthetic SEDs, though the success rate falls rapidly beyond \(z\approx 4\), in part due to the sparser sampling of the priors at earlier times since SFHs must be physically plausible (i.e. shorter than the age of the Universe). We use the ground truth from the simulations to show that when the quality of fit is acceptable, the fidelity of Magphys estimates is independent of the degree of UV/FIR offset, with performance very similar to that previously reported for local galaxies.

keywords: galaxies: high-redshift - galaxies: fundamental parameters - methods: data analysis

## 1 Introduction

Spectral energy distribution (SED) fitting offers a powerful method of estimating galaxy physical properties from photometry. SED fitting programs take as input the available photometry, which can range from \(>30\) bands in the best-studied fields to \(<10\) elsewhere, then use models of varying complexity to infer the shape of the full SED and hence the underlying physical properties (for an introduction to SED fitting see e.g. Walcher et al., 2011; Conroy, 2013). The energy balance code Magphys (da Cunha et al., 2008 - hereafter DC08) performs \(\chi^{2}\) fitting using two sets of pre-built libraries of model SEDs with a representative range of SFHs and dust models for star-forming galaxies. The energy balance criterion works in such a way that Magphys considers only combinations of SFH and dust emission that are energetically consistent, in the sense that the energy absorbed by dust in the rest-frame UV is re-radiated in the FIR. During the fit, Magphys finds the SFH and dust model that best fits the data, and calculates probability density functions (PDFs) for a variety of property values by marginalising over all of the models which satisfy the energy balance criterion.

To determine the fidelity of the properties derived from SED fitting, three testing techniques have been used in previous studies. The first is to compare the derived physical parameters to those derived using simpler methods. DC08 tested how well Magphys could fit observations from the _Spitzer_ Infrared Nearby Galaxy Survey (SINGS; Kennicutt et al., 2003), producing acceptable best-fit \(\chi^{2}\) results for 63 of the 66 galaxies.
They also tested how well Magphys could recover the properties of 100 of its own, randomly selected, models with noise added to the photometry. Here, \(M_{\rm star}\), SFR and \(L_{\rm dust}\) were reported to be recovered to a high degree of accuracy. Similarly, Noll et al. (2009) tested the alternative energy balance SED fitting code CIGALE (Boquien et al., 2019) using the SINGS galaxies, replacing DC08's UBV observations with those from Munoz-Mateos et al. (2009). Here, \(L_{\rm dust}\) estimates compared well (\(\pm 0.03\) dex) with those derived by Draine et al. (2007); similarly, the SFR estimates compared well (\(0.06\pm 0.05\) dex) with those provided by Kennicutt (1998b) based on H\(\alpha\) emission (e.g. Kennicutt, 1998a). An alternative testing technique is to compare the results of different fitting programs when applied to the same dataset. This will not provide evidence that the results are correct, but does give confidence that a given code performs similarly to others. Best et al. (2022 - in preparation) tested three energy balance based fitters - Magphys, CIGALE and BAGPIPES (Carnall et al., 2018) - together with AGNfitter (Calistro Rivera et al., 2016). The four codes were each used to estimate \(M_{\rm star}\) and SFR for galaxies in the Bootes, Lockman Hole and ELAIS-N1 fields of the LOFAR Two Metre Sky Survey (LoTSS; Shimwell et al., 2017) deep fields first data release (Duncan et al., 2021; Kondapally et al., 2021; Sabater et al., 2021; Tasse et al., 2021). The results of the runs were compared to determine how well they agreed with each other. For galaxies with no AGN, Magphys, CIGALE and BAGPIPES typically agreed to within 0.1 dex for stellar mass, with AGNfitter differing by 0.3 dex. Similar levels of agreement were found for the SFRs of galaxies found not to contain an AGN. For galaxies with an AGN the situation was more mixed as neither Magphys nor BAGPIPES are designed to handle AGN emission. Hunt et al. (2019) compared the results of applying Magphys, CIGALE and GRASIL (Silva et al., 1998) to a sample of 61 galaxies from the Key Insights on Nearby Galaxies: a Far-Infrared Survey with _Herschel_ (KINGFISH) survey (Kennicutt et al., 2011), including 57 of the SINGS galaxies. They found that stellar masses estimated using 3.6\(\mu\)m luminosity agreed with all three codes to within 0.2 dex. Similarly, SED-derived SFR estimates were within 0.2 dex of those derived using FUV+FIR luminosities and H\(\alpha\)+24\(\mu\)m luminosities. The results for \(M_{\rm dust}\) were more mixed, with GRASIL giving values 0.3 dex higher than Magphys, CIGALE, or the value determined using a single-temperature modified black body. A similar approach with an even broader selection of fourteen SED fitting codes was taken by Pacifici et al. (2023), who found agreement on stellar mass estimates across the ensemble, but some discrepancies in their SFR and dust attenuation results. More recently, Cheng et al. (2023) used a modified version of Magphys (Magphys+photo-z; Battisti et al., 2019) to determine the photometric redshifts of 16 sub-millimetre galaxies (SMGs). The results were compared to the redshifts derived using EAZY (Brammer et al., 2008), finding that for most sources the results were consistent. The final, and perhaps most promising, technique for validating SED fitting is to use simulated galaxies where the 'right' answer is known in advance. Wuyts et al.
(2009) used the HYPERZ (Bolzonella et al., 2000) SED fitting code on GADGET-2 (Springel, 2005) simulations to recover mass, age, E(B-V) and \(A_{V}\) under a variety of conditions. They concluded that recovery of properties for ellipticals was generally good (residuals between 0.02 and 0.03 dex) with slightly poorer results for disks (residuals of 0.03 to 0.35 dex), and with residuals increasing further (0.02 to 0.54 dex) during periods of merger-triggered star formation. Hayward and Smith (2015, hereafter HS15) used Magphys on two GADGET-3 (Springel, 2005) simulations of an isolated disk and a major merger of two disk galaxies at \(z=0.1\). Snapshots were taken at 10 Myr intervals and the radiative transfer code SUNRISE (Jonsson, 2006) used to produce observations from 7 different lines of sight around the simulation. In both scenarios, the attenuated SED was recovered with an acceptable fit (\(\chi^{2}\) within the 99 per cent confidence threshold; see Smith et al. 2012 for details) except for the time around the peak starburst/coalescence phase of the merger simulation. In both scenarios, \(L_{\rm dust}\) was recovered well with \(M_{\rm star}\) recovered to within 0.3 dex and SFR within 0.2 dex. \(M_{\rm dust}\) was recovered less well, but still within 0.3 dex for the isolated galaxy and 0.5 dex for the merger. The conclusion from this study is that these properties of local galaxies can typically be recovered to within a factor of 1.5 - 3. Smith and Hayward (2018) studied a resolved simulated isolated disk, using spatial resolution as fine as 0.2 kpc. They found that Magphys produced statistically acceptable results for \(M_{\rm star}\), \(L_{\rm dust}\), SFR, sSFR and \(A_{V}\) for over 99 per cent of pixels within the r-band effective radius. At higher redshifts, Dudzeviciute et al. (2020, hereafter D20) used EAGLE (Schaye et al., 2015, Crain et al., 2015) simulations with SKIRT generated photometry (Baes et al., 2011, Camps and Baes, 2020) to validate the performance of Magphys for studying galaxies with redshifts up to 3.4. They found that Magphys gave a remarkably linear correlation with the true (simulated) values, though with significant scatter (at the level of 10, 15 and 30 per cent for the dust mass, SFR and stellar masses, respectively) and significant systematic offsets (of up to \(0.46\pm 0.10\) dex for the recovered stellar mass). These studies all provide evidence that SED fitting, particularly energy balance SED fitting, is working remarkably well and providing results often consistent with the ground truth once the uncertainties are accounted for. However, several authors have questioned whether using an energy balance criterion is appropriate when viewing galaxies for which the UV and FIR are spatially offset from one another (e.g. Casey et al., 2017; Miettinen et al., 2017; Simpson et al., 2017; Buat et al., 2019). In such cases, while 'energy balance' is still expected overall (i.e. energy conservation is presumably not violated), significant spatial decoupling may lead to difficulties in recovering the true properties. Under such circumstances, the attenuation - and thus the intrinsic UV luminosity - may be underestimated because the UV-bright, relatively dust-free regions can result in a blue UV-optical slope even if the bulk of the young stars are heavily dust-obscured.
This concern has recently become testable with the sub-arcsecond resolution provided by the Atacama Large Millimetre/submillimetre Array (ALMA)1, enabling direct observation of UV/optical and FIR offsets. There are now numerous papers reporting spatial offsets. Hodge et al. (2016), Rujopakarn et al. (2016), Gomez-Guijarro et al. (2018) and Rujopakarn et al. (2019) have discovered kpc offsets between star forming regions and centres of stellar mass while investigating the star formation and dust distributions in \(2<z<4.5\) galaxies. Along these lines, Chen et al. (2017) found a significant offset in ALES67.1, an SMG at \(z=2.12\), Cochrane et al. (2021) reported the same in the massive star-forming galaxy SHiZELS-14 at \(z=2.24\), and Bowler et al. (2018) detected a 3 kpc offset between the rest-frame FIR and UV emission in the Lyman-break galaxy ID65666 at \(z\approx 7\). Footnote 1: [http://www.alma.info](http://www.alma.info) The concern over the impact of decoupling between the dust and starlight is such that new SED fitting codes such as MICHI2 (Liu, 2020) and Stardust (Kokorev et al., 2021) mention the _absence_ of energy balance as a key advantage in favour of using these codes for studying galaxies where spatial offsets are likely to be a factor. In Liu et al. (2021), MICHI2 produced results very similar to Magphys and CIGALE for a sample of high redshift galaxies, with stellar mass and dust luminosity estimates obtained to within 0.2 - 0.3 dex of those obtained using the two energy-balance codes. Similarly, Kokorev et al. (2021) used Stardust to fit 5,000 IR bright galaxies in the GOODS-N and COSMOS fields, producing results which compared well with those derived using CIGALE with a mean \(M_{\rm dust}\) residual of 0.09 dex, a mean \(L_{\rm IR}\) residual of 0.2 dex and a mean \(M_{\rm star}\) residual of 0.1 dex (albeit with a significant scatter of 0.3 dex). An additional test of the likely impact of spatial offsets was conducted by Seille et al. (2022), who used the CIGALE code to model the Antennae Galaxy, Arp244, which is known to have very different UV and IR distributions (Zhang et al., 2010). Seille et al. (2022) found that the total stellar mass and SFR were consistent, whether they fitted the integrated photometry of the galaxy or fitted 58 different regions of Arp244 independently and summed the results (i.e. performance very similar to that found by Smith and Hayward, 2018 for simulated galaxies without spatial offsets). In this context, we now seek to further test the efficacy of energy balance SED fitting for these more challenging dusty, high redshift, star-forming galaxies by using high-resolution simulations with differing degrees of spatial offset between the apparent UV/FIR emission. This paper is structured as follows. Section 2 describes the tools and methods used to create the observations and to fit the SEDs; Section 3 presents the results of the fitting including the derived values for several galaxy properties; Section 4 discusses these in the context of previous papers and Section 5 summarises the conclusions. Throughout this work we adopt a standard cosmology with \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{M}=0.3\), and \(\Omega_{\Lambda}=0.7\).

## 2 Method

This section describes the simulation data and the creation of the synthetic observations. It also provides a brief introduction to Magphys, details of the simulations, and how they were subsequently analysed.
### Computing the SEDs of simulated galaxies

We analyze a set of 4 cosmological zoom-in simulations from the FIRE project2 that were run using the FIRE-2 code (Hopkins et al., 2018) down to \(z=1\). The simulations use the code GIZMO (Hopkins, 2015)3, with hydrodynamics solved using the mesh-free Lagrangian Godunov "MFM" method. Both hydrodynamic and gravitational (force-softening) spatial resolution are set in a fully-adaptive Lagrangian manner with fixed mass resolution. The simulations include cooling and heating from a meta-galactic background and local stellar sources from \(T\approx 10-10^{10}\,\mathrm{K}\); star formation in locally self-gravitating, dense, self-shielding molecular, Jeans-unstable gas; and stellar feedback from OB & AGB mass-loss, SNe Ia & II, multi-wavelength photo-heating and radiation pressure with inputs taken directly from stellar evolution models. The FIRE-2 physics, source code, and all numerical parameters are _exactly_ identical to those in Hopkins et al. (2018). Footnote 2: [http://fire.northwestern.edu](http://fire.northwestern.edu) Footnote 3: [http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html](http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html) The specific sample of simulations studied in this paper includes the halos first presented in Feldmann et al. (2016). The FIRE-2 simulations for these halos were introduced, along with a novel on-the-fly treatment of black hole seeding and growth, in Angles-Alcazar et al. (2017). These halos were chosen because they are representative of the high-redshift, massive, dusty star-forming galaxies found in infrared-selected observational samples, with Cochrane et al. (2019) showing that they present a clumpy dust distribution together with very different morphologies for stellar mass, dust, gas and young stars. At \(z=2\), the galaxies central to the halos have half-light radii of 0.73, 0.98, 0.81 and 0.91 kpc; for additional information on these galaxies see Angles-Alcazar et al. (2017) as well as Cochrane et al. (2019), Wellons et al. (2020), Parsotan et al. (2021) and Cochrane et al. (2022). To generate synthetic SEDs, Monte Carlo dust radiative transfer was performed on each time snapshot of the simulated galaxies in post-processing using the code SKIRT4. SKIRT assigns single-age stellar population SEDs to star particles in the simulations according to their ages and metallicities. It then propagates photon packets through the simulated galaxies' ISM to compute the effects of dust absorption, scattering, and re-emission. Snapshots of the galaxies' evolution were taken at 15 - 25 Myr intervals with each galaxy 'observed' from 7 positions that uniformly sampled inclination angles from view 0 (aligned with the angular momentum vector) in steps of \(30^{\circ}\) to view 6 (anti-aligned). For full details of the SKIRT calculations, see Cochrane et al. (2019, 2022). This procedure yielded 6,706 SEDs across the four simulated galaxies, spanning \(1<z<8\). Footnote 4: [http://www.skirt.ugent.be/](http://www.skirt.ugent.be/) To compute photometry from the SEDs, we convolved the SEDs with appropriate filter response curves for the 18 bands listed in Table 1. These filters were chosen for similarity with previous work in the LoTSS deep fields (e.g. Smith et al., 2021), providing good coverage of the spectrum from the UV to the FIR with which to test how Magphys performs in these idealised conditions. Figure 1 shows the filter coverage for an example SED at z = 1, along with the emergent SED generated by SKIRT.
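As an illustration of this band-convolution step, a minimal sketch is given below (the actual SKIRT output format and filter transmission curves are not reproduced here, and we ignore the distinction between energy- and photon-counting detectors); the synthetic photometry in each band is simply the response-weighted mean of the emergent SED:

```python
import numpy as np

def band_flux(wavelength_um, f_lambda, filt_wavelength_um, filt_response):
    """Response-weighted mean flux density of an SED in one band.

    wavelength_um, f_lambda : the emergent SED from the radiative transfer.
    filt_wavelength_um, filt_response : the filter transmission curve.
    """
    # interpolate the filter curve onto the SED wavelength grid
    response = np.interp(wavelength_um, filt_wavelength_um, filt_response,
                         left=0.0, right=0.0)
    return np.trapz(f_lambda * response, wavelength_um) / np.trapz(response, wavelength_um)

# e.g. photometry = [band_flux(lam, sed, *filt) for filt in filter_curves]
```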
Figure 2 examines the relationship between the properties of our simulated galaxies and those of high redshift sub-millimetre galaxy populations in which spatial UV-FIR offsets have been observed. We compared four properties with observations, specifically the SFR relative to the galaxy main sequence (MS; upper left panel), the relationship between sub-mm flux density and \(M_{\mathrm{dust}}\) (upper right), the degree of \(V\) band extinction (lower left), as well as the magnitude of the UV/IR offsets (lower right) in relation to studies in the literature. In the upper left panel we have compared the SFR in each snapshot with the MS parameterisation from Schreiber et al. (2015) modified for our adopted Chabrier (2003) IMF using the method of Madau & Dickinson (2014), as a function of redshift. The magenta band indicates the typical \(\pm 0.3\) dex scatter associated with the MS (e.g. Tacchella et al., 2022). The simulated galaxies lie either on or above the MS in the vast majority of cases, and are therefore consistent with dusty, star forming galaxies. The upper right panel of Figure 2 shows the sub-millimetre flux density, \(S_{870}\), as a function of the dust mass for the simulated galaxies and for the SMGs published in D20. While the simulations do not occupy the parameter space of the brightest SMGs, there is significant overlap, and they do lie along the same submm/dust mass relationship (see Hayward et al., 2011, Cochrane et al., 2023). The lower left panel shows how the \(V\)-band extinction (\(A_{V}\)) for the simulations (the blue solid line indicates the median, with shading indicating the values enclosed by the 16th and 84th percentiles of the distribution at each redshift) compares with the corresponding values for the SMG samples from D20 (in purple) and Hainline et al. (2011, indicated by the red points with error bars). Although the D20 sample is on average more obscured than our simulations, similarity to the Hainline et al. (2011) SMGs is evident. The lower right panel shows the range of offsets between the UV and FIR emission in redshift bins. The solid lines indicate the mean simulated offset (blue for peak-to-peak, red for light-weighted mean), with shaded regions indicating the area enclosed by the 16th and 84th percentiles at each redshift. The black, red and green symbols indicate ALMA sources from Rujopakarn et al. (2016), Rujopakarn et al. (2019) and Lang et al. (2019). Finally, the short green line marks the mean offset from Lang et al. (2019) over 20 SMGs with \(1.6<z<2.5\). To summarise, Figure 2 demonstrates that the simulated sources are predominantly dusty star-forming galaxies. While the D20 SMG sample is more extreme, the degree of extinction and the magnitude of the UV-FIR spatial offsets in the simulations show significant overlap with values published in the literature. The simulations are therefore a useful testing ground for determining the extent of our ability to recover the true properties of galaxies with plausible UV-FIR offsets using Magphys.

### Magphys

Magphys is an SED modelling code using Bayesian inference to derive best-fit SEDs as well as estimates (best-fit, median likelihood, and probability distribution functions) for a wide range of galaxy properties. A full description can be found in DC08 and da Cunha et al. (2015), but we include a brief overview.
Magphys uses two libraries of model galaxies: the first, the library of star-formation histories (SFH), consists of 50,000 models each comprising a UV/optical SED and associated galaxy properties; the second, the dust library, comprises 25,000 models each with an IR SED and associated properties. The SFH library is built using the IMF of Chabrier (2003) and the stellar population synthesis (SPS) model of Bruzual & Charlot (2003). Exponentially declining star formation histories are superposed with random bursts, in such a way that a burst of star formation has occurred in half of the SFH library models within the last 2 Gyr. Common to both libraries is the use of the Charlot & Fall (2000) two-component dust model. In this model, stellar populations younger than 10 Myr are attenuated by a greater amount than older stellar populations, under the assumption that these young stars are still embedded within their 'birth clouds'. These stellar populations are subject to a total optical depth \(\hat{\tau}_{\rm BC}+\hat{\tau}_{\rm ISM}\), whereas older populations 'see' an optical depth of only \(\hat{\tau}_{\rm ISM}\), from the diffuse ISM. Charlot & Fall (2000) define the optical depth seen by stellar emission as

\[\hat{\tau}_{\lambda}=\begin{cases}\hat{\tau}_{\lambda}^{\rm BC}+\hat{\tau}_{\lambda}^{\rm ISM}&\text{for stars}<10\,\mathrm{Myr},\\ \hat{\tau}_{\lambda}^{\rm ISM}&\text{for stars}\geq 10\,\mathrm{Myr},\end{cases}\]

where \(\hat{\tau}_{\lambda}\) is the total optical depth for \(\lambda\), \(\hat{\tau}_{\lambda}^{\rm BC}\) is the optical depth of the birth clouds and \(\hat{\tau}_{\lambda}^{\rm ISM}\) is the optical depth of the ISM. These latter two are defined in Magphys such that:

\[\hat{\tau}_{\lambda}^{\rm BC}=(1-\mu)\hat{\tau}_{V}(\lambda/5500\,\text{\AA})^{-1.3}, \tag{1}\]

\[\hat{\tau}_{\lambda}^{\rm ISM}=\mu\hat{\tau}_{V}(\lambda/5500\,\text{\AA})^{-0.7}, \tag{2}\]

where \(\hat{\tau}_{V}\) is the mean \(V\) band optical depth and \(\mu\) represents the fraction of \(\hat{\tau}_{V}\) arising from the ISM.
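Equations 1 and 2 translate directly into code; the following minimal sketch (the parameter values in the usage line are purely illustrative) evaluates the optical depth seen by a stellar population and the corresponding attenuation factor, using the hard 10 Myr age threshold described above:

```python
import numpy as np

def tau_hat(lam_angstrom, tau_v, mu, young):
    """Charlot & Fall (2000) optical depth seen by a stellar population.

    Populations younger than 10 Myr see the birth cloud plus the diffuse
    ISM (Equations 1 and 2); older populations see only the diffuse ISM.
    """
    x = np.asarray(lam_angstrom) / 5500.0
    tau_ism = mu * tau_v * x ** -0.7
    if not young:
        return tau_ism
    tau_bc = (1.0 - mu) * tau_v * x ** -1.3
    return tau_bc + tau_ism

# Attenuation factor exp(-tau) at 1500 A for a young population, with
# illustrative values tau_V = 1.5 and mu = 0.3:
print(np.exp(-tau_hat(1500.0, tau_v=1.5, mu=0.3, young=True)))
```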
The dust library is built from three main components: emission from very small grains (\(<0.01\,\mu\)m) which can reach high temperatures if they absorb a UV photon; large grains (between \(0.01\,-\,0.25\,\mu\)m) in thermal equilibrium with the interstellar radiation field; and polycyclic aromatic hydrocarbons (PAHs) which are responsible for emission line features in the mid-infrared. The contribution of each component to the SEDs of the birth clouds and the ISM is chosen to broadly reproduce the range of SEDs found in nearby star-forming galaxies. The total IR SED is then modelled as the sum of the ISM and the birth cloud components.

The SFH and dust libraries are linked together in such a manner that the starlight absorbed by dust at short wavelengths is re-radiated at longer wavelengths, i.e. the energy is balanced. During the fit, as well as ensuring that energy conservation (i.e. energy balance) is satisfied by construction (i.e. the luminosity absorbed by dust equals that emitted by dust), Magphys combines those models in the optical library with those in the IR library that have similar contributions from dust in the ISM to the overall dust energy budget (the fraction of luminosity absorbed by the diffuse ISM component and that emitted by the diffuse ISM component, respectively). This is parameterised in Magphys using the \(f_{\mu}\) parameter; in the high-redshift version used in this work, values for the SFH and dust libraries must have \(\Delta f_{\mu}<0.2\) for the combination to be acceptable. In this way, each galaxy is fitted against a wide variety of 'empirical but physically-motivated' (DC08) SFHs and dust content. By calculating the best-fit \(\chi^{2}\) for each model combination that satisfies the conditions, a likelihood function is built for each galaxy property by assuming that \(L\propto\exp\left(-\frac{\chi^{2}}{2}\right)\). When all combinations of models in the libraries have been processed, a PDF is produced for each property by marginalising the individual likelihoods. Magphys outputs a pair of files for each fitted galaxy: one containing the best-fit SED (an example of both the attenuated and unattenuated versions is shown, alongside the model photometry, in Figure 1), while the other contains the best-fit model values and the PDFs.

\begin{table} \begin{tabular}{l l l} \hline Facility & Filter & \(\lambda_{\rm eff}\) (\(\mu\)m) \\ \hline CFHT & Megacam \(u\) & 0.39 \\ PanSTARRS & \(g\) & 0.48 \\ PanSTARRS & \(r\) & 0.61 \\ PanSTARRS & \(i\) & 0.75 \\ PanSTARRS & \(z\) & 0.87 \\ PanSTARRS & \(y\) & 0.96 \\ UKIDSS & \(J\) & 1.2 \\ UKIDSS & \(K\) & 2.2 \\ _Spitzer_ & IRAC ch1 & 3.4 \\ _Spitzer_ & IRAC ch2 & 4.5 \\ _Spitzer_ & IRAC ch3 & 5.6 \\ _Spitzer_ & IRAC ch4 & 8.0 \\ _Spitzer_ & MIPS 24 \(\mu\)m & 24 \\ _Herschel_ & PACS green & 100 \\ _Herschel_ & PACS red & 160 \\ _Herschel_ & SPIRE & 250 \\ _Herschel_ & SPIRE & 350 \\ _Herschel_ & SPIRE & 500 \\ \hline \end{tabular} \end{table} Table 1: The filters used to create synthetic observations from the simulated photometry. The first column gives the telescope/survey, the second the instrument/filter name, and the third the effective wavelength of the filter.

Figure 1: An example SED obtained using Magphys, demonstrating the generally close agreement between the true and Magphys-derived SEDs. In the upper panel, the solid black line shows the best-fit Magphys-derived SED, while the dashed black line indicates the Magphys estimate of the unattenuated SED; the solid blue line represents the attenuated SED generated by SKIRT. The square markers represent the best-fit photometry, with the SKIRT photometry shown as the points with error bars (as described in the legend). The coloured lines above the lower horizontal axis show the normalised filter curves used in this study. The lower panel shows the residual value in \(\sigma\) units between each observation and the best-fit SED. The residual value is calculated as (observed flux - model flux)/observed error. This SED corresponds to simulated galaxy A1, snapshot 276, view 0, z=1.00.

This study uses the high-redshift version of Magphys (da Cunha et al., 2015), which differs from the low-redshift version in two important ways: firstly, the prior distributions are modified to include higher dust optical depths, higher SFRs and younger ages; secondly, the effects of absorption in the inter-galactic medium (IGM) at UV wavelengths are taken into account. Some studies have sought to determine the extent to which AGN can influence the results of SED fitting (e.g. HS15, Best et al., _in preparation_). However, neither the simulations nor the SED fitting code used in this paper include AGN, and so this important aspect will not be discussed further.
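The likelihood-weighting step described above can be written schematically in a few lines (this is an illustration of the procedure, not Magphys itself; the library arrays and binning are placeholders): each acceptable model combination contributes a likelihood \(\propto\exp(-\chi^{2}/2)\), and the marginalised PDF of any parameter is the likelihood-weighted histogram of its library values.

```python
import numpy as np

def parameter_pdf(chi2_values, param_values, bins=50):
    """Marginalised PDF of one parameter over an SED model library.

    chi2_values  : best-fit chi-squared of each acceptable model combination.
    param_values : that parameter's value in each of those combinations.
    """
    # L propto exp(-chi^2 / 2); subtract the minimum for numerical stability
    weights = np.exp(-0.5 * (chi2_values - chi2_values.min()))
    pdf, edges = np.histogram(param_values, bins=bins, weights=weights,
                              density=True)
    return pdf, edges
```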
### Processing the data

To test how well Magphys is able to recover the intrinsic properties of the simulated galaxies, we ran Magphys four times on each synthetic SED, using different combinations of photometry and assumed redshift:

* Run A: used all 18 filters;
* Run B: used all 18 filters, but with all SEDs shifted to a redshift of 2. This run was used as a comparison to detect any bias in the results due to redshift effects. This is discussed in section 4.1;
* Run C: used only the UV to near-IR filters (\(u-K\));
* Run D: used only the FIR filters (PACS 100 \(\mu\)m - SPIRE 500 \(\mu\)m).

Runs C & D are discussed in section 3.2.3. We assumed a signal-to-noise ratio of 5 in every band, following Smith & Hayward (2018).

Figure 2: The properties of the simulated galaxies in their observational context. **Upper left:** the relationship between the simulated galaxies’ SFRs and the galaxy main sequence (MS); for each snapshot, the \(y\)-axis shows the difference between the simulation SFR and the MS, with the magenta band indicating the typical \(\pm\) 0.3 dex scatter associated with the MS (e.g. Tacchella et al., 2022). **Upper right:** the relationship between the sub-mm flux density \(S_{870}\) and dust mass; the blue points represent the simulated data, while the orange points show galaxies from D20. **Lower left:** the variation in \(A_{V}\) as a function of redshift for the simulations (for which the median value at each \(z\) is shown by the solid line, with shading indicating values enclosed by the 16th and 84th percentiles of the distribution), along with a corresponding distribution from D20 (shown in purple). The SMG sample from Hainline et al. (2011) is shown by the red points. **Lower right:** the mean UV/FIR peak to peak (blue) and light-weighted mean (red) spatial offsets in redshift bins: the shading indicates the region enclosed by the 16th and 84th percentiles at each redshift, while the solid line indicates the median value. The red, green and black circles are values for individual sources taken from the literature (as indicated in the legend), while the solid green line marks the reported average spatial offset across 20 SMGs from Lang et al. (2019).

One of the key aims of this work is to determine how Magphys performs when analyzing galaxies for which the observed UV and FIR emission are spatially 'decoupled.' To do this, we characterise the offset between the UV and FIR emission in three different ways:

1. the peak to peak offset: this is defined as the distance in parsecs between the points of maximum flux in the UV (0.3 \(\mu\)m) and FIR (100 \(\mu\)m) images;
2. the light-weighted mean offset: this is defined as the distance in parsecs between the light-weighted centres for the UV (0.3 \(\mu\)m) and FIR (100 \(\mu\)m) emission;
3. the Spearman rank coefficient (Myers & Well, 2003) comparing the degree of correlation between the UV (0.3 \(\mu\)m) image and the FIR (100 \(\mu\)m) image. A Spearman rank coefficient of \(\rho>0.8\) is considered necessary for a strong correlation. Spearman also returns a \(p\) value indicating a correlation confidence level; 99 per cent of our results returned \(p\) values indicating that the probability of the reported correlation being due to chance was \(<0.0001\).

The images were filtered to allow only the data points with intensity above the 80th percentile in either the UV or FIR images to be included in the analysis. This was done to avoid the comparatively very large number of low intensity pixels from unduly dominating the result. The 80th percentile was chosen as a reasonable value after comparing the results using different percentile values of the UV and FIR images by eye.
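A minimal sketch of all three proxies is given below (map alignment, orientation and the pixel scale are assumed, and no PSF or noise treatment is included); it computes the peak-to-peak offset, the light-weighted mean offset, and the Spearman rank coefficient over the brightest pixels:

```python
import numpy as np
from scipy.stats import spearmanr

def offset_proxies(uv, fir, pc_per_pixel):
    """Compute the three UV/FIR offset proxies from two aligned 2D maps.

    uv, fir : rest-frame 0.3 micron and 100 micron surface-brightness maps.
    pc_per_pixel : the physical pixel scale in parsecs.
    """
    # (i) peak-to-peak offset, in parsecs
    py_uv, px_uv = np.unravel_index(np.argmax(uv), uv.shape)
    py_fir, px_fir = np.unravel_index(np.argmax(fir), fir.shape)
    peak_offset = np.hypot(py_uv - py_fir, px_uv - px_fir) * pc_per_pixel

    # (ii) light-weighted mean offset, in parsecs
    yy, xx = np.indices(uv.shape)
    cy_uv, cx_uv = np.average(yy, weights=uv), np.average(xx, weights=uv)
    cy_fir, cx_fir = np.average(yy, weights=fir), np.average(xx, weights=fir)
    mean_offset = np.hypot(cy_uv - cy_fir, cx_uv - cx_fir) * pc_per_pixel

    # (iii) Spearman rank coefficient, keeping only pixels above the
    # 80th percentile in either the UV or FIR image
    bright = (uv > np.percentile(uv, 80)) | (fir > np.percentile(fir, 80))
    rho, p_value = spearmanr(uv[bright], fir[bright])
    return peak_offset, mean_offset, rho, p_value
```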
The three proxies were each calculated using the rest-frame UV and FIR maps for each snapshot and view to provide values that would be possible using real observational data with high enough spatial resolution and sensitivity. As an example, Figure 3 shows two images of the simulated galaxy A1 in the later stages of its evolution; other examples can be seen in Cochrane et al. 2019. The image on the left shows a significant offset between the UV (shown as the blue image) and FIR (shown as contours) intensity, while in the right image (which has the same colour scheme) the UV and FIR appear almost coincident. In both panels the red vectors show the peak-to-peak offset, while the black vectors show the light-weighted offset. The Spearman \(\rho\) value is given in the title of each panel. We also calculated the offsets using the projected maps of the simulated young stars (age \(<10\) Myr) and dust; however, there was no significant difference in the results and so the observed offsets are used throughout this paper. In the following sections, where we compare derived values to true (simulated) values these are expressed as residuals in dex between the 50th percentile of the derived value's likelihood function and the true value:

\[\Delta\log(\mathrm{parameter})=\log_{10}(\mathrm{derived\ value})-\log_{10}(\mathrm{true\ value}). \tag{3}\]

It follows that positive offsets (\(\Delta\)) represent Magphys over-estimates, and negative values indicate under-estimates. Throughout this work, where Magphys results are shown averaged across the seven views of a snapshot, they are the mean of the individual median likelihood estimates.

## 3 Results

In this section we present results from the four runs described in Section 2.3. In all runs a successful fit was defined as one where the \(\chi^{2}\) value was equal to or below the 99 per cent confidence limit (\(\chi^{2}_{\mathrm{max}}\)); this value was taken from standard \(\chi^{2}\) tables. The number of degrees of freedom was calculated as in Smith et al. (2012), which perturbed the output best-fit SEDs from Magphys with random samples from the standard normal distribution and found that it depended on the number of bands in the manner shown in Appendix B of that work. We are using the same Magphys model and have assumed that the relation does not vary with the particular choice of bands or the redshifts of the sources being studied.

### The fraction of mock observations with acceptable fits

From run A we find that Magphys achieved a statistically acceptable fit (i.e. \(\chi^{2}\leq\chi^{2}_{\mathrm{max}}\)) for 83 per cent (5,567 out of 6,706) of the snapshots. Note that the value of \(\chi^{2}_{\mathrm{max}}\) varies with redshift because the SKIRT SEDs do not include wavelengths \(<0.08\,\mu\)m, meaning that we are unable to generate synthetic photometry for the bluest filters at \(z\gtrsim 3.9\). The derived \(\chi^{2}\) values are broadly independent of viewing angle for all galaxies; as an example, Figure 4 shows the \(\chi^{2}\) results for all snapshots and views for the galaxy A1. Figure 5 shows how the fit success rate, averaged across all snapshots and views for all four galaxies, changes with redshift.

Figure 3: Visualisations of two views of galaxy A1, in the later stages of its evolution, showing differing degrees of UV–FIR offsets ranging from kpc-scale projected separation (left) to approximately co-spatial (right). In each panel, the image in blue shows the UV emission, the side colourbars showing the flux density of the emission in MJy/sr. The coloured contours show flux density for the FIR emission, ranging from green (\(3\times 10^{4}\) MJy/sr), to orange (\(5\times 10^{4}\) MJy/sr) to black (\(10^{5}\) MJy/sr). In each panel, the base of the red vector is positioned at the peak FIR emission and the head at the peak UV emission; the base of the black vector is positioned at the light-weighted mean FIR emission and the head at the light-weighted mean UV emission. The title of each plot gives the galaxy name along with redshift, best-fit \(\chi^{2}\) and Spearman \(\rho\) value.
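The acceptance criterion and the residuals of Equation 3 are straightforward to express in code; in the minimal sketch below, the effective number of degrees of freedom for a given band set is an input (in this work it comes from the Smith et al. 2012 calibration rather than being computed here):

```python
import numpy as np
from scipy.stats import chi2

def fit_is_acceptable(chi2_best, n_dof_eff):
    """True if the best-fit chi-squared clears the 99 per cent confidence limit."""
    return chi2_best <= chi2.ppf(0.99, n_dof_eff)

def residual_dex(derived, true):
    """Equation 3: positive values are Magphys over-estimates, in dex."""
    return np.log10(derived) - np.log10(true)
```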
We see from Figure 5 that Magphys can routinely produce acceptable fits to the synthetic photometry up to \(z=4\), but that the success rate drops to 50 per cent at \(z\approx 4.85\) and to zero after \(z\approx 5.9\). Different factors may be contributing to this effect. Firstly, the number of SFHs from the Magphys libraries that are compared with observations is a strong function of redshift. Magphys does not consider SFHs longer than the age of the Universe at a given redshift (the number of SFHs shorter than the Hubble time at each redshift is shown as the dashed line, relative to the right-hand axis in Figure 4) and at \(z\approx 5\) the number of such SFHs in the library is only 20 per cent of those available at \(z\approx 1\). It is therefore clear that the prior is significantly more densely sampled at lower redshifts, leading to more acceptable fits in cases such as this, where the SFH itself is constrained only weakly by the photometry (e.g. Smith and Hayward, 2018). Secondly, at these very early times in the simulations (\(z>5\)), the model galaxies are low mass (\(<10^{9}\,\mathrm{M_{\odot}}\)) and bursts of star formation have a disproportionate influence on a galaxy's bolometric luminosity. This highly stochastic star formation is not well-modelled by the star formation histories included in the Magphys libraries. It is possible that including additional bands of model photometry may provide better results, e.g. by an additional sub-millimetre datapoint providing an 'anchor' point to the Rayleigh–Jeans tail of the dust SED and in doing so enabling tighter constraints on the overall energy balance (though we note that the 500 \(\mu\)m band does sample this side of the dust SED out to \(z\approx 4\)). However, in this work we have chosen to focus on an example set of photometric data appropriate for studying dusty star-forming galaxies in general, and with an enforced SNR = 5 in every band we are not subject to some of the sensitivity (or resolution) limitations associated with using real _Herschel_ data to study galaxies at the highest redshifts. We therefore defer testing our results with different photometric coverage to a future work. Throughout the remainder of this study, we follow the same approach used in previous Magphys works, both observational and numerical (e.g. HS15; Smith et al., 2012; Smith and Hayward, 2018; Smith et al., 2021), and consider only those views for which an acceptable fit was obtained. To investigate the influence of redshift on the Magphys fit rate further, we used Run B, in which the photometry is modified such that all SEDs were placed at \(z=2\). In this run, the size of the libraries and therefore the sampling of the priors used for SED fitting is the same for all snapshots.
We find that the fit success rate increases to 93 per cent for the forced \(z=2\) runs, from 83 per cent for run A. Although it is tempting, we cannot attribute this change solely to the weakening of the SFH prior, since it is also possible that sampling different rest-frame wavelengths could impact the fit success rate (e.g. because of individual spectral features being redshifted into a particular observed bandpass; Smith et al., 2012). These effects are discussed further in Section 4.1. ### Overall Magphys performance In studying the fidelity of the Magphys parameter estimates, we have chosen to focus on five properties likely to be of the widest interest, namely SFR and sSFR (both averaged over the last 100 Myr), \(M_{\mathrm{star}}\), \(M_{\mathrm{dust}}\) and \(L_{\mathrm{dust}}\). The true values for \(M_{\mathrm{star}}\), SFR (averaged over the last 100 Myr), and \(M_{\mathrm{dust}}\) were available from the simulation. The true values for \(L_{\mathrm{dust}}\) were calculated by integrating under the SKIRT-produced rest-frame SED over \(8\,\mu\mathrm{m}<\lambda<1000\,\mu\mathrm{m}\), following Kennicutt (1998a). \begin{table} \begin{tabular}{c c c} \hline z & filters & \(\chi^{2}_{\mathrm{max}}\) \\ \hline 6.6 \(<z\leq\) 8.4 & 15 & 21.67 \\ 5.0 \(<z\leq\) 6.6 & 16 & 23.01 \\ 3.9 \(<z\leq\) 5.0 & 17 & 24.75 \\ 1.0 \(\leq z\leq\) 3.9 & 18 & 26.72 \\ \hline \end{tabular} \end{table} Table 2: The number of filters available and the value of \(\chi^{2}_{\mathrm{max}}\) for different redshift ranges. Figure 4: Magphys produces statistically acceptable fits for virtually all snapshots at \(z<5\), irrespective of viewing angle. The best-fit \(\chi^{2}\) as a function of Universe age is shown for galaxy A1, colour-coded by view number. The \(\chi^{2}\) values have been averaged over bin widths of \(\Delta z=0.2\) (relative to the top horizontal axis) for clarity. The horizontal line indicates the \(\chi^{2}\) threshold below which a fit is deemed acceptable using the Smith et al. (2012) criterion; this value varies with redshift (see Table 2). The dashed line indicates the number of stellar models (relative to the right-hand y-axis) available to Magphys at a given redshift with which to compare the input SED. Although not shown here, qualitatively similar results are obtained for the other simulations (A2, A4 & A8). Figure 5: Magphys success rate in fitting SEDs. The percentage of successful fits averaged across all views and snapshots of all galaxies as a function of redshift; note that standard Poisson errors are too small to be visible. The horizontal line marks a success rate of 50 per cent. The fraction of fits that are statistically acceptable decreases with increasing redshift due to the constraint that the SFH must be shorter than the age of the Universe at that redshift, meaning that the size of the template library decreases with increasing redshift. 
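In code, the acceptance criterion above reduces to a lookup of Table 2; a minimal sketch follows (the thresholds and redshift boundaries are those tabulated, while the function names are ours):

```python
CHI2_MAX = {15: 21.67, 16: 23.01, 17: 24.75, 18: 26.72}  # Table 2

def n_filters(z):
    """Number of synthetic bands available at redshift z (Table 2):
    bluer filters are lost as the 0.08 um SED cut redshifts in."""
    if z > 6.6:
        return 15
    if z > 5.0:
        return 16
    if z > 3.9:
        return 17
    return 18

def fit_is_acceptable(chi2_best, z):
    """A fit is 'successful' if chi2 <= the 99 per cent limit."""
    return chi2_best <= CHI2_MAX[n_filters(z)]
```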
#### 3.2.1 The fidelity of Magphys results over time Figure 6 shows the evolution in the true and derived physical properties of our simulated galaxies as a function of redshift (with a second horizontal axis at the top of each column showing the age of the Universe at each redshift in our adopted cosmology). The different physical properties are shown along successive rows, while the different simulated galaxies are shown in successive columns, as indicated in the text at the top of each column. In each panel, the black line indicates the true values for each property, taken from the simulations, while the red line indicates the mean of the median-likelihood Magphys estimates, where the averaging has been conducted over the seven different viewing angles. Similarly, the shaded red region in each panel indicates the area enclosed by the mean of the 16th and 84th percentiles of each parameter's Magphys PDF (once more averaged over the seven views), to give the reader a feel for the typical error bar. Each lower panel shows the residual, e.g. \(\Delta\log\) (SFR), as defined in Equation 3. In general, Magphys-derived values show a significant degree of consistency, both in the temporal sense and by comparison to the true values. The temporal sense is a valuable test in its own right as, although Magphys fits each snapshot independently, the true values shown in Figure 6 mostly vary smoothly with time. That this is reflected in the Magphys estimates once the error bars are taken into account offers broad encouragement for the use of Magphys with observational data. Below, we discuss the degree of fidelity in the Magphys parameter estimates overall by comparing with the true (simulated) values. It is clear based on even a cursory inspection of the trends visible in Figure 6 that the Magphys estimates have broadly captured the behaviour visible in the true parameter values, such as increasing stellar mass and generally decreasing sSFR. Similar encouragement was found in the earlier work of HS15, though we now extend this to higher-redshift, dustier galaxies for the first time with a sample of very high-resolution simulations. The mean residuals, \(\Delta\log(\mbox{parameter})\), averaged over the full evolution of each simulated galaxy, are shown in Table 3. Averaging the results across all views of all snapshots of all galaxies, we find that the stellar mass is typically underestimated by Magphys, recovered with a mean residual of \(\Delta\log(M_{\rm star})=-0.29\pm 0.09\). This \(3.22\sigma\) result covers a wide range of simulated scenarios, ranging from the early stages of formation, through periods of starburst, tidal disruptions and merger events. By way of comparison, in HS15 the stellar mass was recovered to within 0.2 dex (which was also the typical uncertainty in that work) for the vast majority of snapshots, across both the isolated disk and major merger simulations. The principal exception to this excellent recovery was a 0.4 dex underestimate of the stellar mass during the peak period of AGN activity (which we do not simulate here). D20 also reported a larger systematic underestimation of stellar mass, with a deviation of \(-0.46\pm 0.10\) dex; our results therefore fall between those of these two previous studies. We suggest two factors which may be contributing to this systematic underestimation of the stellar mass. Firstly, a sub-optimal choice of SFH (such as we know we have made in this work, since we can see that the simulated galaxies do not have parametric SFHs in Figure 6) has been shown to produce biased results (Carnall et al., 2019) and in particular an underestimate of stellar mass when applied to star-forming galaxies (Mitchell et al., 2013; Michalowski et al., 2014). Secondly, Mitchell et al. (2013) and Malek et al. 
(2018) have shown that the choice of attenuation law has an impact on the estimation of stellar mass (and it is also clear that the two-component geometry assumed by Magphys is not consistent with the ground truth in the simulations, where the radiative transfer calculates the attenuation due to ISM dust _in situ_). In the second row of Figure 6, we show that the Magphys SFRs for our simulated galaxies are typically accurate to within \(\Delta\log(\mbox{SFR})=-0.11\pm 0.06\) of the true values (\(1.83\sigma\)). Of the five properties highlighted in this study, Figure 6 shows SFR to be the one for which Magphys produces perhaps the most accurate reflection of the true values once the uncertainties are considered. However, there are some points of disagreement that are worth mentioning. The first example of this is for galaxy A1 at \(z\approx 1.7\): this deviation of \(\approx-0.59\pm 0.16\) dex (\(3.7\sigma\)) coincides with a local minimum of \(M_{\rm dust}\), perhaps resulting from a strong outflow, and is associated with a brief reduction in the SFR that is not apparent when averaging over 100 Myr. The second example is for galaxy A2 around \(1.0\leq z\leq 1.5\), at the point where the galaxy has the highest stellar mass (\(M_{\rm star}>10^{11}\) M\({}_{\odot}\)) and is the most quiescent that we have simulated (sSFR \(\approx 10^{-10}\) yr\({}^{-1}\)). For comparison, HS15 found that SFR was typically recovered to around 0.2-0.3 dex accuracy5. D20 reported that SFR was typically underestimated by approximately 20 per cent - very similar to our value of \(\Delta\log(\mbox{SFR})=-0.11\pm 0.06\) dex - attributing this to differences in their adopted SFHs, dust model and geometry. Footnote 5: We note that HS15 compared Magphys 100 Myr-averaged SFRs with instantaneous SFRs rather than values averaged over 100 Myr, as we do here. Due to the bursty SFHs of the simulated galaxies, these values can differ significantly (Sparre et al., 2017; Flores Velázquez et al., 2021). This topic is further discussed below in connection with \(A_{V}\) recovery. The observed effects in sSFR mirror those in stellar mass and SFR, as expected. Averaging over all snapshots and views, we obtain a mean offset of \(\Delta\log(\mbox{sSFR})=0.18\pm 0.13\), a \(1.38\sigma\) result which is consistent with the findings of HS15. Figure 6 highlights the excellent recovery of the true dust mass; averaging over all snapshots reveals a mean residual of \(\Delta\log(M_{\rm dust})=-0.19\pm 0.17\) (\(1.12\sigma\)), suggesting that the results are typically consistent with the true values once the uncertainties are taken into account, consistent with the findings of D20. Overall, \(L_{\rm dust}\) is well recovered, with a mean residual of \(\Delta\log(L_{\rm dust})=-0.09\pm 0.04\); this \(2.25\sigma\) result is again in line with the results of HS15. However, the fifth row of Figure 6 may suggest a weak trend for a larger \(|\Delta\log(L_{\rm dust})|\) in the sense that the Magphys estimates increasingly underestimate the true values as the simulations progress and the galaxies develop lower sSFR (though note that the scale of the residual panel for \(L_{\rm dust}\) is half as large as for the other parameters, which exaggerates the size of the effect). 
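As a reminder, the 'true' \(L_{\rm dust}\) values against which these residuals are measured were obtained by direct integration of the SKIRT rest-frame SEDs over 8-1000 \(\mu\)m. A minimal sketch of that integral follows; the array names and units are assumed, and any consistent luminosity-per-wavelength units will do.

```python
import numpy as np

def dust_luminosity(wav_um, l_lambda):
    """Trapezoidal integral of L_lambda over 8 um < lambda < 1000 um
    (the Kennicutt 1998a definition used for the 'true' L_dust)."""
    m = (wav_um >= 8.0) & (wav_um <= 1000.0)
    x, y = wav_um[m], l_lambda[m]
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule
```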
It is possible that the assumptions inherent in the two-component dust model used by Magphys, originally optimised to reproduce the observations of local star-forming galaxies (DC08), are no longer appropriate for the high-mass (\(M_{\rm star}\approx 10^{11}\,{\rm M_{\odot}}\)), highly star-forming (SFR \(>20\) M\({}_{\odot}\) yr\({}^{-1}\)) galaxies that are simulated here. Finally, while it is not always the case, \(A_{V}\) is in general underestimated, with a mean residual of \(\Delta A_{V}=-0.22\pm 0.07\) (\(3.14\sigma\)), similar to the overall fidelity of the stellar mass recovery. This underestimation of the degree of extinction at \(V\) band may be linked to the typical underestimation of the overall dust luminosity, though it is interesting to note that this does not prevent excellent recovery of the star formation rate for the majority of snapshots. Figure 6: The overall Magphys parameter estimation (red) compared with the true values from the simulation (black); Magphys captures the overall true properties as a function of redshift. The columns refer to galaxies A1, A2, A4 and A8, respectively. In each row, the upper plot presents the evolution against Universe age (upper \(x\)-axis) and redshift (lower \(x\)-axis), and the lower plot shows the residuals on the same \(x\)-axes (note that the range for \(\Delta\log(L_{\rm dust})\) is smaller than that for other properties). The top row presents the evolution of stellar mass, while the four subsequent rows present the corresponding evolution of SFR, sSFR, \(M_{\rm dust}\) and \(L_{\rm dust}\), respectively. In each main panel, the black line indicates the true values, the red line plots the mean across all views of the median recovered value, and the shaded area indicates the region enclosed by the typical error bar on each parameter (i.e. the mean difference between the 16th/84th percentile and the median, for the upper and lower bounds, respectively). In the final row, the black and red lines in the upper plot show the true and recovered values of \(A_{V}\) for the different views, while the lower plot shows the residuals for each view. #### 3.2.2 Searching for systematic trends in the Magphys fit results We used our simulations to determine the consistency of the Magphys-derived galaxy properties across the range of values presented by the simulations. To do this, we binned the residuals defined using Equation 3 across the full range of each property (stellar mass, SFR, sSFR, dust mass and dust luminosity) from the simulations and plotted the median bin residual. To gauge the significance of our results, we also averaged across all occupants of each bin to calculate the typical uncertainty associated with each Magphys fit (although this is by no means constant in our results), and the scatter within each bin. The median residual, typical error bar, and the 16th and 84th percentile values for the scatter were plotted. Systematic trends might be expected to appear as deviations from horizontal lines in these figures; however, our results show that in all cases the Magphys results are remarkably consistent across the full range of values once the two sources of scatter are taken into account, and no further systematic trends can be identified. The plots are shown in Appendix A. #### 3.2.3 The importance of panchromatic data in energy balance fitting We now discuss runs C and D, originally mentioned in Section 2.3; their band selections are sketched below. 
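Concretely, and anticipating the full description in the next paragraph, the two restricted runs simply keep the subset of bands whose effective wavelengths fall within fixed windows. The following sketch is illustrative only; the filter bookkeeping shown is ours, not Magphys's own.

```python
def select_bands(bands, run):
    """Return the photometry subset used by each run, given a dict
    mapping filter name -> effective wavelength in microns."""
    if run == "C":    # starlight only: u to K band
        return {n: w for n, w in bands.items() if 0.4 <= w <= 2.2}
    if run == "D":    # FIR only: PACS and SPIRE
        return {n: w for n, w in bands.items() if 100.0 <= w <= 500.0}
    return dict(bands)  # run A: the full filter set
```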
Run C used only the UV-NIR photometry from \(u\) to \(K\) band (\(0.4\,\mu m<\lambda_{\rm eff}<2.2\,\mu m\)), while run D retained only the FIR data from the PACS and SPIRE instruments (\(100\,\mu m<\lambda_{\rm eff}<500\,\mu m\)). While it is not possible to 'switch off' the energy balance criterion in Magphys, runs C and D enable us to make a direct comparison of the results of 'traditional' SED fitting (i.e. attempting to recover the stellar mass or dust content of a galaxy from the optical/NIR data alone) with both the true values and the full panchromatic run. In both the starlight-only and FIR-only runs, Magphys must rely on the physically-motivated model and the energy balance assumption to estimate the properties usually associated with the missing observations (e.g. estimating the dust mass purely on the basis of the observed starlight, or the stellar mass using only FIR data). Figure 7 shows the results of these runs, comparing the mean \(\Delta\log\) and typical uncertainty for the five properties for each of the three runs A, C & D: full filter set, stellar-only and FIR-only. The left panel of Figure 7 shows the view- and snapshot-averaged \(\Delta\log(M_{\rm star})\) for the three runs. It is immediately clear that although the average \(\Delta\log(M_{\rm star})\) is very similar for the stellar-only (0.31 dex) and all-filter (0.29 dex) runs, including the full set of data does reduce the typical uncertainty (shown by the error bars) from \(\pm 0.20\) dex to \(\pm 0.09\) dex. Unsurprisingly, attempting to estimate the stellar mass using only the FIR data leads not only to a large \(\Delta\log(M_{\rm star})\) but also a significantly larger typical uncertainty (\(\approx 0.42\) dex). In the second panel, we show the corresponding results for \(\Delta\log({\rm SFR})\). The power of panchromatic fitting is again clear, since the largest \(\Delta\log({\rm SFR})\) and typical uncertainty occur for the stellar-only fits, which can be influenced by the dominance of the lowest-attenuation sightlines (meaning that the amount of obscured star formation can be underestimated) as well as being subject to the well-known age-dust degeneracy (e.g. Cimatti et al., 1997). Our results show that FIR-only SFR estimates are more reliable than those using the \(u\) to \(K\)-band photometry alone, since the FIR-only mean \(\Delta\log({\rm SFR})\approx 0.19\pm 0.11\) is significantly closer to the true values than the corresponding stellar-only fits, which have \(\Delta\log({\rm SFR})\approx 0.30\pm 0.29\). The situation is even more pronounced for the recovery of the sSFR, with \(\Delta\log({\rm sSFR})\) for the three runs shown in the central panel of Figure 7. Although the mean \(\Delta\log({\rm sSFR})\) for the stellar-only run is closest to the true values, the typical uncertainties on the panchromatic run are more than a factor of two smaller than the stellar-only estimates. The larger error bar represents a wide range of possible activity levels, making it impossible to unravel the age/dust degeneracy; by adding FIR data, the sSFR is better constrained. This, in turn, enables a constrained determination of the SFR and hence the cause of any observed reddening. For \(M_{\rm dust}\), Figure 7 shows that the addition of stellar data makes very little difference to the mean \(\Delta\log(M_{\rm dust})\), with FIR-only giving results within 0.18 dex and the full filter set 0.19 dex; this is comparable to the typical uncertainties (0.20 dex as opposed to 0.17 dex). 
Using only the stellar data, the mean \(\Delta\log({\rm M_{\rm dust}})\) is 0.26 dex, but the typical uncertainty is significantly increased to 0.64 dex, reflecting the difficulty associated with estimating the dust content of distant galaxies using data probing the starlight alone. Finally, the right-hand panel of Figure 7 shows the recovery of \(L_{\rm dust}\) across the three runs. Interestingly, although the typical uncertainties are similar for the FIR-only and panchromatic runs, the inclusion of the UV/NIR data along with the energy balance criterion perhaps increases the mean \(\Delta\log({\rm L_{\rm dust}})\), although the significance of this difference is low. ### Measuring the effect of UV/FIR 'decoupling' on the fidelity of Magphys results As discussed above, the primary goal of this work is to examine the fidelity of the Magphys results as a function of the degree of correlation or apparent offset between UV and FIR emission, using the three proxies for this 'decoupling' described in Section 2.3. The results are shown in Figure 8, in which the mean \(\Delta\) in dex for each parameter is plotted against the different measures for the degree of separation. Each of the five panels shows the residuals for one of the properties plotted against the degree of separation/correlation as measured by the three proxies. The coloured lines indicate the median residual in log-spaced bins, while the coloured shaded areas show the mean range enclosed by the 16th and 84th percentiles (i.e. the typical 1\(\sigma\) error in the limit of Gaussian statistics), and the grey shaded area shows the 16th and 84th percentile range of the scatter within each bin. The bin occupancy is shown by the grey background histogram, relative to the right-hand axis. \begin{table} \begin{tabular}{c c c c c c c} \hline Galaxy & \(\Delta\log(M_{\rm star})\) & \(\Delta\log({\rm SFR})\) & \(\Delta\log({\rm sSFR})\) & \(\Delta\log(M_{\rm dust})\) & \(\Delta\log(L_{\rm dust})\) & \(\Delta A_{V}\) \\ \hline A1 & -0.37 \(\pm\) 0.08 & -0.10 \(\pm\) 0.06 & 0.26 \(\pm\) 0.12 & -0.05 \(\pm\) 0.14 & -0.10 \(\pm\) 0.04 & -0.30 \(\pm\) 0.07 \\ A2 & -0.28 \(\pm\) 0.08 & -0.21 \(\pm\) 0.07 & 0.06 \(\pm\) 0.13 & -0.11 \(\pm\) 0.15 & -0.10 \(\pm\) 0.04 & -0.20 \(\pm\) 0.07 \\ A4 & -0.27 \(\pm\) 0.08 & -0.08 \(\pm\) 0.06 & 0.18 \(\pm\) 0.13 & -0.26 \(\pm\) 0.19 & -0.07 \(\pm\) 0.04 & -0.20 \(\pm\) 0.07 \\ A8 & -0.24 \(\pm\) 0.09 & -0.05 \(\pm\) 0.06 & 0.19 \(\pm\) 0.14 & -0.35 \(\pm\) 0.21 & -0.08 \(\pm\) 0.03 & -0.19 \(\pm\) 0.07 \\ \hline Mean & -0.29 \(\pm\) 0.09 & -0.11 \(\pm\) 0.06 & 0.18 \(\pm\) 0.13 & -0.19 \(\pm\) 0.17 & -0.09 \(\pm\) 0.04 & -0.22 \(\pm\) 0.07 \\ \hline \end{tabular} \end{table} Table 3: Mean residuals – \(\Delta\log({\rm parameter})\), as defined in Equation 3 – for each property for each galaxy and the average across all galaxies; a negative value indicates an underestimate. The quoted uncertainties indicate the typical uncertainty that Magphys derives on that galaxy parameter (equal to half the difference between the 16th and 84th percentiles of the derived PDF). In many cases the scatter is larger than the typical uncertainties; this is likely to be the result of two effects. Firstly, it reflects the fact that the Magphys results contain a range of uncertainties that cannot be adequately summarized by a single error bar (the uncertainties show significant variation and contain outliers). Secondly, the uncertainties produced by Magphys are likely to be underestimates. 
This is inevitably the case since the range of SEDs contained in any pre-computed library must by definition be smaller than the actual range of galaxy SEDs in the Universe; for example, neither real galaxies nor those in our simulations have truly parametric SFHs. In addition, the Magphys libraries may not be equally appropriate at all stages of our simulations. The average performance of Magphys is remarkably consistent, both as a function of the peak-to-peak distance between the UV and FIR images, and as a function of the light-weighted mean UV to FIR distance. In these cases, the mean \(\Delta\) is less than \(\pm 0.3\) dex for all parameters, across separations ranging from 0 to 10 kpc. In the lower plot of each panel we show the corresponding variation in \(\Delta\) (in dex) as a function of the Spearman \(\rho\) calculated by comparing the UV and FIR images (recall that only the brightest 20 per cent of pixels were included in this calculation). Here again, the logarithmic difference between the derived and true properties appears independent of \(\rho\) once the mean uncertainties are taken into account. ## 4 Discussion ### The redshift dependence of the Magphys fit success rate In Section 3.1 we showed that the fit success rate was a strong function of redshift, with 83 per cent of the mock observations having acceptable \(\chi^{2}\) overall, but no good fits being obtained at \(z>5.9\). Fixing each mock to be observed at \(z=2\) (run B) resulted in an increase in the overall success rate to 93 per cent. A likely explanation for this is that the number of SFHs in the Magphys library is a strong function of redshift (shown as the dashed line in Figure 4, due to the requirement of considering only SFHs shorter than the Hubble time at the observed redshift), which results in significantly worse sampling of the priors at early epochs, particularly when the SFHs of galaxies are so weakly constrained by photometry (e.g. Smith and Hayward, 2018). In support of this idea, Figure 9 shows the ratio of the best-fit \(\chi^{2}\) obtained for our fiducial results (native-redshift run A) to the corresponding value for the SEDs fixed to \(z=2\) (run B). It is clear that there is a systematic trend for the native \(\chi^{2}\) to be worse at \(z>2\) (corresponding to a Universe age of \(\lesssim 3.2\) Gyr in our adopted cosmology) and better at \(z<2\). However, this trend is by no means absolute, indicating that other effects, such as the precise details of the rest-wavelengths being sampled and the number of available filters, may also be playing a role. Interestingly, the fact that the ratio of \(\chi^{2}\) for run A to that of run B does not converge on the right-hand side of this plot may indicate that the size of the Magphys prior library still impacts the fit quality even at \(z<2\), though of course the difference is that at these comparatively late epochs the priors are sufficiently well-sampled to obtain statistically acceptable fits to the data. ### The fidelity of Magphys results for dusty, high-redshift galaxies The principal aim of this study is to determine how the fidelity of the energy balance code Magphys is impacted when it is applied to high-redshift galaxies for which the observed UV and FIR emission are offset, or spatially 'decoupled'. 
For such galaxies, the observed UV light potentially originates from young star clusters that are not spatially co-located with the young stars that dominate the dust heating and thus the FIR emission. Consequently, it is possible that the relatively unobscured young stars could yield a blue UV-optical slope and cause SED modeling codes to underestimate the attenuation. It has been shown that the use of panchromatic data is important when fitting such galaxies (Roebuck et al., 2019), and fitters such as Magphys use energy balance to produce physically motivated, panchromatic models that seek to minimise this underestimation. We determine the efficacy of this approach by analyzing the logarithmic difference, \(\Delta\), between the true and median-likelihood estimates for stellar mass, SFR, specific SFR, dust mass and dust luminosity as a function of three proxies for the degree of 'decoupling' between the UV and FIR data. In all cases, the performance of Magphys appears independent of the degree of UV/FIR 'decoupling' as measured by all three proxies. We therefore conclude that energy balance SED fitting codes can perform just as well in the presence of such effects as they do when the dust and young stars are co-located within a galaxy. We suspect that the explanation for this success is that the Charlot and Fall (2000) dust attenuation model used by Magphys is sufficiently flexible to handle this 'decoupling' in many cases, and that the \(\chi^{2}\) algorithm is doing its job by identifying cases for which the model cannot yield a self-consistent solution (i.e. very low attenuation but high FIR luminosity). This has been shown to be the case for an un-modeled AGN contribution to the SED: Smith et al. (2021) noted that using the \(\chi^{2}\) threshold from Smith et al. (2012), which we have also implemented here, had the effect of flagging the vast majority of LOFAR-detected AGN as bad fits unless the AGN contribution to the emergent luminosity was very small. Of course, it is expected (e.g. Witt and Gordon, 2000) and observed (e.g. Kriek and Conroy, 2013; Boquien et al., 2022; Nagaraj et al., 2022) that the attenuation law is not universal and instead varies by galaxy type. Figure 7: Using Magphys to model panchromatic data gives better overall constraints on galaxy properties than sampling only a subset of the available wavelengths. \(\Delta\log\)(parameter) for each parameter of interest, averaged across all galaxies, for three different Magphys runs: (i) including all available photometry, (ii) stellar only - including only those bands that sample the starlight (\(0.4\,\mu m<\lambda_{\rm eff}<2.2\,\mu m\)), and (iii) FIR only - including only the FIR data (\(100\,\mu m<\lambda_{\rm eff}<500\,\mu m\)), with each set of results colour-coded as in the legend. The error bars on each data point represent the mean uncertainty for each Magphys estimate, based on using the 16th and 84th percentiles of the estimated PDFs. Figure 8: The fidelity of Magphys is largely independent of the extent of any UV/FIR offset, as measured by the three proxies, once the uncertainties are considered. 
\(\Delta\log(\text{parameter})\) as a function of three proxies for the difference between the UV and FIR images - panel (a) presents the data for \(M_{\text{star}}\), (b) for SFR, (c) for sSFR, (d) for \(M_{\text{dust}}\) and (e) for \(L_{\text{dust}}\). For each property, the data points represent the mean over all views and snapshots in that bin. The shaded area of the same colour indicates the area enclosed by the mean 16th and 84th percentile values within the bin. The grey shaded area shows the area enclosed by the 16th and 84th percentile values for the scatter within each bin. The top plot in each panel shows the logarithmic difference \(\Delta\) as a function of the peak-to-peak distance between the UV and FIR images; the second and third plots show the corresponding \(\Delta\) as a function of the light-weighted mean UV-FIR offset and the Spearman rank correlation coefficient \(\rho\) between the 20 per cent brightest pixels in either the UV or FIR images. The short coloured lines adjacent to the left-hand y-axis represent the overall mean value. The grey histograms in each panel (a) to (e) show the bin occupancy relative to the right-hand axis. Should additional flexibility be required in future, we note that other works have explored implementing modifications to the standard dust law, including Battisti et al. (2019), who added a 2175 Å feature to remove a systematic redshift effect, as well as Lo Faro et al. (2017) and Trayford et al. (2020), who allowed the power-law indices of Equations 1 & 2 to vary. However, the fact that there is no scope to easily modify the dust parameterisation assumed in Magphys leaves us no option but to defer further investigation of this potentially important aspect for a future work. The reason that some have claimed that energy balance should fail in galaxies with significant IR-UV offsets is that the unobscured lines of sight should dominate the UV emission, meaning that the attenuation that would be inferred from the observed UV-optical emission would be less than the total attenuation experienced by the stellar population as a whole. However, energy balance codes such as Magphys use the FIR luminosity as a simultaneous constraint on the attenuation, and it would simply not be possible to obtain a satisfactory fit to both the UV-optical and FIR regions of the SED assuming low attenuation when the FIR luminosity is high.6 Furthermore, we note that even in 'normal' galaxies that do not exhibit significant UV-FIR offsets, stars of a given age are not all subject to the same amount of attenuation (e.g. the Charlot & Fall 2000 dust model). Instead, even for a single age and line of sight, there is a distribution of dust optical depths, and this distribution varies with both the stellar age and line of sight considered. The Charlot & Fall (2000) model attempts to capture this complex age and line-of-sight dependence using only two effective optical depths. Though this underlying model is certainly very crude compared to both the simulations and real galaxies, HS15 have already shown that it is adequate to correct for the effects of dust attenuation in at least some low-redshift galaxies. There is no _a priori_ reason to believe that it should 'break' above some offset threshold (which was the motivation for this study). Our results demonstrate that even when the width of the optical depth distribution experienced by young stars is very wide (i.e. 
in our simulations some young stars are almost completely unobscured, whereas others have line-of-sight UV optical depths \(\gg 1\)), the Charlot & Fall (2000) model can still adequately capture the overall effects of dust attenuation in most cases. Footnote 6: It is tempting to investigate this by making a plot similar to Figure 8 but including only those fits that exceed the \(\chi^{2}\) threshold we use to identify the bad fits. However, since the best-fit model is statistically unacceptable, we cannot believe the parameter estimates produced by Magphys in these cases, meaning that such a test is not meaningful. ## 5 Conclusions Recent works (e.g. Hodge et al., 2016; Casey et al., 2017; Miettinen et al., 2017; Simpson et al., 2017; Buat et al., 2019) have questioned whether energy balance SED fitting algorithms are appropriate for studying high-redshift star-forming galaxies, due to observations of offsets between the UV and FIR emission (e.g. Hodge et al., 2016; Rujopakarn et al., 2016; Chen et al., 2017; Bowler et al., 2018; Gomez-Guijarro et al., 2018; Rujopakarn et al., 2019). Clumpy dust distributions within these galaxies may allow a small fraction of relatively unobscured young stars to dominate the blue UV-optical slope, resulting in an underestimation of the attenuation even if the bulk of the young stars are completely dust-obscured. We have used four cosmological zoom-in simulations of dusty, high-redshift galaxies from the FIRE-2 project, together with the radiative transfer code SKIRT, to generate over 6,700 synthetic galaxy SEDs spanning a redshift range \(8>z>1\). We used these model data to test the fidelity of the galaxy properties recovered using the energy balance fitting code Magphys with 18 bands of UV-FIR photometry, building on our previous related studies (HS15; Smith & Hayward 2015, 2018). Our principal findings are as follows: * We find that the high-\(z\) version of Magphys was able to produce statistically acceptable best-fit SEDs for 83 per cent of the synthetic SEDs that we trialled. The fit success rate fell to 50 per cent for galaxies at \(z>4.85\) and zero for galaxies at \(z>5.9\). This reduction in fit success rate has two main contributing factors: 1. the fixed Magphys libraries, combined with the requirement that model SFHs should be shorter than the age of the Universe at any given redshift, reduce the size of the Magphys library available at higher redshifts, meaning that the priors become increasingly poorly sampled at earlier times; 2. the evolution of the simulated galaxies is increasingly stochastic at the earliest times in our simulations due to their lower mass, causing bursts of star formation to have a disproportionate influence on a galaxy's bolometric luminosity that cannot be reconciled with the Magphys prior libraries. * Where statistically acceptable best-fits were obtained, we found that Magphys fits are able to broadly capture the true evolution of the four zoom-in simulations that we studied (steady build-up of stellar mass, generally decreasing sSFR, evolution of dust mass), despite individual snapshots being fit independently. In addition, we find that the fidelity of this recovery is remarkably consistent across a broad range of galaxy properties sampled by the simulations, showing no evidence for strong systematics as a function of stellar mass, SFR, sSFR, dust mass or dust luminosity. 
* Combining UV to FIR observations with an energy balance SED fitting code provides a powerful way to combine multi-wavelength data and obtain the most reliable estimates of the ground-truth galaxy properties. The panchromatic results outperform those obtained by using either the stellar or dust emission alone. * We find no evidence that the performance of Magphys depends on the degree of spatial 'decoupling' between the UV and FIR data, despite suggestions to the contrary by several other works. Indeed, our results show that the fidelity of the galaxy properties derived is very similar to that observed for local galaxies, e.g. in our previous work (Hayward & Smith, 2015). Figure 9: The best-fit \(\chi^{2}\) depends on the size of the Magphys library, which varies with the redshift assumed for the fit. This plot shows the ratio of best-fit \(\chi^{2}\) obtained for run A (at the native redshift) to that obtained in run B (where all SEDs were fixed to \(z=2\)). For galaxies on the left-hand side of this plot the prior gets larger in run B, while for galaxies viewed at later times, the opposite effect is apparent. ## Acknowledgements The authors thank the anonymous referee for their careful review of this paper. PH & DIBS would like to thank the Flatiron Institute for their hospitality. The Flatiron Institute is supported by the Simons Foundation. PH would also like to thank Dr Alyssa Drake for helpful input and suggestions. DIBS acknowledges support from the UK Science and Technology Facilities Council (STFC) under grant ST/V000624/1. This research has made use of NASA's Astrophysics Data System Bibliographic Services. DAA acknowledges support by NSF grants AST-2009687 and AST-2108944, CXO grant TM2-23006X, and Simons Foundation award CCA-1018464. ## Data Availability The data presented in this paper will be shared on reasonable request to the first author. Selected snapshots of the four simulated galaxies analyzed here are available as part of the FIRE-2 public data release; see Wetzel et al. (2023) and http://flathub.flatironinstitute.org/fire
2306.00134
A Quantum Optical Recurrent Neural Network for Online Processing of Quantum Time Series
Over the last decade, researchers have studied the synergy between quantum computing (QC) and classical machine learning (ML) algorithms. However, measurements in QC often disturb or destroy quantum states, requiring multiple repetitions of data processing to estimate observable values. In particular, this prevents online (i.e., real-time, single-shot) processing of temporal data as measurements are commonly performed during intermediate stages. Recently, it was proposed to sidestep this issue by focusing on tasks with quantum output, thereby removing the need for detectors. Inspired by reservoir computers, a model was proposed where only a subset of the internal parameters are optimized while keeping the others fixed at random values. Here, we also process quantum time series, but we do so using a quantum optical recurrent neural network (QORNN) of which all internal interactions can be trained. As expected, this approach yields higher performance, as long as training the QORNN is feasible. We further show that our model can enhance the transmission rate of quantum channels that experience certain memory effects. Moreover, it can counteract similar memory effects if they are unwanted, a task that could previously only be solved when redundantly encoded input signals were available. Finally, we run a small-scale version of this last task on the photonic processor Borealis, demonstrating that our QORNN can be constructed using currently existing hardware.
Robbe De Prins, Guy Van der Sande, Peter Bienstman
2023-05-31T19:19:25Z
http://arxiv.org/abs/2306.00134v1
# A Quantum Optical Recurrent Neural Network for Online Processing of Quantum Time Series ###### Abstract Over the last decade, researchers have studied the synergy between quantum computing (QC) and classical machine learning (ML) algorithms. However, measurements in QC often disturb or destroy quantum states, requiring multiple repetitions of data processing to estimate observable values. In particular, this prevents online (i.e., real-time, single-shot) processing of _temporal_ data as measurements are commonly performed during intermediate stages. Recently, it was proposed to sidestep this issue by focusing on tasks with quantum output, thereby removing the need for detectors. Inspired by reservoir computers, a model was proposed where only a subset of the internal parameters are optimized while keeping the others fixed at random values [1]. Here, we also process quantum time series, but we do so using a quantum optical recurrent neural network (QORNN) of which _all_ internal interactions can be trained. As expected, this approach yields higher performance, as long as training the QORNN is feasible. We further show that our model can enhance the transmission rate of quantum channels that experience certain memory effects. Moreover, it can counteract similar memory effects if they are unwanted, a task that could previously only be solved when redundantly encoded input signals were available. Finally, we run a small-scale version of this last task on the photonic processor Borealis [2], demonstrating that our QORNN can be constructed using currently existing hardware. ## I Introduction In the pursuit of improved data processing, there is an increasing emphasis on combining machine learning (ML) techniques with quantum computing (QC). Building on the established belief that quantum systems can outperform classical ways of computing [3], quantum machine learning (QML) [4] investigates how to design and deploy software and hardware that harnesses the complexity of such systems. Conversely to this 'QC for ML' approach, research is also being carried out to improve QC algorithms and hardware using ML techniques. In classical machine learning, algorithms such as recurrent neural networks (RNNs) [5; 6], transformers [7; 8], long short-term memory (LSTM) networks [9], and reservoir computing (RC) [10] have led to state-of-the-art performance in natural language processing, computer vision, and audio processing. This makes them good sources of inspiration for new QML models. However, the common use of projective measurements in quantum computing leads to the requirement of processing the same input data multiple times to estimate the expectation values of detected observables. This is a real bottleneck for _temporal_ tasks, as such measurements are often carried out at intermediate processing stages, leading to back-actions on the state of the quantum system. On the one hand, this leads to laborious operating procedures and large overheads [11]. On the other hand, it prevents one from performing online time series processing (i.e. constantly generating output signals in real-time, based on a continuous stream of input signals). Recently, an approach was introduced that proposes to sidestep this detection issue by performing online processing of quantum states and thereby removing the need for detectors [1]. The model was inspired by the concept of RC [10], where random dynamical systems, also called reservoirs, are made to process temporal input data. 
RC research has demonstrated that training only a simple output layer to process the reservoir's output signals can achieve state-of-the-art performance in various computational tasks while significantly reducing training costs. Building on this idea, Ref. [1] tackled several computational tasks using a random network of harmonic oscillators and training only the interactions between that network and some input-carrying oscillators. Here, we introduce a quantum optical recurrent neural network (QORNN) in which all interactions are trainable within the Gaussian state formalism [12]. We first compare our model with the findings of Ref. [1] by conducting classical simulations of two computational tasks: the short-term quantum memory (STQM) task and the entangler task. We will provide detailed definitions of these tasks in the following sections. They respectively serve as benchmarks to assess the QORNN's linear memory capabilities and its ability to entangle different states in a time series. The results will demonstrate that relaxing the RC strategy to a fully trainable QORNN indeed increases the performance, as long as training is tractable. We further demonstrate the capabilities of our model by applying it to a quantum communication task. Specifically, we show that the QORNN can enhance the capacity of a quantum memory channel. In such a channel, subsequent signal states are correlated through interactions with the channel's environment. Our network achieves this enhancement by generating an entangled quantum information carrier. Indeed, it is known that the asymptotic transmission rate of memory channels can be higher than the maximal rate achieved by separable (unentangled) channel uses. It is said that the capacity of such channels is 'superadditive'. For a bosonic memory channel with additive Gauss-Markov noise [13], it was previously shown that the generation of such entangled carriers can be performed sequentially (i.e. without creating all channel entries all at once) while achieving near-optimal enhancement of the capacity [14]. Our model achieves the same result, while having a simpler encoding scheme and being more versatile, as it can adapt to hardware imperfections. Moreover, we show that a QORNN can also compensate for unwanted memory effects in quantum channels (the so-called quantum channel equalization or QCE task). Existing work on this task required the availability of redundantly encoded input signals [1]. This undermines the practicality of the method. Moreover, such a redundant encoding is impossible without full prior knowledge of the input states (e.g. when they result from a previous quantum experiment that is not exactly repeatable) because quantum states cannot be cloned. Here, we show that the increased flexibility of the QORNN allows us to lift the restriction of redundant encoding. Additionally, we find that the QORNN's performance can be improved by allowing the reconstruction of the channel's input to be performed with some delay. Finally, we run a small-scale version of the QCE task on the recently introduced photonic processor Borealis [2]. Although the results are restricted by the limited tunability of Borealis' phase modulators, we can still demonstrate that our model can be constructed using currently existing hardware. The rest of this paper is structured as follows. In Section II, we introduce our QORNN model. In Section III, we benchmark our model with the STQM task (III.1) and the entangler task (III.2). 
In Section IV.1, we show how the QORNN can lead to superadditivity in a bosonic memory channel. In Section IV.2, we show that the QORNN can tackle the QCE task without requiring its input signals to be encoded with redundancy. Finally, the results of the experimental demonstration of the QCE task are given in Section IV.3. ## II Model Our QORNN model is presented in Fig. 1. It incorporates an \(m\)-mode circuit \(\mathbf{S}\) that generally consists of beam splitters, phase shifters, and optical squeezers. Such a circuit can be described by a symplectic matrix. Hence, we will further refer to it as a symplectic circuit. The \(m_{\mathrm{io}}\) upper modes at the left (right) side of \(\mathbf{S}\) are the input (output) modes of the QORNN. The remaining modes of \(\mathbf{S}\) are connected from left to right using \(m_{\mathrm{mem}}=m-m_{\mathrm{io}}\) delay lines. The delay lines are equally long and we will further denote them as 'memory modes'. To perform a temporal task, we send a time series of quantum states (e.g., obtained from encoding classical information or a previous quantum experiment) to the input modes of the QORNN. The temporal spacing between the states is chosen equal to the propagation time of \(\mathbf{S}\) and the delay lines, such that we can describe the QORNN operation in discrete time. Because of the memory modes, output states depend on multiple past input states, which grants the QORNN some memory capacity. By training the circuit \(\mathbf{S}\) (essentially training the parameters of its constituent gates), the QORNN can learn to process temporal data. In further sections, we sometimes restrict \(\mathbf{S}\) to be _orthogonal_ symplectic. Such a circuit comprises only beam splitters and phase shifters, with optical squeezers being excluded. When applicable, we will denote it as \(\mathbf{O}\). ## III Benchmark Tasks ### Short-term quantum memory task The goal of the short-term quantum memory (STQM) task is to recall states that were previously fed to the QORNN after a specific number of iterations, denoted by \(D\). This task is visualized in Fig. 2 for the case where \(m_{\mathrm{io}}=2\) and the QORNN consists of an orthogonal circuit. Note that if we were to use a general symplectic network instead of an orthogonal one, optical squeezers could be added and optimized, such that the results would be at least equally good. However, we will show that we can reach improved performance without including optical squeezers in the QORNN, which is beneficial for an experimental setup. Figure 1: QORNN model. Quantum states are repeatedly sent into the upper \(m_{\mathrm{io}}\) modes. These input modes are combined with \(m_{\mathrm{mem}}\) memory modes and sent through a symplectic network \(\mathbf{S}\) (i.e. a network of beam splitters, phase shifters, and optical squeezers). Afterwards, the state on the upper \(m_{\mathrm{io}}\) modes is collected as output, while the remaining \(m_{\mathrm{mem}}\) modes are looped back to the left side of \(\mathbf{S}\). We focus our attention on the case where \(D=1\). The input states are chosen randomly from a set of squeezed thermal states (more details in Section A of Methods). Fig. 3(a) shows the average fidelity between an input state at iteration \(k\) and an output state at iteration \(k+D\), as a function of \(m_{\rm mem}\) and \(m_{\rm io}\). We see that the QORNN perfectly solves the STQM task if \(m_{\rm io}\leq m_{\rm mem}\). This is easy to understand as \({\bf O}\) can be trained to perform several SWAP operations (i.e. 
operations that interchange the states on two different modes). More specifically, the QORNN can learn to swap every input mode with a different memory mode, such that the input state is memorized for a single iteration before being swapped back to the corresponding output mode. For \(m_{\rm io}>m_{\rm mem}\), such a SWAP-based circuit is not possible, leading to less than optimal behavior of the QORNN. In Fig. 3(b), the fidelity values obtained by the RC-inspired model of Ref. [1] are subtracted from our results. Across all values of \(m_{\rm mem}\) and \(m_{\rm io}\), we observe that the QORNN scores equally well or better. Although Ref. [1] also achieves a fidelity of 1 for certain combinations of \(m_{\rm io}\) and \(m_{\rm mem}\), the set of these combinations is smaller than for the QORNN. Moreover, it is important to note that the RC-inspired design limits the number of trainable parameters, making a SWAP-based solution impossible in general. As a result, prior to training, it is more challenging to guarantee optimal performance of the RC-inspired model, while this is not the case for the QORNN. ### Entangler task The objective of the entangler task is to entangle different states of a time series that were initially uncorrelated. The performance of this task is evaluated based on the average logarithmic negativity between output states at iterations \(k\) and \(k+S\). Negativity [12] is an entanglement measure for which higher values indicate greater levels of entanglement between the states. Note that if we consider output states with spacing \(S=1\), then we aim to entangle nearest-neighbor states. This case is visualized in Fig. 4 for \(m_{\rm io}=1\). We choose vacuum states as input and hence the circuit \(\mathbf{S}\) should _not_ be orthogonal, as we want to generate states with nonzero average photon numbers. For \(m_{\rm io}=1\), Fig. 5(a) displays the average logarithmic negativity obtained by the QORNN for various values of \(m_{\rm mem}\) and \(S\). For a given spacing, the performance increases with \(m_{\rm mem}\). This can be attributed to the fact that a higher value of \(m_{\rm mem}\) leads to a bigger circuit \(\mathbf{S}\), such that more entangling operations can be applied. It can also be seen that the performance roughly stays the same along the diagonal (\(S=m_{\rm mem}\)) and along lines parallel to the diagonal. This can be explained by the findings of Section III.1, which indicate that increasing \(m_{\rm mem}\) can effectively address the increased linear memory requirements of the task that arise from increasing \(S\). Figure 2: Setup for the STQM task with \(m_{\rm io}=2\). The QORNN consists of an orthogonal symplectic network \({\bf O}\). Pulses of different colors represent a time series of quantum states. A state that is sent into the QORNN at iteration \(k\) should appear at the output at iteration \(k+D\). Figure 3: STQM performance for \(D=1\) and for different values of \(m_{\rm io}\) and \(m_{\rm mem}\). Fig. (a) shows the average fidelity between a desired output state and a state resulting from the QORNN. In Fig. (b), the corresponding results achieved in Ref. [1] are subtracted from our results. Finally, our comparison with the RC-inspired model proposed in Ref. [1], as shown in Fig. 5(b), indicates that 
## IV Quantum communication tasks ### Superadditivity In this section, we show that the QORNN can enhance the transmission rate of a quantum channel that exhibits memory effects. When a state is transmitted through such a'memory channel', it interacts with the channel's environment. As subsequent input states also interact with the environment, correlations arise between different channel uses. Contrary to memoryless channels, it is known that the transmission rate of memory channels can be enlarged by providing them with input states that are entangled over subsequent channel uses [15], a phenomenon that is better known as'superadditivity'. Here, we aim to create such entangled input states using our QORNN. Note the similarity with the definition of the entangler task. Now however, the goal is not to create maximal entanglement between the different states, but rather a specific type of entanglement that depends on the memory effects of the channel and that will increase the transmission rate. The setup for the'superadditivity task' is shown in Fig. 6. A QORNN with \(m_{\rm io}=1\) transforms vacuum states into an entangled quantum time series. Information is encoded by displacing each individual state of the series over a continuous distance in phase space. These distances are provided by a classical complex-valued information stream. Their probabilities follow a Gaussian distribution with zero mean and covariance matrix \(\mathbf{\gamma}_{\rm mod}\). The resulting time series is then sent through a memory channel. A number of \(K\) consecutive uses of the channel are modeled as a single parallel K-mode channel. The memory effects we consider here are modeled by correlated noise emerging from a Gauss-Markov process [13]. The environment has the following classical noise covariance matrix \(\mathbf{\gamma}_{\rm env}\): \[\mathbf{\gamma}_{\rm env}=\left(\begin{array}{cc}\mathbf{M}(\phi)&0\\ 0&\mathbf{M}(-\phi)\end{array}\right), \tag{1}\] \[M_{ij}(\phi)=N\phi^{|i-j|}. \tag{2}\] Here, \(\phi\in[0,1)\) denotes the strength of the nearest-neighbor correlations and \(N\in\mathbb{R}\) is the variance of the noise. In Eq. (1), \(\mathbf{M}(\phi)\) correlates the \(q\) quadratures, while \(\mathbf{M}(-\phi)\) anti-correlates the \(p\) quadratures. The transmission rate of the channel is calculated from the von Neumann entropy of the states that pass through the channel (i.e. from the Holevo information). Here we adopt the approach and the parameter values outlined in Ref. [13]. Note that the average photon number that is transmitted per channel use (\(\bar{n}\)) has a contribution from both the QORNN (i.e. from its squeezers) and from the displacer. Given a value for \(\bar{n}\), the transmission rate is maximized by training both the circuit \(\mathbf{S}\) and \(\mathbf{\gamma}_{\rm mod}\) under the energy constraint imposed by \(\bar{n}\). Nonzero squeezing val Figure 4: Setup for the entangler task for \(m_{\rm io}=1\) and spacing \(S=1\). Circles of different colors represent an input time series of vacuum states. Pulses of different colors are entangled output states. Figure 5: Entangler task performance for \(m_{\rm io}=1\) and for different values of \(S\) and \(m_{\rm mem}\). Fig. (a) shows the logarithmic negativity resulting from the QORNN. In Fig. (b), the corresponding results achieved in Ref. [1] are subtracted from our results.[1]. ues are obtained, leading to an information carrier. 
This highlights the counter-intuitive quantum nature of the superadditivity phenomenon: by spending a part of the available energy on the carrier generation rather than on classical modulation, one can reach higher transmission rates, something that has no classical analog. We now define a quality measure for the superadditivity task. The gain \(G\) is the ratio of the achieved transmission rate to the optimal transmission rate for separable (i.e. unentangled) input states. For 30 channel uses, Fig. 7 shows \(G\) as a function of the average photon number \(\bar{n}\) per use of the channel and for different values of the correlation parameter \(\phi\). We take the signal-to-noise ratio \(\mathrm{SNR}=\bar{n}/N=3\), where \(N\) is defined in Eq. (2). We observe that superadditivity is achieved, as the gain is higher than 1 and can reach as high as 1.10. These results agree with the optimal gain values that were analytically derived in prior studies of this memory channel (cfr. Fig. 7 of Ref. [16]). While a scheme already exists to generate carriers sequentially (i.e., generating carriers without creating all channel entries simultaneously) [14], our model also provides a simpler and more versatile alternative. Unlike the existing scheme, our model eliminates the need for Bell measurements, while achieving the same near-optimal gains. Additionally, our model is able to adapt to hardware imperfections, as they can be taken into account during training. ### Quantum channel equalization In this section, we use the QORNN as a model for a quantum memory channel. This time, we assume its memory effects to be unwanted (unlike Section IV.1) and compensate for them by sending the channel's output through a second QORNN instance. Fig. 8 shows the setup for the quantum channel equalization (QCE) task in more detail. An 'encoder' QORNN acts as a model for a memory channel. Because such channels normally do not increase the average photon number of transmitted states, we restrict the encoder's symplectic circuit to be orthogonal and denote it as \(\mathbf{O}_{\mathrm{enc}}\). This circuit is initialized randomly and will not be trained later. A second 'decoder' QORNN is trained to invert the transformation caused by the encoder. Similar to the STQM task, we will show that an orthogonal symplectic circuit \(\mathbf{O}_{\mathrm{dec}}\) is enough to lead to the desired performance, without requiring optical squeezers, which is Figure 6: Setup for the superadditivity task. A QORNN (with \(m_{\mathrm{io}}=1\)) transforms vacuum states into a quantum information carrier that is entangled across different time bins. A displacer (D) modulates this carrier to encode classical input information. The resulting signal is sent to a bosonic memory channel [13]. A number of \(K\) consecutive uses of the channel are modeled as a single parallel K-mode channel. The channel’s environment introduces noise (\(\blacklozenge\)) that leads to correlations between the successive channel uses. As a result of the entangled carrier, the transmission rate of the channel can be enhanced. Figure 7: Performance of the superadditivity task for 30 channel uses. The gain in transmission rate is plotted as a function of the average photon number per use of the channel (\(\bar{n}\)) and for different values of the noise correlation parameter (\(\phi\)). Additional parameters are chosen as follows: \(m_{\mathrm{io}}=m_{\mathrm{mem}}=1\), \(N=\bar{n}/3\). beneficial for experimental realizations. 
We will further denote the number of memory modes of the encoder and decoder as \(m_{\text{mem,enc}}\) and \(m_{\text{mem,dec}}\) respectively. Finally, we introduce a delay of \(D\) iterations between the input and output time series, similar to the definition of the STQM task (see Fig. 2). Assume for a moment that the input time series of the encoder only consists of a single state, i.e. we are looking at an impulse response of the system. By taking \(D>0\), we allow the decoder to receive multiple output states from the encoder before it has to reconstruct the original input state. The larger \(D\), the more of the information that was stored in the memory modes of the encoder will have reached the decoder by the time it needs to start the reconstruction process. A similar reasoning applies when the input time series consists of multiple states. This approach effectively addresses the challenge posed by the no-cloning principle, which prevents the decoder from accessing information stored in the encoder's memory or in the correlations between the encoder's memory and output. For the RC-inspired model of Ref. [1], only the case where \(D=0\) was considered. The no-cloning problem was addressed by redundantly encoding the input signals of the encoder. I.e., multiple copies of the same state were generated based on _classical_ input information and subsequently fed to the model through different modes ('spatial multiplexing') or at subsequent iterations ('temporal multiplexing'). Here, we show that taking \(D>0\) allows us to solve the QCE task without such redundancy, ultimately using each input state only once. This not only simplifies the operation procedure but also enables us to perform the QCE task without prior knowledge of the input states, which is often missing in real-world scenarios such as quantum key distribution. As these input states cannot be cloned, our approach significantly increases the practical use of the QCE task. It is worth noting that such an approach where \(D>0\) was also attempted for the RC-inspired model [17], but this was unsuccessful, which we attribute here to its limited number of trainable interactions. Additionally, we will show that the QCE performance of the QORNN is influenced by two key factors: the memory capacity of the decoder (as determined by the value of \(m_{\text{mem,dec}}\)), and the response of the encoder to a single input state (observed at the encoder's output modes). More formally, we measure the impulse response of the encoder by sending in a single squeezed vacuum state (with an average photon number of \(\bar{n}_{\text{impulse}}\)) and subsequently tracking the average photon number \(h_{\text{enc}}\) in its output modes over time. We denote the impulse response at iteration \(k\) by \(h_{\text{enc}}^{k}\). We now define: \[I_{\text{enc}}=\frac{1}{\bar{n}_{\text{impulse}}}\sum_{k=0}^{D}h_{\text{enc}}^{k} \tag{3}\] \(I_{\text{enc}}\) is a re-normalized cumulative sum that represents the fraction of \(\bar{n}_{\text{impulse}}\) that leaves the encoder before the decoder has to reconstruct the original input state. We now consider 20 randomly initialized encoders with \(m_{\text{mem,enc}}=2\). The input states are randomly sampled from a set of squeezed thermal states (more details in Section D of Methods).
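In code, Eq. (3) is simply a truncated, re-normalized cumulative sum. A minimal sketch (the helper name is ours):

```
import numpy as np

def i_enc(h_enc, n_impulse, D):
    """Eq. (3): fraction of the impulse photon number n_impulse that has
    left the encoder during iterations k = 0 .. D."""
    return float(np.sum(np.asarray(h_enc)[:D + 1]) / n_impulse)

h = 0.5 ** np.arange(10)              # toy impulse response
print(i_enc(h, n_impulse=2.0, D=3))   # fraction that reaches the decoder in time
```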
Fig. 9(a) shows the average fidelity between an input state of the encoder at iteration \(k\) and an output state of the decoder at iteration \(k+D\) as a function of \(I_{\text{enc}}\) and for different values of \(D-m_{\text{mem,dec}}\). We see that if \(D\leq m_{\text{mem,dec}}\) (blueish dots), the decoder potentially has enough memory, and the quality of the reconstruction of the input states increases as the decoder receives more information from the encoder (i.e. as \(I_{\text{enc}}\) increases). If \(D>m_{\text{mem,dec}}\) (reddish dots), we ask the decoder to wait for a longer time before starting to reconstruct the input, so that more information about the input is received and \(I_{\text{enc}}\) is higher; this explains why these dots are clustered on the right side of the diagram. On the other hand, if the delay is too long, it will exceed the memory of the decoder, and the input will start to be forgotten. This explains why the darkest dots, with the longest delays, have the worst performance. Note that \(D\) is a hyperparameter that can be chosen freely. Also note that the optimal choice for the value of \(D\) is not necessarily \(D=m_{\text{mem,dec}}\) (light grey dots); the actual optimum depends on the exact configuration of the encoder.

Figure 8: Setup for the QCE task when \(m_{\text{io}}=1\) and for a delay \(D\). Pulses of different colors represent a time series of quantum states. The encoder and decoder respectively consist of orthogonal symplectic networks \(\mathbf{O}_{\text{enc}}\) and \(\mathbf{O}_{\text{dec}}\). \(\mathbf{O}_{\text{enc}}\) is initialized randomly and kept fixed. \(\mathbf{O}_{\text{dec}}\) is trained such that an input state that is sent into the encoder at iteration \(k\) appears at the output of the decoder at iteration \(k+D\).

Fig. 9(b) shows a subset of the results in Fig. 9(a), where the optimal value of \(D\) is chosen for every encoder initialization and every value of \(m_{\rm mem,dec}\). We observe that the task can be tackled without redundantly encoded input signals and that the performance increases with both \(m_{\rm mem,dec}\) and \(I_{\rm enc}\). For \(m_{\rm mem,dec}=3\), all 20 encoders are equalized better than is done in Ref. [1].

### Experimental demonstration of quantum channel equalization

To show that our model can be implemented using currently existing hardware, we perform the QCE task on the recently introduced quantum processor Borealis [2]. Because of hardware restrictions, we only consider the case where \(m_{\rm mem,enc}=m_{\rm mem,dec}=1\). The setup for this task is visualized in Fig. 10. The input time series consists of squeezed vacuum states (whereas squeezed _thermal_ states were used in Section IV.2). Both the encoder and decoder are decomposed using beam splitters and phase shifters. Here we use the following definitions for those respective components: \[BS(\theta)=e^{\theta(a_{i}a_{j}^{\dagger}-a_{i}^{\dagger}a_{j})} \tag{4}\] \[PS(\phi)=e^{i\phi a_{i}^{\dagger}a_{i}} \tag{5}\] where \(a_{i}\) (\(a_{i}^{\dagger}\)) is the annihilation (creation) operator on mode \(i\). Note that the transmission amplitude of the beam splitter is \(\cos(\theta)\). Whereas Section IV.2 presented the results of training the decoder, here we will visualize a larger part of the cost function landscape (including sub-optimal decoder configurations).
Note that while evaluating a certain point of the cost function landscape, i.e. while processing a single time series, the parameters of the beam splitters and phase shifters are kept fixed. Hence, in Fig. 10, the PNR results are not influenced by the phase shifters outside of the loops (i.e. outside of the memory modes). These components can be disregarded. Consequently, we can parameterize the encoder (decoder) using only a beam splitter angle \(\theta_{\rm enc}\) (\(\theta_{\rm dec}\)) and a phase shift angle \(\phi_{\rm enc}\) (\(\phi_{\rm dec}\)). We detect output states with a photon-number-resolving (PNR) detector. In contrast to Section IV.2, we will not use fidelity as a quality measure, but we will estimate the performance of the QORNN using the following cost function: \[\mathrm{cost}=\sum_{k=0}^{K}|\bar{n}_{\rm out}^{k}-\bar{n}_{\rm target}^{k}|\, \tag{6}\] where \(\bar{n}_{\rm out}^{k}\) and \(\bar{n}_{\rm target}^{k}\) are the average photon numbers of the _actual_ output state and the _target_ output state at iteration \(k\) respectively. \(K\) is the total number of states in the input time series.
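In code, Eq. (6) is a single absolute-difference sum. The sketch below is our own; in particular, the alignment of the target series with the delay \(D\) (the target at iteration \(k\) equals the input of iteration \(k-D\)) is our reading of the task definition:

```
import numpy as np

def qce_cost(n_out, n_in, D):
    """Eq. (6): sum over k of |n_out^k - n_target^k|, where the target
    at iteration k is assumed to be the input of iteration k - D."""
    n_out = np.asarray(n_out, dtype=float)
    n_in = np.asarray(n_in, dtype=float)
    n_target = np.zeros_like(n_out)
    n_target[D:] = n_in[:len(n_out) - D]
    return float(np.abs(n_out - n_target).sum())
```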
The small-scale version of the QCE task (depicted in Fig. 10) is run on the photonic processor Borealis [2]. The Borealis setup is depicted in Fig. 11. It consists of a single optical squeezer that generates squeezed vacuum states. These states are sent to a sequence of three dynamically programmable loop-based interferometers of which the delay lines have different lengths, corresponding with propagation times of \(T\), \(6T\), and \(36T\) (\(T=36\mu s\)). For our experiment, we only use the two leftmost loop-based interferometers. More formally, we choose \(\theta=0\) for the rightmost BS and \(\phi=0\) for the rightmost PS.

Figure 9: QCE performance for 20 randomly initialized encoders that consist of 2 memory modes. The results are shown as a function of \(I_{\rm enc}\), i.e. the fraction of the photon number of a certain input state that reaches the decoder within \(D\) iterations. In Fig. (a), we consider \(D\in\{0,1,...,m_{\rm mem,dec}+m_{\rm mem,enc}+1\}\) and \(m_{\rm mem,dec}\in\{1,2,...,5\}\). In Fig. (b), the optimal value of \(D\) is chosen (given an encoder and \(m_{\rm mem,dec}\)).

Figure 10: Setup for the QCE task where both the encoder and decoder have a single memory mode. The matrices \({\bf O}_{\rm enc}\) and \({\bf O}_{\rm dec}\) are decomposed using beam splitters (BS) and phase shifters (PS). A squeezer (S) is used to generate input states, while the output states are measured using a photon-number-resolving (PNR) detector. As the PSs outside of the loops do not influence the PNR results, they can be disregarded.

Figure 11: Borealis setup from Ref. [2]. A squeezer (S) periodically produces single-mode squeezed states, resulting in a train of 216 states that are separated in time by \(T=36\mu s\). These states are sent through programmable phase shifters (PS) and beam splitters (BS). Each BS is connected to a delay line. From left to right in the figure, the lengths of these delay lines are \(T\), \(6T\), and \(36T\). The output states are measured by a photon-number-resolving (PNR) detector.

As is explained in more detail in Section E of Methods, we can virtually make the lengths of Borealis' delay lines equal. We do so by lowering the frequency at which we send input states and by setting \(\theta=0\) for the leftmost beam splitter in between input times. We first consider the case where \(\phi_{\rm enc}=\phi_{\rm dec}=0\), such that all phase shifters can be disregarded. Fig. 12 compares the experimental and numerical performance of the QCE task for \(D=1\). We observe that the task is solved perfectly when either the encoder or the decoder delays states by \(D=1\) and the other QORNN transmits states without delay. The performance is worst when both the encoder and decoder delay states with an equal number of iterations (either \(D=0\) or \(D=1\)). Indeed, the cost of Eq. (6) is then evaluated between randomly squeezed vacuum states. For beam splitter angles between \(0\) and \(\pi/2\), we find that the cost follows a hyperbolic surface. We observe good agreement between simulation and experiment. We now consider the case where \(\phi_{\rm enc}\) and \(\phi_{\rm dec}\) are not restricted to \(0\). Note that the Borealis setup (Fig. 11) does not have phase shifters inside the loops. However, as is explained in more detail in Section E of Methods, we can _virtually_ apply the phase shifts \(\phi_{\rm enc}\) and \(\phi_{\rm dec}\) _inside_ Borealis' loops by dynamically adjusting the phase shifters positioned _before_ those loops over time. Unfortunately, this procedure is hampered by hardware restrictions. The range of Borealis' phase shifters is restricted to \([-\pi/2,\pi/2]\). This is problematic, since in order to apply a single virtual phase shift value within a specific loop, the phase shifters preceding that loop need to be tuned dynamically across the complete \([-\pi,\pi]\) range. As a result, about half of the parameter values of the phase shifters cannot be applied correctly. When such a value falls outside of the allowed range, an additional phase shift of \(\pi\) is added artificially to ensure proper hardware operation. Fig. 13 shows the QCE (\(D=1\)) performance for three different encoders (corresponding to the three columns). We compare classical simulation results (rows 1 and 2) with experimental results (row 3). Whereas the restricted range of Borealis' phase shifters is taken into account in rows 2 and 3, it is not in row 1. While Fig. 12 (for \(\phi_{\mathrm{enc}}=\phi_{\mathrm{dec}}=0\)) showed that the decoder can easily be optimized by considering only \(\theta_{\mathrm{dec}}=0\) and \(\theta_{\mathrm{dec}}=\pi/2\), this optimization is less trivial when \(\phi_{\mathrm{enc}}\neq 0\) and \(\phi_{\mathrm{dec}}\neq 0\). We observe that the general trends of the cost function landscapes agree between rows 2 and 3, although row 3 is affected by experimental imperfections.

## V Conclusions

We have introduced a new model to process time series of quantum states in real time. We have probed two of its key functionalities: the linear memory capacity and the capability to entangle distinct states within a time series. By comparing with an existing RC-inspired model, we showed that these functionalities benefit from an RNN structure where all internal interactions can be trained. Moreover, the QORNN showed the ability to tackle two _quantum communication_ tasks, a domain where optics naturally is the leading hardware platform. First, we demonstrated that the QORNN can enhance the transmission rate of a quantum memory channel with Gauss-Markov noise by providing it with entangled input states. The enhancement proved to be near-optimal compared to previous analytical studies, while the generation scheme of the input states is simpler and can more easily adapt to hardware imperfections than previous approaches.
Figure 13: QCE performance of Borealis when \(D=1\). (a) Simulation where phase shifters are tunable without range restriction. (b) Simulation where phase shifters are tunable within \([0,\pi)\). (c) Experiment where phase shifters are tunable within \([0,\pi)\). The columns correspond with different encoders. Each plot shows the QCE cost (as defined in Eq. (6)) as a function of the decoder parameters \(\theta_{\mathrm{dec}}\) and \(\phi_{\mathrm{dec}}\).

Second, we showed that the QORNN can mitigate undesired memory effects in quantum channels. A small-scale experiment of this last task demonstrated the readiness of quantum optics to implement models like the QORNN.

## Methods

The classical simulations of our QORNN were carried out using the open-source photonic optimization library MrMustard [18]. This allows us to perform gradient-based optimizations of symplectic circuits and orthogonal symplectic circuits.

### STQM

As is explained in Ref. [1], a special purpose cost function \(f\) can be used to solve the STQM task. \(f\) can be defined as a function of the submatrices of the orthogonal symplectic matrix that is associated with the circuit \(\mathbf{O}\). With slight abuse of notation, we write the orthogonal symplectic _matrix_ that is associated with the _circuit_ \(\mathbf{O}\) as: \[\mathbf{O}=\left(\begin{array}{cc}\mathbf{A}&\mathbf{B}\\ \mathbf{C}&\mathbf{D}\end{array}\right). \tag{7}\] In contrast to Eq. (1), the quadratures are ordered here as follows: \((q_{1},p_{1},q_{2},p_{2},q_{3},p_{3},...)\), where \(q_{i}\) and \(p_{i}\) are the quadratures of mode \(i\). When the delay is nonzero (\(D>0\)), Ref. [1] shows that the goal of the STQM task can be redefined as the following system of equations: \[\begin{cases}\mathbf{D}\approx\mathbf{0},\\ \mathbf{C}\mathbf{A}^{D-1}\mathbf{B}\approx\mathbf{I},\\ \mathbf{C}\mathbf{A}^{t}\mathbf{B}\approx\mathbf{0},\quad\forall\,t\neq D-1.\end{cases} \tag{8}\] Hence, we choose the following cost function: \[\begin{split}f\left(\mathbf{O}\right)&=0.5\|\mathbf{D}\|+5\left\|\mathbf{C}\mathbf{A}^{D-1}\mathbf{B}-\mathbf{I}\right\|\\ &+0.5\sum_{\begin{subarray}{c}0\leq t<T\\ t\neq D-1\end{subarray}}\|\mathbf{C}\mathbf{A}^{t}\mathbf{B}\|,\end{split} \tag{9}\] where \(\|\cdot\|\) is the Frobenius norm. Note that the last sum in Eq. (9) was not used in Ref. [1], as these terms appeared to worsen the results. However, we have found that their inclusion is beneficial in our work. A numerical parameter sweep showed that the factors \(0.5\), \(5\), and \(0.5\) for the terms in Eq. (9), together with a value of \(T=10\), yield good results.
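Eq. (9) is straightforward to transcribe. The sketch below is ours; the assumption that the first \(2m_{\rm mem}\) quadratures of \(\mathbf{O}\) belong to the memory modes (so that \(\mathbf{B}\) maps input to memory and \(\mathbf{C}\) maps memory to output, as Eq. (8) requires) is our reading of Eq. (7):

```
import numpy as np
from numpy.linalg import matrix_power, norm

def stqm_cost(O, m_mem, D, T=10):
    """Special-purpose STQM cost of Eq. (9), with Frobenius norms.

    Assumes the first 2*m_mem rows/columns of O act on the memory modes:
    A: mem->mem, B: io->mem, C: mem->io, Dblk: io->io."""
    k = 2 * m_mem
    A, B = O[:k, :k], O[:k, k:]
    C, Dblk = O[k:, :k], O[k:, k:]
    I = np.eye(O.shape[0] - k)
    cost = 0.5 * norm(Dblk) + 5.0 * norm(C @ matrix_power(A, D - 1) @ B - I)
    cost += 0.5 * sum(norm(C @ matrix_power(A, t) @ B)
                      for t in range(T) if t != D - 1)
    return cost
```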
The learning rate is initialized at a value of \(0.01\) and decreased throughout the training procedure until the cost function converges. For each value of \((m_{\text{io}},m_{\text{mem}})\) in Fig. 3, the training is repeated for an ensemble of \(100\) initial conditions of the QORNN. After training, a test score is calculated as the average fidelity over a test set of \(100\) states. Only the lowest test score in the ensemble of different initial conditions is kept, as this corresponds to a degree of freedom that is exploitable in practice. In order to evaluate the test score, squeezed thermal states are used as input. In the case of a single mode, we first generate a thermal state with an expected number of photons \(\bar{n}_{\text{th}}\). Afterwards, the state is squeezed according to a squeezing magnitude \(r_{\text{sq}}\) and a squeezing phase \(\phi_{\text{sq}}\). The relevant parameters of this state generation process are chosen uniformly within the following intervals: \(\bar{n}_{\text{th}}\in[0,10)\), \(r_{\text{sq}}\in[0,1)\) and \(\phi_{\text{sq}}\in[0,2\pi)\). If \(m_{\text{io}}>1\), the state generation is altered as follows. We first generate a product state of single-mode thermal states. The \(M\)-mode product state is then transformed by a random orthogonal symplectic circuit, followed by single-mode squeezers on all modes. A second random orthogonal symplectic circuit transforms the multi-mode state to the final input state.

### Entangler task

We evaluate the cost function as follows. We send vacuum states to the QORNN for \(10\) iterations, such that the model undergoes a transient process. We numerically check that this transient process has ended by monitoring the convergence of the average photon number in the output mode. We then calculate the logarithmic negativity of the \(2\)-mode state formed by the output at iterations \(10\) and \(10+S\). The logarithmic negativity of a \(2\)-mode state is calculated from the determinants of the state's covariance matrix (and its submatrices) as described in Ref. [12]. Note that we do not calculate the negativity from the symplectic eigenvalues of the covariance matrix (as is done for example in Ref. [19]), which is beneficial for numerical optimization. The cost function is defined as the negative of the logarithmic negativity. The training is performed using \(4000\) updates of \(\mathbf{S}\) and with a learning rate of \(0.01\). For each value of \((S,m_{\text{mem}})\) in Fig. 5, the training is repeated for an ensemble of \(100\) initial conditions of \(\mathbf{S}\). Again, only the lowest score in the ensemble is kept.
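As an illustration of this determinant-based route, here is a minimal sketch for a two-mode Gaussian state, under the convention that the vacuum covariance matrix is the identity (the exact convention of Ref. [12] may differ by constant factors):

```
import numpy as np

def log_negativity(gamma):
    """Logarithmic negativity of a two-mode Gaussian state, computed from
    determinants of the 4x4 covariance matrix and its 2x2 blocks, i.e.
    from the smallest symplectic eigenvalue of the partial transpose."""
    A, B, C = gamma[:2, :2], gamma[2:, 2:], gamma[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    nu_min = np.sqrt((delta - np.sqrt(delta**2 - 4 * np.linalg.det(gamma))) / 2)
    return max(0.0, -np.log(nu_min))

# sanity check on a two-mode squeezed vacuum with squeezing r: E_N = 2r
r = 0.5
ch, sh = np.cosh(2 * r), np.sinh(2 * r)
gamma = np.array([[ch, 0, sh, 0], [0, ch, 0, -sh],
                  [sh, 0, ch, 0], [0, -sh, 0, ch]])
print(log_negativity(gamma))   # prints 1.0 (= 2r)
```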
### Superadditivity task

For more details on the calculation of the optimal transmission rate of the quantum channel with Gauss-Markov noise, both with and without entangled input states, we refer to Ref. [16]. For the case of entangled input, the optimization problem defined by Eq. 9 of that reference is solved under the restriction that the covariance matrix \(\mathbf{\gamma}_{\text{in}}\) is produced by the QORNN. \(\mathbf{\gamma}_{\rm mod}\) is a general covariance matrix that takes into account the required input energy constraint (Eq. 13 in Ref. [16]). The cost function is defined as the negative of the transmission rate. The training is performed using 1000 updates of \(\mathbf{S}\) and with a learning rate of 0.5.

### Quantum channel equalization

In contrast to the STQM task, here we use the negative of the fidelity (between the desired output and the actual output) as a cost function, both during training and testing. The input states are single-mode squeezed thermal states where \(\bar{n}_{\rm th}\), \(r_{\rm sq}\), and \(\phi_{\rm sq}\) are chosen uniformly within the following ranges: \(\bar{n}_{\rm th}\in[0,10)\), \(r_{\rm sq}\in[0,1)\) and \(\phi_{\rm sq}\in[0,2\pi)\). \(\mathbf{O}_{\rm enc}\) and \(\mathbf{O}_{\rm dec}\) are initialized randomly. Every epoch, 30 squeezed thermal states are sent through the QORNNs. In order to account for the transient process at the start of each epoch, the fidelity is only averaged over the last 20 output states. The training is performed using 2000 updates of \(\mathbf{O}_{\rm dec}\) and with a learning rate of 0.05. In Fig. 9, the training is repeated for an ensemble of 20 initial conditions of \(\mathbf{O}_{\rm dec}\) for every value of \((D,m_{\rm mem,dec})\) and for each encoder initialization. After training, a test score is calculated by simulating a transient process during 10 iterations and averaging the fidelity over 100 subsequent iterations. The lowest test score in each ensemble is kept.

### Experimental demonstration of quantum channel equalization

This section describes the technical details of both the simulations and the experiments shown in Section IV.3. We first describe how the QORNN can be mapped onto the Borealis setup. Afterwards, we describe the input state generation, the post-processing of the PNR results, and some other relevant parameters for the simulations and experiments.

#### iv.5.1 Mapping the QORNN model on Borealis

We map our model on the Borealis setup of Fig. 11. We choose \(\phi=0\) and \(\theta=0\) for the rightmost phase shifter and beam splitter respectively, such that these components and their corresponding delay line can be disregarded. After applying these changes, it is clear that the Borealis setup differs from the setup in Fig. 10 in two ways. First, the remaining delay lines of the Borealis setup have different lengths, which is not the case in Fig. 10. Second, Borealis does not have phase modulators in the delay lines. We can circumvent both of these problems as follows. First, we can virtually increase the length of the leftmost delay line (i.e. the encoder delay line) from \(T\) to \(6T\) by performing two changes: (1) we lower the frequency at which we input states from \(\frac{1}{T}\) to \(\frac{1}{6T}\) and (2) we store the encoder memory state as long as we do not input a new state. More formally, we do the following. Before generating a new input state, we put the values of the beam splitter angles to \(\theta_{\rm enc}\) (for the encoder) and \(\theta_{\rm dec}\) (for the decoder). Once a state enters the encoder, it interacts with the memory state, leading to an output state and a new memory state for the encoder. The output state of the encoder is sent to the decoder. We now put \(\theta_{\rm enc}=0\) for a period of \(6T\), such that no new inputs can enter the loop. Hence, the new memory state is stored in the delay line of the encoder. During this period, no states are generated and no interactions occur between the encoder's memory mode and input/output mode. Afterward, a new state is generated and the process is repeated. Second, we can apply virtual phase shifts in the delay lines of Fig. 11 by using the phase shifters in front of the loops. To do this, we utilize an existing functionality of Borealis' control software. This functionality (implemented in StrawberryFields [20] as compilers.tdm.Borealis.update_params) is normally used to compensate for unwanted phase shifts in the delay lines. Given such a static unwanted phase shift, the compensation is performed by adjusting the phase shifters in front of the loops _over time_, such that they apply different phase shifts at different iterations. The actual unwanted phase shifts that are present in the hardware can be measured before running an experiment. We now choose to operate the setup _as if_ there were phase offsets \(\phi_{\rm 1,unwanted}-\phi_{\rm enc}\) and \(\phi_{\rm 2,unwanted}-\phi_{\rm dec}\) in the first two delay lines respectively. Hence, we apply net virtual phase shifts in the loop with values \(\phi_{\rm enc}\) and \(\phi_{\rm dec}\). Unfortunately, this procedure is undercut by a hardware restriction of Borealis.
The range of these phase shifters is limited to \([-\pi/2,\pi/2]\), which means that about half of the desired phase shifts cannot be implemented correctly. When a phase shift falls outside of the allowed range, an additional phase shift of \(\pi\) is added artificially to ensure proper hardware operation. Listing 1 shows the Python code for this process.

#### iv.5.2 Generating input states, post-processing PNR results, and simulation parameters

Both for the computer simulations and for the experiments presented in Section IV.3, the input states are generated by a single-mode optical squeezer that operates on a vacuum state. In every iteration, the squeezing magnitude is chosen randomly as either 0 or 1, effectively encoding bit strings into the quantum time series. It is worth noting that public online access to Borealis does not allow for multiple squeezing magnitudes within one experiment. Our experiments were therefore performed on-site. The output of Borealis consists of the average photon numbers gathered over 216 iterations. Due to the mapping described in the previous section, only 36 of these average photon numbers can be used. To account for the transient process, the results of the first 10 of these 36 output states are disregarded. The PNR results of the remaining 26 output states are used to estimate the decoder's QCE performance. For the _simulations_, 10 runs (each using a different series of input states) are performed per set of gate parameter values of the QORNNs. The cost is therefore averaged over 260 output states. For the _experiments_ shown in Fig. 12 and Fig. 13, the number of runs per data point is 7 and 2 respectively. In hardware experiments, non-idealities occur that are not considered in the classical simulations. We compensate for two such non-idealities by re-scaling the experimentally obtained average photon numbers. On the one hand, there are optical losses. On the other hand, the optical squeezer suffers from hardware imperfections that lead to effective squeezing magnitudes that decrease over time when the pump power is kept constant. The re-scaling is performed as follows. At each iteration, we calculate the average of the number of detected photons over all experiments. We fit a polynomial of degree 2 to these averages. Given an experimental result, i.e. a series of average photon numbers, we then divide each value by the value of the polynomial at that iteration. The resulting series is rescaled such that the average of the entire series matches the average value of the entire series of input states.

###### Acknowledgements.

We would like to thank Filippo M. Miatto for his insights, feedback, and assistance with software-related challenges. We are also grateful to Johannes Nokkala for sharing his expertise, and to Lars S. Madsen, Fabian Laudenbach, and Jonathan Lavoie for making the hardware experiment possible. This work was performed in the context of the Flemish FWO project G006020N and the Belgian EOS project G0H1422N. It was also co-funded by the European Union in the Prometheus Horizon Europe project.
2309.11864
A Golub-Welsch version for simultaneous Gaussian quadrature
The zeros of type II multiple orthogonal polynomials can be used for quadrature formulas that approximate $r$ integrals of the same function $f$ with respect to $r$ measures $\mu_1,\ldots,\mu_r$ in the spirit of Gaussian quadrature. This was first suggested by Borges in 1994, even though he does not mention multiple orthogonality. We give a method to compute the quadrature nodes and the quadrature weights which extends the Golub-Welsch approach using the eigenvalues and left and right eigenvectors of a banded Hessenberg matrix. This method was already described by Coussement and Van Assche in 2005 but it seems to have gone unnoticed. We describe the result in detail for $r=2$ and give some examples.
Walter Van Assche
2023-09-21T08:06:50Z
http://arxiv.org/abs/2309.11864v3
# A Golub-Welsch version for simultaneous Gaussian quadrature

###### Abstract

The zeros of type II multiple orthogonal polynomials can be used for quadrature formulas that approximate \(r\) integrals of the same function \(f\) with respect to \(r\) measures \(\mu_{1},\ldots,\mu_{r}\) in the spirit of Gaussian quadrature. This was first suggested by Borges in 1994. We give a method to compute the quadrature nodes and the quadrature weights which extends the Golub-Welsch approach using the eigenvalues and left and right eigenvectors of a banded Hessenberg matrix. This method was already described by Coussement and Van Assche in 2005 but it seems to have gone unnoticed. We describe the result in detail for \(r=2\) and give some examples.

**Keywords:** Simultaneous Gauss quadrature, multiple orthogonal polynomials, banded Hessenberg matrix.

**MSC Classification:** primary 41A55, 65D32; secondary 15A18, 33C45, 41A21, 42C05

## 1 Introduction

Suppose a real function \(f\) is given and one wants to compute the integrals \[\int_{\mathbb{R}}f(x)\,d\mu_{j}(x),\qquad 1\leq j\leq r\] for \(r\) positive measures \(\mu_{1},\ldots,\mu_{r}\) with only \(N\) function evaluations. In 1994 Borges [3] suggested Gauss-like quadrature rules and showed that one gets an optimal set of quadrature rules if one takes the \(N\) quadrature nodes at the zeros of a polynomial of degree \(N\) which satisfies orthogonality conditions with respect to the measures \(\mu_{1},\ldots,\mu_{r}\). Such a polynomial is now known as a type II multiple orthogonal polynomial. The quadrature rules are of the form \[\sum_{k=1}^{N}\lambda_{k,N}^{(j)}f(x_{k,N})\approx\int_{\mathbb{R}}f(x)\,d\mu_{j}(x),\] so that the integral for the measure \(\mu_{j}\) corresponds to quadrature weights \(\lambda_{k,N}^{(j)}\). An important numerical aspect is then how to compute the quadrature nodes \(\{x_{k,N},1\leq k\leq N\}\) and the quadrature weights \(\{\lambda_{k,N}^{(j)},1\leq k\leq N\}\) for \(1\leq j\leq r\) in an efficient and stable way. In 1969 Golub and Welsch [10] showed that for Gaussian quadrature (the case when \(r=1\)) one can use the Jacobi matrix for the corresponding orthogonal polynomials. The eigenvalues of the \(N\times N\) Jacobi matrix are the quadrature nodes and the first components of the normalized eigenvectors are related to the quadrature weights. This approach was extended to simultaneous Gaussian quadrature (in the sense of Borges) by Coussement and Van Assche [5] but this extension seems to have gone unnoticed. The role of the Jacobi matrix is now played by a banded Hessenberg matrix for which the eigenvalues are the required quadrature nodes, and one needs the right and the left eigenvectors to find the quadrature weights. The goal of this paper is to give a simplified proof of this result for two measures \(\mu_{1}\) and \(\mu_{2}\). We will give some examples involving measures \(d\mu_{1}(x)=w_{1}(x)\,dx\) and \(d\mu_{2}(x)=w_{2}(x)\,dx\) for which the weight functions are related to modified Bessel functions. The structure of the paper is as follows. In Section 2 we will give the necessary background about multiple orthogonal polynomials and their relation with Hermite-Pade approximation. In Section 3 we explain what we mean by simultaneous Gaussian quadrature.
Multiple orthogonal polynomials satisfy various recurrence relations which we give in Section 4, and in particular the multiple orthogonal polynomials on the stepline (near the diagonal) satisfy a linear recurrence relation of order \(r+1\), which gives rise to a banded Hessenberg matrix with one diagonal above the main diagonal and \(r\) diagonals below the main diagonal. The eigenvalues of this Hessenberg matrix correspond to the zeros of the type II multiple orthogonal polynomial. The main theorem (Theorem 2) will be proved in Section 5. We will use the fact that the quadrature weights can be expressed in terms of the Christoffel-Darboux kernel for multiple orthogonal polynomials. One important difference is that the Christoffel-Darboux kernel for multiple orthogonal polynomials contains both the type I and the type II multiple orthogonal polynomials, and hence is not symmetric in the two variables. Two numerical examples involving modified Bessel function weights are worked out in Section 6.

## 2 Multiple orthogonal polynomials

Multiple orthogonal polynomials are polynomials in one variable with orthogonality relations with respect to \(r\) measures \((\mu_{1},\mu_{2},\ldots,\mu_{r})\) (on the real line in this paper). We use a multi-index \(\vec{n}=(n_{1},n_{2},\ldots,n_{r})\in\mathbb{N}^{r}\) of size \(|\vec{n}|=n_{1}+n_{2}+\ldots+n_{r}\).

**Definition 1** (type II MOP).: _A type II multiple orthogonal polynomial \(P_{\vec{n}}\) is a polynomial of degree \(\leq|\vec{n}|\) for which_ \[\int_{\mathbb{R}}P_{\vec{n}}(x)x^{k}\,d\mu_{1}(x)=0,\qquad 0\leq k\leq n_{1}-1,\] \[\vdots\] \[\int_{\mathbb{R}}P_{\vec{n}}(x)x^{k}\,d\mu_{r}(x)=0,\qquad 0\leq k\leq n_{r}-1.\]

**Definition 2** (type I MOP).: _Type I multiple orthogonal polynomials are in a vector \((A_{\vec{n},1},\ldots,A_{\vec{n},r})\), with \(\deg A_{\vec{n},j}\leq n_{j}-1\), and they satisfy_ \[\sum_{j=1}^{r}\int_{\mathbb{R}}A_{\vec{n},j}(x)x^{k}\,d\mu_{j}(x)=0,\qquad 0\leq k\leq|\vec{n}|-2.\]

The orthogonality conditions for the type II multiple orthogonal polynomial give a homogeneous system of \(|\vec{n}|\) linear equations for the \(|\vec{n}|+1\) unknown coefficients of the polynomial \(P_{\vec{n}}\). In a similar way, the orthogonality conditions for the type I multiple orthogonal polynomials give a homogeneous linear system of \(|\vec{n}|-1\) equations for the \(|\vec{n}|\) unknown coefficients of the polynomials \(A_{\vec{n},j}\) (\(1\leq j\leq r\)). When the type I and type II multiple orthogonal polynomials are unique up to a multiplicative factor, then we say that the multi-index \(\vec{n}\) is normal. When this happens, the type II polynomial \(P_{\vec{n}}\) has degree \(|\vec{n}|\) and we will take the leading coefficient to be one. For a normal multi-index we normalize the type I polynomials by setting \[\sum_{j=1}^{r}\int_{\mathbb{R}}x^{|\vec{n}|-1}A_{\vec{n},j}(x)\,d\mu_{j}(x)=1. \tag{2.1}\] In this paper we assume this is the case for all multi-indices \(\vec{n}\), in which case we deal with a perfect system. Hence our type II multiple orthogonal polynomials are monic and the type I multiple orthogonal polynomials satisfy (2.1).
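For small multi-indices, the defining linear systems can be solved directly from the moments of the measures. The following NumPy sketch is our own illustration (not code from the paper); for concreteness it uses the moments \(\int_{0}^{\infty}x^{n}\rho_{\nu}(x)\,dx=\Gamma(n+\nu+1)\Gamma(n+1)\) of the Bessel weights that appear in Section 2.2 below, with \(\nu=0\):

```
import numpy as np
from math import factorial

def type2_mop(n, m, mom1, mom2):
    """Monic type II multiple orthogonal polynomial P_{n,m} for r = 2.

    Solves the (n+m) x (n+m) linear system of Definition 1, where
    mom1[j] and mom2[j] are the moments of mu_1 and mu_2. Returns the
    coefficients of P_{n,m} in ascending order of powers."""
    N = n + m
    M = np.zeros((N, N))
    rhs = np.zeros(N)
    for k in range(n):                 # int P(x) x^k dmu_1 = 0 for k < n
        M[k] = [mom1[i + k] for i in range(N)]
        rhs[k] = -mom1[N + k]
    for k in range(m):                 # int P(x) x^k dmu_2 = 0 for k < m
        M[n + k] = [mom2[i + k] for i in range(N)]
        rhs[n + k] = -mom2[N + k]
    return np.append(np.linalg.solve(M, rhs), 1.0)   # monic leading coefficient

mom1 = [factorial(j) ** 2 for j in range(12)]                # rho_0 moments
mom2 = [factorial(j + 1) * factorial(j) for j in range(12)]  # rho_1 moments
print(type2_mop(2, 2, mom1, mom2))   # coefficients of P_{2,2}; its zeros are the nodes
```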
Multiple orthogonal polynomials appear in simultaneous rational approximation of \(r\) functions \((f_{1},f_{2},\ldots,f_{r})\), where \[f_{j}(z)=\int_{\mathbb{R}}\frac{d\mu_{j}(x)}{z-x},\qquad z\in\mathbb{C}\setminus\mathbb{R}.\]

**Problem 1** (Type II Hermite-Pade).: _Find a polynomial \(P_{\vec{n}}\) of degree \(\leq|\vec{n}|\) and polynomials \(Q_{\vec{n},j}\) such that for \(1\leq j\leq r\)_ \[P_{\vec{n}}(z)f_{j}(z)-Q_{\vec{n},j}(z)=\mathcal{O}\left(\frac{1}{z^{n_{j}+1}}\right),\qquad z\to\infty.\]

The common denominator \(P_{\vec{n}}\) is the type II multiple orthogonal polynomial and the numerator polynomials are given by \[Q_{\vec{n},j}(z)=\int_{\mathbb{R}}\frac{P_{\vec{n}}(z)-P_{\vec{n}}(x)}{z-x}\,d\mu_{j}(x).\] The error can also be expressed in terms of the type II multiple orthogonal polynomial: \[P_{\vec{n}}(z)f_{j}(z)-Q_{\vec{n},j}(z)=\int_{\mathbb{R}}\frac{P_{\vec{n}}(x)}{z-x}\,d\mu_{j}(x).\]

**Problem 2** (Type I Hermite-Pade).: _Find polynomials \(A_{\vec{n},j}\) of degree \(\leq n_{j}-1\) and a polynomial \(B_{\vec{n}}\) such that_ \[\sum_{j=1}^{r}A_{\vec{n},j}(z)f_{j}(z)-B_{\vec{n}}(z)=\mathcal{O}\left(\frac{1}{z^{|\vec{n}|}}\right),\qquad z\to\infty.\]

The polynomials \(A_{\vec{n},1},\ldots,A_{\vec{n},r}\) are the type I multiple orthogonal polynomials and the polynomial \(B_{\vec{n}}\) is \[B_{\vec{n}}(z)=\sum_{j=1}^{r}\int_{\mathbb{R}}\frac{A_{\vec{n},j}(z)-A_{\vec{n},j}(x)}{z-x}\,d\mu_{j}(x).\] The error is given by \[\sum_{j=1}^{r}A_{\vec{n},j}(z)f_{j}(z)-B_{\vec{n}}(z)=\sum_{j=1}^{r}\int_{\mathbb{R}}\frac{A_{\vec{n},j}(x)}{z-x}\,d\mu_{j}(x).\]

Here is some (very incomplete) history indicating important progress. Hermite-Pade approximation was introduced by Charles Hermite in 1873, who used it in his proof that \(e\) is transcendental [12]. The case \(r=1\) was then investigated in more detail by Hermite's student Henri Pade (Pade approximation, Pade table) [16]. One important system for which all the multi-indices are normal (a perfect system) was described by Aurel Angelescu [1] who used \(r\) measures (weights) supported on \(r\) disjoint intervals. This is nowadays known as an Angelesco system. In 1934 Kurt Mahler [14] investigated perfect systems, but his work was published much later, in 1968. Another important system of measures was introduced by Evgeni Nikishin in 1979 [15]. For \(r=2\) the measures \((\mu_{1},\mu_{2})\) are absolutely continuous on an interval \(\Delta_{0}\) with weight functions \((w_{1},w_{2})\) for which \[\frac{w_{2}(x)}{w_{1}(x)}=\int_{\Delta_{1}}\frac{d\sigma(t)}{x-t},\] where \(\sigma\) is a positive measure on an interval \(\Delta_{1}\) which is disjoint from \(\Delta_{0}\). Such a system (and its recursive extensions to \(r>2\)) are now known as Nikishin systems and it has been shown that these are perfect systems [8]. Gonchar and Rakhmanov studied the convergence of Hermite-Pade approximants in [11], for which they used an extremal problem in logarithmic potential theory for a vector of measures. The idea of simultaneous Gaussian quadrature was introduced by Carlos F. Borges in [3]. Van Assche, Geronimo and Kuijlaars formulated a Riemann-Hilbert problem for multiple orthogonal polynomials in 2001 [18]. This allows one to obtain the asymptotic behavior of multiple orthogonal polynomials.

### Examples extending classical orthogonal polynomials

Many examples of multiple orthogonal polynomials are known.
* Multiple Hermite polynomials: the weights are normal densities with non-zero mean \[w_{j}(x)=e^{-x^{2}+c_{j}x},\quad c_{i}\neq c_{j},\qquad x\in(-\infty,\infty).\]
* Multiple Laguerre polynomials: the weights \[w_{j}(x)=x^{\alpha_{j}}e^{-x},\quad\alpha_{i}-\alpha_{j}\notin\mathbb{Z},\ \alpha_{j}>-1,\qquad x\in[0,\infty)\] correspond to multiple Laguerre polynomials of the first kind. The weights \[w_{j}(x)=x^{\alpha}e^{-c_{j}x},\quad c_{i}\neq c_{j},\ c_{j}>0,\ \alpha>-1,\qquad x\in[0,\infty),\] correspond to multiple Laguerre polynomials of the second kind.
* Jacobi-Pineiro polynomials: the weights are \[w_{j}(x)=x^{\alpha_{j}}(1-x)^{\beta},\quad\alpha_{i}-\alpha_{j}\notin\mathbb{Z},\ \alpha_{j},\beta>-1,\qquad x\in[0,1].\]

For these extensions of the classical orthogonal polynomials one knows a differential equation for the multiple orthogonal polynomials, a system of nearest neighbor recurrence relations, the asymptotic distribution of the zeros, etc. Other examples include weights in terms of special functions satisfying a second order differential equation.

### Modified Bessel functions \((K_{\nu},K_{\nu+1})\)

The modified Bessel function \(K_{\nu}\) is given as \[K_{\nu}(x)=\frac{1}{2}\left(\frac{x}{2}\right)^{\nu}\int_{0}^{\infty}\exp\left(-t-\frac{x^{2}}{4t}\right)t^{-\nu-1}\,dt.\] It satisfies the modified Bessel equation \[x^{2}y^{\prime\prime}+xy^{\prime}-(x^{2}+\nu^{2})y=0.\] Introduce the functions \[\rho_{\nu}(x)=2x^{\nu/2}K_{\nu}(2\sqrt{x}),\qquad x>0,\] which have the moments \[\int_{0}^{\infty}x^{n}\rho_{\nu}(x)\,dx=\Gamma(n+\nu+1)\Gamma(n+1).\] Van Assche and Yakubovich [20] and Ben Cheikh and Douak [2] obtained the multiple orthogonal polynomials for the weights \((w_{1},w_{2})=x^{\alpha}(\rho_{\nu},\rho_{\nu+1})\), where \(\alpha>-1\) and \(\nu\geq 0\). The vector Pearson equation is \[\left[x\begin{pmatrix}w_{1}\\ w_{2}\end{pmatrix}\right]^{\prime}=\begin{pmatrix}\alpha+\nu+1&-1\\ -x&\alpha+1\end{pmatrix}\begin{pmatrix}w_{1}\\ w_{2}\end{pmatrix}.\] One has \[\frac{\rho_{\nu}(x)}{\rho_{\nu+1}(x)}=\frac{1}{\pi^{2}}\int_{0}^{\infty}\frac{1}{(x+s)[J_{\nu+1}^{2}(2\sqrt{s})+Y_{\nu+1}^{2}(2\sqrt{s})]}\frac{ds}{s},\] so that this is a Nikishin system. Denote the type I function by \[q_{n,m}^{\alpha}(x)=A_{n,m}(x)\rho_{\nu}(x)+B_{n,m}(x)\rho_{\nu+1}(x),\] and the type II multiple orthogonal polynomials by \(p_{n,m}^{\alpha}\).

**Property 1** (Rodrigues formula).: _One has_ \[x^{\alpha}q_{n,n}^{\alpha}=\frac{d^{2n-1}}{dx^{2n-1}}[x^{2n-1+\alpha}\rho_{\nu}(x)],\] \[x^{\alpha}q_{n,n-1}^{\alpha}=\frac{d^{2n-2}}{dx^{2n-2}}[x^{2n+\alpha-2}\rho_{\nu}(x)].\]

**Property 2** (recurrence relation).: _Let \(P_{2n}=p_{n,n}^{\alpha}\) and \(P_{2n+1}=p_{n+1,n}^{\alpha}\), then_ \[xP_{n}(x)=P_{n+1}(x)+b_{n}P_{n}(x)+c_{n}P_{n-1}(x)+d_{n}P_{n-2}(x)\] _with_ \[b_{n}=(n+\alpha+1)(3n+\alpha+2\nu)-(\alpha+1)(\nu-1),\] \[c_{n}=n(n+\alpha)(n+\alpha+\nu)(3n+2\alpha+\nu),\] \[d_{n}=n(n-1)(n+\alpha)(n+\alpha-1)(n+\alpha+\nu)(n+\alpha+\nu-1).\] _Let \(Q_{2n}=q_{n,n}^{\alpha}\) and \(Q_{2n-1}=q_{n,n-1}^{\alpha}\), then_ \[xQ_{n}(x)=Q_{n-1}(x)+b_{n-1}Q_{n}(x)+c_{n}Q_{n+1}(x)+d_{n+1}Q_{n+2}(x).\]

### Modified Bessel functions \((I_{\nu},I_{\nu+1})\)

Another solution of the modified Bessel equation is given by the modified Bessel function \(I_{\nu}(x)\). In [4] we considered \[w_{1}(x)=x^{\nu/2}I_{\nu}(2\sqrt{x})e^{-cx},\quad w_{2}(x)=x^{(\nu+1)/2}I_{\nu+1}(2\sqrt{x})e^{-cx},\quad x>0,\] with \(\nu>-1\) and \(c>0\). The case \(c=1\) was first studied in [7].
These weights are related to the noncentral \(\chi^{2}\)-distribution: \[p_{\chi^{2}}^{(\nu,\lambda)}(x)=\frac{1}{2}\left(\frac{x}{\lambda}\right)^{(\nu-2)/4}I_{(\nu-2)/2}(\sqrt{\lambda x})e^{-(\lambda+x)/2},\qquad x>0,\] with \(\lambda>0\) and \(\nu\in\mathbb{N}\). The Pearson equation is \[\left[x\begin{pmatrix}w_{1}\\ w_{2}\end{pmatrix}\right]^{\prime}=\begin{pmatrix}\nu-cx&1\\ x&-cx\end{pmatrix}\begin{pmatrix}w_{1}\\ w_{2}\end{pmatrix}.\] One has \[\frac{w_{1}(x)}{w_{2}(x)}=\frac{\nu+1}{x}+\sum_{n=1}^{\infty}\frac{1}{x+j_{\nu+1,n}^{2}/4}\frac{J_{\nu+2}(j_{\nu+1,n})}{J_{\nu+1}^{\prime}(j_{\nu+1,n})},\] so that we again have a Nikishin system. Let \[Q_{2n}^{\nu}(x)=q_{n,n}^{\nu}(x),\quad Q_{2n+1}^{\nu}(x)=q_{n+1,n}^{\nu}(x),\] and similarly \[P_{2n}^{\nu}(x)=p_{n,n}^{\nu}(x),\quad P_{2n+1}^{\nu}(x)=p_{n+1,n}^{\nu}(x).\]

**Property 3** (raising-lowering).: _For every \(\nu>-1\) and \(c>0\) one has_ \[\frac{d}{dx}Q_{n}^{\nu+1}(x)=Q_{n+1}^{\nu}(x),\quad\text{ and }\quad\frac{d}{dx}P_{n}^{\nu}(x)=nP_{n-1}^{\nu+1}(x).\]

**Property 4** (recurrence relation).: _One has_ \[xP_{n}^{\nu}(x)=P_{n+1}^{\nu}(x)+b_{n}P_{n}^{\nu}(x)+c_{n}P_{n-1}^{\nu}(x)+d_{n}P_{n-2}^{\nu}(x),\] _with_ \[b_{n}=\frac{1}{c^{2}}[1+c(\nu+2n+1)],\] \[c_{n}=\frac{n}{c^{3}}[2+c(\nu+n)],\] \[d_{n}=\frac{n(n-1)}{c^{4}}.\] _Furthermore_ \[xQ_{n}^{\nu}(x)=Q_{n-1}^{\nu}(x)+b_{n-1}Q_{n}^{\nu}(x)+c_{n}Q_{n+1}^{\nu}(x)+d_{n+1}Q_{n+2}^{\nu}(x).\]

## 3 Simultaneous Gaussian quadrature

Suppose we have two measures \(\mu_{1},\mu_{2}\) and one function \(f\). We want to approximate integrals by sums \[\int_{\mathbb{R}}f(x)\,d\mu_{1}(x)\approx\sum_{k=1}^{2n}\lambda_{k,2n}^{(1)}f(x_{k,2n}),\] \[\int_{\mathbb{R}}f(x)\,d\mu_{2}(x)\approx\sum_{k=1}^{2n}\lambda_{k,2n}^{(2)}f(x_{k,2n}).\]

**Theorem 1** (Borges [3]).: _If we take for \(\{x_{k,2n},1\leq k\leq 2n\}\) the zeros of the type II multiple orthogonal polynomial \(P_{n,n}\) for the two measures \((\mu_{1},\mu_{2})\) and use interpolatory quadrature, then the quadrature is exact for polynomials \(f\) of degree \(\leq 3n-1\)._

There are \(2n\) function evaluations. If we use ordinary Gaussian quadrature with \(n\) nodes for each integral, then we also use \(2n\) function evaluations, but the quadrature then is exact for polynomials \(f\) of degree \(\leq 2n-1\). Important numerical aspects are:

* How to compute the quadrature nodes \(x_{k,2n}\), \(1\leq k\leq 2n\) (the zeros of \(P_{n,n}\))?
* How to compute the quadrature weights \[\lambda_{k,2n}^{(1)}=\int_{\mathbb{R}}\frac{P_{n,n}(x)}{(x-x_{k,2n})P_{n,n}^{\prime}(x_{k,2n})}\,d\mu_{1}(x), \tag{3.1}\] \[\lambda_{k,2n}^{(2)}=\int_{\mathbb{R}}\frac{P_{n,n}(x)}{(x-x_{k,2n})P_{n,n}^{\prime}(x_{k,2n})}\,d\mu_{2}(x)? \tag{3.2}\]

Some relevant analytical problems are

* The convergence of the quadrature rules as \(n\to\infty\).
* Are the weights \(\lambda_{k,2n}^{(1)}\) and \(\lambda_{k,2n}^{(2)}\) positive? If not, are they small?

We have investigated some of the analytical problems of simultaneous Gaussian quadrature for an Angelesco system in [13] and for multiple Hermite polynomials in [19]. We formulated the simultaneous Gaussian quadrature for \(2n\) nodes.
If the number of nodes is odd \((2n+1)\), then one uses the zeros of the type II multiple orthogonal polynomial \(P_{n+1,n}\) and \[\int_{\mathbb{R}}f(x)\,d\mu_{1}(x)=\sum_{k=1}^{2n+1}\lambda_{k,2n+1}^{(1)}f(x_{k,2n+1})\] holds for polynomials \(f\) of degree \(\leq 3n+1\), and \[\int_{\mathbb{R}}f(x)\,d\mu_{2}(x)=\sum_{k=1}^{2n+1}\lambda_{k,2n+1}^{(2)}f(x_{k,2n+1})\] holds for polynomials \(f\) of degree \(\leq 3n\). For multi-indices \((n,m)\) one takes the nodes at the zeros of \(P_{n,m}\) and \[\int_{\mathbb{R}}f(x)\,d\mu_{1}(x)=\sum_{k=1}^{n+m}\lambda_{k,n+m}^{(1)}f(x_{k,n+m}),\qquad\deg f\leq 2n+m-1,\] \[\int_{\mathbb{R}}f(x)\,d\mu_{2}(x)=\sum_{k=1}^{n+m}\lambda_{k,n+m}^{(2)}f(x_{k,n+m}),\qquad\deg f\leq n+2m-1.\] There is an important relation with Hermite-Pade approximation. Recall that for type II Hermite-Pade approximation one has \[P_{\vec{n}}(z)f_{j}(z)-Q_{\vec{n},j}(z)=\mathcal{O}\left(\frac{1}{z^{n_{j}+1}}\right),\qquad z\to\infty.\] For the rational approximants one then has \[f_{j}(z)-\frac{Q_{\vec{n},j}(z)}{P_{\vec{n}}(z)}=\mathcal{O}\left(\frac{1}{z^{|\vec{n}|+n_{j}+1}}\right),\] and if one decomposes the rational approximant into partial fractions, then \[\frac{Q_{\vec{n},j}(z)}{P_{\vec{n}}(z)}=\sum_{k=1}^{|\vec{n}|}\frac{\lambda_{k,\vec{n}}^{(j)}}{z-x_{k,\vec{n}}}.\] Hence the poles of the type II Hermite-Pade approximants are the quadrature nodes and the residues are the quadrature weights. This extends the well-known connection between Gaussian quadrature and Pade approximation. If \(\mu_{j}\) is supported on \([a,b]\) and \(g\) is analytic in a neighborhood \(\Omega\) of \([a,b]\), then the error is given by \[\int_{a}^{b}g(x)\,d\mu_{j}(x)-\sum_{k=1}^{|\vec{n}|}\lambda_{k,\vec{n}}^{(j)}g(x_{k,\vec{n}})=\frac{1}{2\pi i}\int_{\Gamma}g(z)\left(f_{j}(z)-\frac{Q_{\vec{n},j}(z)}{P_{\vec{n}}(z)}\right)\,dz,\] where \(\Gamma\) is a closed contour in \(\Omega\) around \([a,b]\). Hence results about the asymptotic behavior of Hermite-Pade approximation imply results for the convergence of the quadrature formula when the integrand \(g\) is sufficiently smooth.

## 4 Recurrence relations

Denote by \(\vec{e}_{j}=(0,\ldots,0,1,0,\ldots,0)\) the standard unit vectors. Multiple orthogonal polynomials always satisfy a system of linear recurrence relations connecting \(P_{\vec{n}}\) to its nearest neighbors \(P_{\vec{n}+\vec{e}_{k}}\) from above and \(P_{\vec{n}-\vec{e}_{j}}\) from below [17]: \[xP_{\vec{n}}(x)=P_{\vec{n}+\vec{e}_{k}}(x)+b_{\vec{n},k}P_{\vec{n}}(x)+\sum_{j=1}^{r}a_{\vec{n},j}P_{\vec{n}-\vec{e}_{j}}(x),\qquad 1\leq k\leq r. \tag{4.1}\] For the type I functions \[Q_{\vec{n}}(x)=\sum_{j=1}^{r}A_{\vec{n},j}(x)w_{j}(x),\qquad w_{j}(x)=\frac{d\mu_{j}(x)}{d\mu(x)},\] where \(\mu=\mu_{1}+\mu_{2}+\cdots+\mu_{r}\), the nearest neighbor recurrence relations are \[xQ_{\vec{n}}(x)=Q_{\vec{n}-\vec{e}_{k}}(x)+b_{\vec{n}-\vec{e}_{k},k}Q_{\vec{n}}(x)+\sum_{j=1}^{r}a_{\vec{n},j}Q_{\vec{n}+\vec{e}_{j}}(x),\qquad 1\leq k\leq r. \tag{4.2}\] If we combine these \(r\) recurrence relations, then one can find an \((r+2)\)-term recurrence relation on the stepline: for \(n=kr+\ell\) we take \(\vec{n}=(\underbrace{k+1,\ldots,k+1}_{\ell\text{ times}},k,\ldots,k)\) and we define the type II polynomials on the stepline by \(P_{n}(x)=P_{\vec{n}}(x)\). This \((r+2)\)-term recurrence relation is \[xP_{n}(x)=P_{n+1}(x)+\sum_{j=0}^{r}c_{n,j}P_{n-j}(x). \tag{4.3}\]
There is also an \((r+2)\)-term recurrence relation for the type I functions on the stepline: \[xQ_{n}(x)=Q_{n-1}(x)+\sum_{j=0}^{r}c_{n+j-1,j}Q_{n+j}(x). \tag{4.4}\] Filipuk, Haneczok and Van Assche [9] worked out an algorithm to go from the nearest neighbor recurrence coefficients \((a_{\vec{n},j},b_{\vec{n},j};1\leq j\leq r)_{\vec{n}\in\mathbb{N}^{r}}\) to the stepline coefficients \((c_{n,0},c_{n,1},\ldots,c_{n,r})_{n\in\mathbb{N}}\). They also gave algorithms to go from the recurrence coefficients of the orthogonal polynomials of the measures \(\mu_{j}\) to the nearest neighbor recurrence coefficients. We will illustrate this for the case \(r=2\). The nearest neighbor recurrence relations are \[xP_{n,m}(x)=P_{n+1,m}(x)+c_{n,m}P_{n,m}(x)+a_{n,m}P_{n-1,m}(x)+b_{n,m}P_{n,m-1}(x), \tag{4.5}\] \[xP_{n,m}(x)=P_{n,m+1}(x)+d_{n,m}P_{n,m}(x)+a_{n,m}P_{n-1,m}(x)+b_{n,m}P_{n,m-1}(x). \tag{4.6}\] Subtracting both equations gives \[P_{n+1,m}(x)-P_{n,m+1}(x)=(d_{n,m}-c_{n,m})P_{n,m}(x). \tag{4.7}\] Denote the type II polynomials on the stepline by \(P_{2k}(x)=P_{k,k}(x)\), \(P_{2k+1}(x)=P_{k+1,k}(x)\). Take \(m=n\), then (4.5) is \[xP_{n,n}(x)=P_{n+1,n}(x)+c_{n,n}P_{n,n}(x)+a_{n,n}P_{n-1,n}(x)+b_{n,n}P_{n,n-1}(x).\] Use (4.7) to replace \(P_{n-1,n}(x)\) by \(P_{n,n-1}(x)+(c_{n-1,n-1}-d_{n-1,n-1})P_{n-1,n-1}(x)\) to find \[xP_{2n}(x)=P_{2n+1}(x)+c_{n,n}P_{2n}(x)+(a_{n,n}+b_{n,n})P_{2n-1}(x)+a_{n,n}(c_{n-1,n-1}-d_{n-1,n-1})P_{2n-2}(x).\] In a similar way, by taking \(m=n-1\) the equation (4.6) is \[xP_{n,n-1}(x)=P_{n,n}(x)+d_{n,n-1}P_{n,n-1}(x)+a_{n,n-1}P_{n-1,n-1}(x)+b_{n,n-1}P_{n,n-2}(x).\] Use (4.7) to replace \(P_{n,n-2}(x)\) by \(P_{n-1,n-1}(x)+(d_{n-1,n-2}-c_{n-1,n-2})P_{n-1,n-2}(x)\) to find \[xP_{2n-1}(x)=P_{2n}(x)+d_{n,n-1}P_{2n-1}(x)+(a_{n,n-1}+b_{n,n-1})P_{2n-2}(x)+b_{n,n-1}(d_{n-1,n-2}-c_{n-1,n-2})P_{2n-3}(x).\] This means that the four-term recurrence relation for the type II polynomials on the stepline is \[xP_{n}(x)=P_{n+1}(x)+b_{n}P_{n}(x)+c_{n}P_{n-1}(x)+d_{n}P_{n-2}(x), \tag{4.8}\] with \[b_{2k}=c_{k,k},\quad b_{2k+1}=d_{k+1,k},\] \[c_{2k}=a_{k,k}+b_{k,k},\quad c_{2k+1}=a_{k+1,k}+b_{k+1,k},\] \[d_{2k}=a_{k,k}(c_{k-1,k-1}-d_{k-1,k-1}),\quad d_{2k+1}=b_{k+1,k}(d_{k,k-1}-c_{k,k-1}).\] For the type I functions, the nearest neighbor recurrence relations are \[xQ_{n,m}(x)=Q_{n-1,m}(x)+c_{n-1,m}Q_{n,m}(x)+a_{n,m}Q_{n+1,m}(x)+b_{n,m}Q_{n,m+1}(x),\] \[xQ_{n,m}(x)=Q_{n,m-1}(x)+d_{n,m-1}Q_{n,m}(x)+a_{n,m}Q_{n+1,m}(x)+b_{n,m}Q_{n,m+1}(x).\] Subtracting both equations gives \[Q_{n-1,m}(x)-Q_{n,m-1}(x)=(d_{n,m-1}-c_{n-1,m})Q_{n,m}(x). \tag{4.9}\] In a similar way as before, one then finds for the type I functions on the stepline \(Q_{2k}(x)=Q_{k,k}(x)\), \(Q_{2k+1}(x)=Q_{k+1,k}(x)\) \[xQ_{n}(x)=Q_{n-1}(x)+b_{n-1}Q_{n}(x)+c_{n}Q_{n+1}(x)+d_{n+1}Q_{n+2}(x). \tag{4.10}\] Using the coefficients in this four-term recurrence relation, we can create a banded Hessenberg matrix \[H_{N}=\begin{pmatrix}b_{0}&1&0&0&0&0&\cdots&0\\ c_{1}&b_{1}&1&0&0&0&\cdots&0\\ d_{2}&c_{2}&b_{2}&1&0&0&\cdots&0\\ 0&d_{3}&c_{3}&b_{3}&1&0&\cdots&0\\ 0&0&d_{4}&c_{4}&b_{4}&1&\cdots&0\\ \vdots&&\ddots&\ddots&\ddots&\ddots&&\vdots\\ 0&\cdots&0&0&d_{N-2}&c_{N-2}&b_{N-2}&1\\ 0&\cdots&0&0&0&d_{N-1}&c_{N-1}&b_{N-1}\end{pmatrix}. \tag{4.11}\] From now on we will use \(N\) for the number of quadrature nodes. If \(N=2n\) then this corresponds to the multi-index \((n,n)\) and for \(N=2n+1\) to the multi-index \((n+1,n)\).
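Building \(H_{N}\) from given coefficient sequences is immediate. The following NumPy helper is our own illustration; it returns the matrix of (4.11), whose eigenvalues, by Property 5 below, are the zeros of \(P_{N}\) and hence the quadrature nodes:

```
import numpy as np

def banded_hessenberg(b, c, d):
    """H_N of (4.11): main diagonal b_0..b_{N-1}, superdiagonal of ones,
    first subdiagonal c_1..c_{N-1}, second subdiagonal d_2..d_{N-1}."""
    N = len(b)
    return (np.diag(np.asarray(b, dtype=float))
            + np.diag(np.ones(N - 1), 1)
            + np.diag(np.asarray(c, dtype=float), -1)
            + np.diag(np.asarray(d, dtype=float), -2))

# nodes = np.sort(np.linalg.eigvals(banded_hessenberg(b, c, d)).real)
```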
The recurrence relation (4.8) for the type II polynomials can then be written as \[H_{N}\begin{pmatrix}P_{0}(x)\\ P_{1}(x)\\ \vdots\\ P_{N-1}(x)\end{pmatrix}+\begin{pmatrix}0\\ \vdots\\ 0\\ P_{N}(x)\end{pmatrix}=x\begin{pmatrix}P_{0}(x)\\ P_{1}(x)\\ \vdots\\ P_{N-1}(x)\end{pmatrix}.\] If we evaluate this at a zero \(x_{k,N}\) of \(P_{N}\), then we get the following result:

**Property 5**.: _If \(P_{N}(x_{k,N})=0\) then \(x_{k,N}\) is an eigenvalue of \(H_{N}\) with **right** eigenvector \(\big{(}P_{0}(x_{k,N}),P_{1}(x_{k,N}),\ldots,P_{N-1}(x_{k,N})\big{)}\)._

There is a similar property for the type I multiple orthogonal polynomials \((A_{N},B_{N})\) for which \[A_{2k}(x)=A_{k,k}(x),\quad A_{2k+1}(x)=A_{k+1,k}(x),\qquad B_{2k}(x)=B_{k,k}(x),\quad B_{2k+1}(x)=B_{k+1,k}(x).\]

**Property 6**.: _If \(P_{N}(x_{k,N})=0\) then \(x_{k,N}\) is an eigenvalue of \(H_{N}\) with **left** eigenvector \(\big{(}C_{1}A_{1}(x_{k,N})+C_{2}B_{1}(x_{k,N}),C_{1}A_{2}(x_{k,N})+C_{2}B_{2}(x_{k,N}),\ldots,C_{1}A_{N}(x_{k,N})+C_{2}B_{N}(x_{k,N})\big{)}\), with constants \(C_{1}\) and \(C_{2}\) satisfying_ \[\begin{cases}C_{1}A_{N+1}(x_{k,N})+C_{2}B_{N+1}(x_{k,N})=0,\\ C_{1}A_{N+2}(x_{k,N})+C_{2}B_{N+2}(x_{k,N})=0.\end{cases} \tag{4.12}\]

Proof.: For the type I function \(Q_{n}(x)=A_{n}(x)w_{1}(x)+B_{n}(x)w_{2}(x)\), the four-term recurrence relation (4.10) is equivalent to \[\big{(}Q_{1}(x)\ \cdots\ Q_{N}(x)\big{)}\,H_{N}+\big{(}0\ \cdots\ 0\ \ d_{N}Q_{N+1}(x)\ \ c_{N}Q_{N+1}(x)+d_{N+1}Q_{N+2}(x)\big{)}=x\,\big{(}Q_{1}(x)\ \cdots\ Q_{N}(x)\big{)},\] where the two extra terms appear in the last two entries. This relation also holds for the polynomials \(A_{n}(x)\) and \(B_{n}(x)\), and therefore for any linear combination \(C_{1}A_{n}(x)+C_{2}B_{n}(x)\). If we evaluate it at a zero \(x_{k,N}\) of \(P_{N}\) and choose the constants \(C_{1},C_{2}\) according to (4.12), then the extra terms vanish, so that the vector in Property 6 is a left eigenvector of \(H_{N}\) for the eigenvalue \(x_{k,N}\).

## 5 The main theorem

The quadrature nodes are the eigenvalues of the banded Hessenberg matrix \(H_{N}\), and the quadrature weights follow from the left and right eigenvectors.

**Theorem 2**.: _Let \(x_{j,N}\) (\(1\leq j\leq N\)) be the eigenvalues of \(H_{N}\) in (4.11), let \(v_{j}=\big{(}P_{0}(x_{j,N}),\ldots,P_{N-1}(x_{j,N})\big{)}\) be the right eigenvector of Property 5, normalized so that its first component is \(1\), and let \(u_{j}\) be the left eigenvector of Property 6. Then_ \[\lambda_{j,N}^{(1)}=D_{1,1}\frac{(u_{j})_{1}}{\langle u_{j},v_{j}\rangle},\qquad\lambda_{j,N}^{(2)}=\frac{D_{2,1}(u_{j})_{1}+D_{2,2}(u_{j})_{2}}{\langle u_{j},v_{j}\rangle},\] _where \(D_{1,1}=\int_{\mathbb{R}}P_{0}(x)\,d\mu_{1}(x)\), \(D_{2,1}=\int_{\mathbb{R}}P_{0}(x)\,d\mu_{2}(x)\) and \(D_{2,2}=\int_{\mathbb{R}}P_{1}(x)\,d\mu_{2}(x)\)._

For the proof we use the Christoffel-Darboux formula for multiple orthogonal polynomials, in the version given in terms of the nearest neighbor recurrence coefficients [17]. For \(r=2\) and on the stepline, it becomes \[(x-y)\sum_{k=0}^{N-1}P_{k}(x)Q_{k+1}(y)=P_{N}(x)Q_{N}(y)-c_{N}P_{N-1}(x)Q_{N+1}(y)-d_{N}P_{N-2}(x)Q_{N+1}(y)-d_{N+1}P_{N-1}(x)Q_{N+2}(y).\] The Christoffel-Darboux formula also holds if we replace \(Q_{k}(y)\) by \(C_{1}A_{k}(y)+C_{2}B_{k}(y)\): \[(x-y)\sum_{k=0}^{N-1}P_{k}(x)[C_{1}A_{k+1}(y)+C_{2}B_{k+1}(y)]=P_{N}(x)[C_{1}A_{N}(y)+C_{2}B_{N}(y)]-c_{N}P_{N-1}(x)[C_{1}A_{N+1}(y)+C_{2}B_{N+1}(y)]-d_{N}P_{N-2}(x)[C_{1}A_{N+1}(y)+C_{2}B_{N+1}(y)]-d_{N+1}P_{N-1}(x)[C_{1}A_{N+2}(y)+C_{2}B_{N+2}(y)].\] Take \(y=x_{j,N}\) a zero of \(P_{N}\), then (4.12) gives \[\sum_{k=0}^{N-1}P_{k}(x)[C_{1}A_{k+1}(x_{j,N})+C_{2}B_{k+1}(x_{j,N})]=\frac{P_{N}(x)}{x-x_{j,N}}[C_{1}A_{N}(x_{j,N})+C_{2}B_{N}(x_{j,N})]. \tag{5.1}\] Recall that the quadrature weights are given by (3.1) and (3.2).
Integrating (5.1) with measure \(\mu_{1}\) gives \[\sum_{k=0}^{N-1}[C_{1}A_{k+1}(x_{j,N})+C_{2}B_{k+1}(x_{j,N})]\int_{\mathbb{R}}P_{k}(x)\,d\mu_{1}(x)=\int_{\mathbb{R}}\frac{P_{N}(x)}{x-x_{j,N}}\,d\mu_{1}(x)\,[C_{1}A_{N}(x_{j,N})+C_{2}B_{N}(x_{j,N})],\] hence, if we retain only the non-vanishing integrals, \[\lambda_{j,N}^{(1)}[C_{1}A_{N}(x_{j,N})+C_{2}B_{N}(x_{j,N})]=\frac{1}{P_{N}^{\prime}(x_{j,N})}\int_{\mathbb{R}}P_{0}(x)\,d\mu_{1}(x)\,[C_{1}A_{1}+C_{2}B_{1}]. \tag{5.2}\] In a similar way, integrating (5.1) with measure \(\mu_{2}\) gives \[\sum_{k=0}^{N-1}[C_{1}A_{k+1}(x_{j,N})+C_{2}B_{k+1}(x_{j,N})]\int_{\mathbb{R}}P_{k}(x)\,d\mu_{2}(x)=\int_{\mathbb{R}}\frac{P_{N}(x)}{x-x_{j,N}}\,d\mu_{2}(x)\,[C_{1}A_{N}(x_{j,N})+C_{2}B_{N}(x_{j,N})],\] hence, if we retain only the non-vanishing integrals, \[\lambda_{j,N}^{(2)}[C_{1}A_{N}(x_{j,N})+C_{2}B_{N}(x_{j,N})]=\frac{1}{P_{N}^{\prime}(x_{j,N})}\left(\int_{\mathbb{R}}P_{0}(x)\,d\mu_{2}(x)\,[C_{1}A_{1}+C_{2}B_{1}]+\int_{\mathbb{R}}P_{1}(x)\,d\mu_{2}(x)\,[C_{1}A_{2}+C_{2}B_{2}]\right). \tag{5.3}\] If we denote \[D_{1,1}=\int_{\mathbb{R}}P_{0}(x)\,d\mu_{1}(x),\quad D_{1,2}=\int_{\mathbb{R}}P_{1}(x)\,d\mu_{1}(x),\] \[D_{2,1}=\int_{\mathbb{R}}P_{0}(x)\,d\mu_{2}(x),\quad D_{2,2}=\int_{\mathbb{R}}P_{1}(x)\,d\mu_{2}(x),\] and use the biorthogonality \[\int_{\mathbb{R}}P_{0}(x)[A_{1}w_{1}(x)+B_{1}w_{2}(x)]\,d\mu(x)=1,\quad\int_{\mathbb{R}}P_{0}(x)[A_{2}w_{1}(x)+B_{2}w_{2}(x)]\,d\mu(x)=0,\] \[\int_{\mathbb{R}}P_{1}(x)[A_{1}w_{1}(x)+B_{1}w_{2}(x)]\,d\mu(x)=0,\quad\int_{\mathbb{R}}P_{1}(x)[A_{2}w_{1}(x)+B_{2}w_{2}(x)]\,d\mu(x)=1,\] then we find \[\begin{pmatrix}A_{1}&0\\ A_{2}&B_{2}\end{pmatrix}\begin{pmatrix}D_{1,1}&D_{1,2}\\ D_{2,1}&D_{2,2}\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] Observe that \(A_{1}=A_{1,0}\), \(A_{2}=A_{1,1}\) and \(B_{2}=B_{1,1}\) are polynomials of degree \(0\) and hence constants, whereas \(B_{1}=B_{1,0}=0\), and by the orthogonality of the type II multiple orthogonal polynomials we also have \(D_{1,2}=0\). In order to find a formula for \(P_{N}^{\prime}(x_{j,N})\), we take the limit \(x\to x_{j,N}\) in (5.1) to find \[\sum_{k=0}^{N-1}P_{k}(x_{j,N})[C_{1}A_{k+1}(x_{j,N})+C_{2}B_{k+1}(x_{j,N})]=P_{N}^{\prime}(x_{j,N})[C_{1}A_{N}(x_{j,N})+C_{2}B_{N}(x_{j,N})]. \tag{5.4}\] The left hand side is the inner product of the left eigenvector and the right eigenvector of \(H_{N}\) for the eigenvalue \(x_{j,N}\). Combining (5.2), (5.3) and (5.4) and taking into account Properties 5 and 6 then gives the main result.

## 6 Examples

### Bessel functions \(K_{\nu}\)

We will consider the multiple orthogonal polynomials related to the modified Bessel functions \(K_{\nu}\) and \(K_{\nu+1}\). Let \((w_{1},w_{2})=x^{\alpha}(\rho_{\nu},\rho_{\nu+1})\) with \[\rho_{\nu}(x)=2x^{\nu/2}K_{\nu}(2\sqrt{x}),\qquad x>0.\] We take \(\alpha=1\) and \(\nu=0\) and we compute the eigenvalues of \(H_{N}\) for \(N=10\), together with the weights \(\lambda_{j,10}^{(1)}\) and \(\lambda_{j,10}^{(2)}\) using Theorem 2.
The type I polynomials \((A_{1,0},B_{1,0})\) and \((A_{1,1},B_{1,1})\) are the constants \[A_{1,0}=1,\quad B_{1,0}=0,\qquad A_{1,1}=\alpha+\nu+1,\quad B_{1,1}=-1,\] and if we use the normalization (2.1) \[\int_{0}^{\infty}x^{\alpha}[\widehat{A}_{n,m}(x)\rho_{\nu}(x)+\widehat{B}_{n,m}(x)\rho_{\nu+1}(x)]x^{n+m-1}\,dx=1,\] then we get \[\begin{pmatrix}A_{1}&0\\ A_{2}&B_{2}\end{pmatrix}=\begin{pmatrix}\frac{1}{\Gamma(\alpha+\nu+1)\Gamma(\alpha+1)}&0\\ -\frac{\alpha+\nu+1}{\Gamma(\alpha+\nu+2)\Gamma(\alpha+2)}&\frac{1}{\Gamma(\alpha+\nu+2)\Gamma(\alpha+2)}\end{pmatrix},\] and hence for \(\alpha=1\) and \(\nu=0\) we have \[\begin{pmatrix}D_{1,1}&0\\ D_{2,1}&D_{2,2}\end{pmatrix}=\begin{pmatrix}1&0\\ -1/2&1/4\end{pmatrix}^{-1}=\begin{pmatrix}1&0\\ 2&4\end{pmatrix}.\] We used Maple with Digits:=100 for our calculations and we show the first 20 digits after the decimal point in Fig. 6.1.

Figure 6.1: Quadrature nodes and quadrature weights for \((w_{1},w_{2})=x(\rho_{0},\rho_{1})\).

**Algorithm 1: Maple**
```
with(LinearAlgebra):
b := n -> (n+alpha+1)*(3*n+alpha+2*nu)-(alpha+1)*(nu-1);
c := n -> n*(n+alpha)*(n+alpha+nu)*(3*n+2*alpha+nu);
d := n -> n*(n-1)*(n+alpha)*(n+alpha-1)*(n+alpha+nu)*(n+alpha+nu-1);
H := n -> BandMatrix([[seq(d(k), k=2..n-1)], [seq(c(k), k=1..n-1)],
                      [seq(b(k), k=0..n-1)], [seq(1, k=1..n-1)]]);
alpha := 1.; nu := 0.;
Digits := 100;
N := 10;
E, RE := Eigenvectors(H(N)):            # right eigenvalues/vectors (descending)
eig := Eigenvectors(Transpose(H(N)), output = list):
eig := sort(eig, (a, b) -> is(Re(b[1]) < Re(a[1]))):
EE := Vector[column]([seq(eig[k][1], k=1..N)]):  # eigenvalues (descending)
LE := Matrix([seq(op(eig[k][3]), k=1..N)]):      # left eigenvectors
A := Matrix([[1/(GAMMA(alpha+nu+1)*GAMMA(alpha+1)), 0],
     [-(alpha+nu+1)/(GAMMA(alpha+nu+2)*GAMMA(alpha+2)),
      1/(GAMMA(alpha+nu+2)*GAMMA(alpha+2))]]);
DD := MatrixInverse(A);                 # the name D is protected in Maple
u := j -> LE(1..N, j);                  # left eigenvectors
v := j -> RE(1..N, N-j+1)/RE(1, N-j+1); # normalized right eigenvectors
lambda1 := j -> DD(1,1)*u(j)[1]/DotProduct(u(j), v(j));
lambda2 := j -> (DD(2,1)*u(j)[1]+DD(2,2)*u(j)[2])/DotProduct(u(j), v(j));
seq(EE(k), k=1..N);                     # quadrature nodes
seq(lambda1(k), k=1..N);                # quadrature weights (first weight)
seq(lambda2(k), k=1..N);                # quadrature weights (second weight)
```
To illustrate the quality of the quadratures, we will simultaneously compute \[I_{1} = \int_{0}^{\infty}e^{-x}x\rho_{0}(x)\,dx=0.1926947246\ldots,\] \[I_{2} = \int_{0}^{\infty}e^{-x}x\rho_{1}(x)\,dx=0.2109579130\ldots\] using \(N=10,20,30,40,50\) nodes in Maple. We show the first 10 decimals in Fig. 6.2.

Figure 6.2: Simultaneous quadrature to \(I_{1}\) and \(I_{2}\).

The results show that the quadrature formulas converge rather slowly to the correct value. A possible explanation is that the quadrature nodes become very large and the corresponding quadrature weights are very small, so only a small proportion of terms in the quadrature sum contribute to the result.

### Bessel functions \(I_{\nu}\)

For the multiple orthogonal polynomials with weights \((w_{1},w_{2})=(\omega_{\nu,c},\omega_{\nu+1,c})\), where \[\omega_{\nu,c}(x)=x^{\nu/2}I_{\nu}(2\sqrt{x})e^{-cx},\qquad x>0,\] we take \(c=1\) and \(\nu=0\) and compute the eigenvalues of \(H_{N}\) for \(N=10\), together with the weights \(\lambda_{j,10}^{(1)}\) and \(\lambda_{j,10}^{(2)}\) using Theorem 2.
After normalization we get \[\begin{pmatrix}A_{1}&0\\ A_{2}&B_{2}\end{pmatrix}=e^{-1/c}\begin{pmatrix}c^{\nu+1}&0\\ -c^{\nu+2}&c^{\nu+3}\end{pmatrix},\] so that for \(c=1\) and \(\nu=0\) we have \[\begin{pmatrix}D_{1,1}&0\\ D_{2,1}&D_{2,2}\end{pmatrix}=\begin{pmatrix}1/e&0\\ -1/e&1/e\end{pmatrix}^{-1}=\begin{pmatrix}e&0\\ e&e\end{pmatrix}.\] We use Matlab and we show the first 10 digits after the decimal point in Fig. 6.3.

Figure 6.3: Quadrature nodes and quadrature weights for \((w_{1},w_{2})=(\omega_{0,1},\omega_{1,1})\).

\begin{tabular}{|c|c|c|c|}
\hline
\(j\) & \(x_{j,10}\) & \(\lambda_{j,10}^{(1)}\) & \(\lambda_{j,10}^{(2)}\) \\
\hline
1 & 0.1531952228 & 0.3913749988 & 0.0557885974 \\
2 & 0.8105837014 & 0.8175616919 & 0.4874004644 \\
3 & 2.0077223654 & 0.8459198767 & 0.9551942639 \\
4 & 3.7719525634 & 0.4850707607 & 0.8091738873 \\
5 & 6.1482336073 & 0.1517396396 & 0.3357737316 \\
6 & 9.2079873838 & 0.0246520172 & 0.0683288497 \\
7 & 13.0663024491 & 0.0019027391 & 0.0063827530 \\
8 & 17.9203555594 & 0.0000595495 & 0.0002366956 \\
9 & 24.1543375116 & 0.000005543 & 0.000025816 \\
10 & 32.7593296369 & 0.000000007 & 0.0000000038 \\
\hline
\end{tabular}

**Algorithm 2: Matlab**

We will simultaneously compute the integrals \[J_{1} = \int_{0}^{\infty}\cos(x)\omega_{0,1}(x)\,dx=0.328224976685277123104160354501976758\ldots,\] \[J_{2} = \int_{0}^{\infty}\cos(x)\omega_{1,1}(x)\,dx=-0.39521954160680745592163128352397786234\ldots\] with \(N=10,20,30,40,50\) quadrature nodes in Maple (with Digits:=100). The results are in Fig. 6.4. The results in Matlab were comparable for \(N=10\), but for \(N=20\) and higher some of the eigenvalues and eigenvectors became complex, due to machine precision. This was avoided in Maple by choosing Digits:=100. Alternatively, one could use variable precision arithmetic (vpa) in Matlab, but then Algorithm 2 needs to be modified because the command eig does not give the left eigenvectors. Clearly the results of the quadrature are much better than for the previous example.
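For readers without Maple access, the following is a minimal NumPy/SciPy sketch of the same eigenvalue-based quadrature (Theorem 2) for the Bessel \(K_{\nu}\) example of Sec. 6.1 with \(\alpha=1\), \(\nu=0\). The recurrence coefficients and the \(D\)-matrix are the ones quoted above; the use of SciPy's `eig` with matched left and right eigenvectors is a convenience of this sketch, and double precision only reproduces the moderate-\(N\) tables (the Maple runs above use 100 digits).
```python
import numpy as np
from scipy.linalg import eig

alpha, nu, N = 1.0, 0.0, 10

b = lambda n: (n + alpha + 1)*(3*n + alpha + 2*nu) - (alpha + 1)*(nu - 1)
c = lambda n: n*(n + alpha)*(n + alpha + nu)*(3*n + 2*alpha + nu)
d = lambda n: n*(n - 1)*(n + alpha)*(n + alpha - 1)*(n + alpha + nu)*(n + alpha + nu - 1)

# banded Hessenberg matrix H_N: ones on the superdiagonal, b on the diagonal,
# c and d on the first and second subdiagonals (cf. Algorithm 1)
H = (np.diag([b(n) for n in range(N)])
     + np.diag(np.ones(N - 1), 1)
     + np.diag([c(n) for n in range(1, N)], -1)
     + np.diag([d(n) for n in range(2, N)], -2))

w, vl, vr = eig(H, left=True, right=True)   # matched left/right eigenvectors
D = np.array([[1.0, 0.0],
              [2.0, 4.0]])                  # D-matrix for alpha=1, nu=0 (see text)

for j in np.argsort(w.real):                # quadrature nodes in increasing order
    u = vl[:, j].real                       # left eigenvector  (type I data)
    v = vr[:, j].real / vr[0, j].real       # right eigenvector, scaled so P_0 = 1
    s = u @ v                               # inner product, cf. Eq. (5.4)
    lam1 = D[0, 0]*u[0] / s                 # first weight,  cf. Eq. (5.2)
    lam2 = (D[1, 0]*u[0] + D[1, 1]*u[1]) / s  # second weight, cf. Eq. (5.3)
    print(f"{w[j].real:16.10f} {lam1:14.10f} {lam2:14.10f}")
```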
2309.14908
Face Cartoonisation For Various Poses Using StyleGAN
This paper presents an innovative approach to achieve face cartoonisation while preserving the original identity and accommodating various poses. Unlike previous methods in this field that relied on conditional-GANs, which posed challenges related to dataset requirements and pose training, our approach leverages the expressive latent space of StyleGAN. We achieve this by introducing an encoder that captures both pose and identity information from images and generates a corresponding embedding within the StyleGAN latent space. By subsequently passing this embedding through a pre-trained generator, we obtain the desired cartoonised output. While many other approaches based on StyleGAN necessitate a dedicated and fine-tuned StyleGAN model, our method stands out by utilizing an already-trained StyleGAN designed to produce realistic facial images. We show by extensive experimentation how our encoder adapts the StyleGAN output to better preserve identity when the objective is cartoonisation.
Kushal Jain, Ankith Varun J, Anoop Namboodiri
2023-09-26T13:10:25Z
http://arxiv.org/abs/2309.14908v1
# Face Cartoonisation For Various Poses Using StyleGAN

###### Abstract

This paper presents an innovative approach to achieve face cartoonisation while preserving the original identity and accommodating various poses. Unlike previous methods in this field that relied on conditional-GANs, which posed challenges related to dataset requirements and pose training, our approach leverages the expressive latent space of StyleGAN. We achieve this by introducing an encoder that captures both pose and identity information from images and generates a corresponding embedding within the StyleGAN latent space. By subsequently passing this embedding through a pre-trained generator, we obtain the desired cartoonised output. While many other approaches based on StyleGAN necessitate a dedicated and fine-tuned StyleGAN model, our method stands out by utilizing an already-trained StyleGAN designed to produce realistic facial images. We show by extensive experimentation how our encoder adapts the StyleGAN output to better preserve identity when the objective is cartoonisation. Our code will be released upon acceptance.

## 1 Introduction

Generative Adversarial Networks (GANs) [7] have demonstrated remarkable success in a wide range of computer vision applications, including cartoonisation [25, 2], video generation, and image-to-image translation [9, 15, 31]. Over the past few years, there has been a rapid improvement in the quality of images synthesized by GANs. Since the seminal DCGAN framework [19] in 2015, state-of-the-art GANs have progressed to generate images at much higher resolutions and produce significantly more realistic results. A notable advancement in this trajectory is StyleGAN [11], which introduces an intermediate latent space denoted as \(W\). This latent space constitutes a pivotal innovation that holds immense potential for enabling controlled image modifications within unsupervised settings. This intermediate space makes it feasible to conduct fine-grained alterations to images, allowing for an unprecedented level of manipulation while maintaining high fidelity.

Our method utilizes the rich latent space (\(W\)) of a StyleGAN pretrained on the FFHQ dataset [11], paired with a GAN-based cartoon generator [25], to generate cartoonized faces. Together, these two components make up our generator setup. In contrast to many other artistic styles that involve adding smaller intricate textures such as brush strokes or shading lines, cartoon images exhibit a distinct feel that is achieved by simplifying and abstracting elements from real-world photographs. Common features of cartoon images, also pointed out by the authors in [2], are well-defined edges, a consistent application of smooth color shading, and relatively plain textures. These features set cartoon images apart from various other forms of artwork that may prioritize different techniques and aesthetics.

We found that by integrating the outputs of the cartoon generator into the loss computation, we imbue our encoder with a sense of what the desired cartoonised version of the input image should entail. This supplemental supervision serves as an anchor during the loss computation, enabling our encoder to send a more cartoonised signal to StyleGAN. Furthermore, it affords us the ability to sidestep the otherwise demanding process of fine-tuning the StyleGAN model, which requires considerable computational resources.
By leveraging the features learnt by the CartoonGAN framework during training with unpaired datasets, we not only streamline the training process for our generator setup, but our results also show strong cartoon characteristics (see Fig. 1). Our method takes the identity image (\(I_{id}\)) and the pose image (\(I_{p}\)) as inputs and extracts the required features from those images using encoders, namely \(E_{id}\) for identity and \(E_{p}\) for pose. These features are passed through a Multi-Layer Perceptron which is trained to generate a vector \(w\in W\) which corresponds to the cartoonised version of \(I_{id}\) in the pose of \(I_{p}\). We then pass this vector through our generator setup to get the output cartoon face. The main contributions that we make in this paper are as follows:

1. We propose a method to achieve cartoonisation by learning separate representations for identity and pose in the \(W\) latent space of StyleGAN, allowing us to generate various poses and identities without class-supervision.
2. We use a readily available StyleGAN pretrained on the FFHQ dataset with a cartoon generator, which allows for skipping the time-consuming fine-tuning process.
3. We show that the distribution our encoder learns in the latent space is better for encoding identity when cartoonisation is the objective.

## 2 Related Works

### Conditional-GANs

In cGANs [17], the generator network takes both random noise and conditional information as input to produce contextually relevant outputs. Isola et al. [9] extended cGANs to general image-to-image translation, but it required paired datasets. Recently many works have successfully achieved image-to-image translation with unpaired datasets using cycle consistency loss [31] and other losses [8, 12, 26]. The utilization of such image-to-image translation techniques finds relevance in scenarios involving inpainting, where the transformation occurs between unconventional domains, like the shift from a 3D-2D rendered domain to a photo-realistic domain [28]. In the domain of face manipulation, there exists a diverse array of techniques classified based on whether they prioritize preserving identity [23] or manipulating other facial attributes [15]. Several works employ conditional GANs to translate between discrete domains such as age or emotional expression, necessitating labeled datasets but capable of operating on unseen images [2, 12, 16]. Unlike existing approaches that rely on black-box models and loss terms for network training, our proposed method takes a different approach by learning disentangled representations in the StyleGAN latent space [20]. This encourages the network to learn distinct features with separate objectives, making the learning process more controllable and tunable.

### Latent Space of StyleGAN

The exploration and control of latent spaces in GANs have been a subject of significant research. One of the tasks that has started to garner attention is GAN inversion, which aims to find the latent vector in a pretrained GAN that accurately reconstructs a given image [30]. GAN inversion has been widely explored using StyleGAN [11], because of its rich and expressive latent space. Recently, GAN inversion has been achieved mainly by two methods: by directly optimizing the latent vector to minimize reconstruction error for every image [1, 14], or by training an encoder to map images to the latent space [20].
We draw inspiration from [20] and build an encoder that directly maps the input conditions onto the StyleGAN latent space, doing away with "invert first, edit later" paradigms [3]. Through explorations of the latent space, several works have effectively showcased the latent vectors' capacity to facilitate seamless transitions among distinct facial characteristics, expressions, and poses [6, 10, 21]. This phenomenon highlights the exceptional ability of StyleGAN's latent space to disentangle and represent semantic attributes in a coherent manner.

## 3 Method

Given two images \(I_{id}\) and \(I_{p}\), our method generates a cartoon image representing the identity of \(I_{id}\) and other attributes, primarily the pose and expression, of \(I_{p}\). As shown in Fig. 2, our model comprises encoders, a Multi-Layer Perceptron (MLP) and generators. At a high level, the encoders capture the respective features of the input images, which are then mapped to the latent space of the generators by the MLP, finally resulting in the required cartoon image. The input images (\(I_{id},I_{p}\)) serve as the foundation from which we derive the necessary information to guide our process. To distill the salient characteristics from these images, we employ dedicated encoders. These encoders are adept at extracting pertinent features from their respective input images, hence their embeddings encapsulate the intrinsic elements that contribute to identity and pose attributes. Essentially, the MLP functions as an interpreter, translating the extracted features into a concise and expressive instruction set for our subsequent steps. We adopt a strategy where the generator, depicted in green (Fig. 2), is fully pretrained, while only the MLP remains untrained. Simultaneously, we focus on refining the performance of the pose encoder, denoted as \(E_{p}\) and highlighted in yellow. This involves finetuning the encoder's final layers to enhance its capability to capture pose information accurately. The overarching goal of this finetuning endeavor is to achieve a more effective and nuanced translation of pose-related data, contributing to the overall success of our method, as also observed by [18].

### Encoders

In our approach, we integrate identity and pose information using well-established models. For identity encoding, we employ a pre-trained Arcface face recognition model [4], extracting activations from its penultimate layer to create an identity embedding vector (\(E_{id}(I_{id})\)). To encode pose we use a pretrained VGG [13], trained on face images, which is further finetuned to serve our purpose better. We use the penultimate layer activations as our pose embedding vector (\(E_{p}(I_{p})\)). We concatenate the two embeddings and pass them through the MLP to get the \(w\) vector:

\[w=M([E_{p}(I_{p}),E_{id}(I_{id})])\]

Figure 2: Our model architecture. All the models in green are pretrained, while the pose encoder in yellow is being fine-tuned and the MLP is trained from scratch. Data flow is marked with solid lines and losses are marked with dashed lines. Bold-dashed lines enclose the generator setup. Input images \(I_{id}\) and \(I_{p}\) are encoded using \(E_{id}\) and \(E_{p}\) respectively, to give embeddings. The embeddings are concatenated and passed through the MLP which maps it to the \(W\) latent space of StyleGAN. The \(w\) vector generated is passed through our generator setup to give the final output \(C(G(w))\). \(\mathscr{L}_{id}\) ensures that identity is preserved, while \(\mathscr{L}_{ind}\) enforces pose translation. Another loss term, \(\mathscr{L}_{rec}\), is also used but is not shown here. We also make use of a landmark encoder \(E_{ind}\) in the training process in order to capture facial landmarks of realistic images.
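To make the data flow concrete, here is a minimal PyTorch sketch of the embedding-to-latent mapping step. The embedding sizes (512 for the ArcFace identity vector, 4096 for the VGG penultimate layer) and the MLP width and depth are illustrative assumptions, not the paper's exact settings.
```python
import torch
import torch.nn as nn

class EmbeddingToW(nn.Module):
    """Maps the concatenated [pose, identity] embedding to a StyleGAN w vector."""
    def __init__(self, id_dim=512, pose_dim=4096, w_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(id_dim + pose_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, w_dim),
        )

    def forward(self, e_pose, e_id):
        return self.mlp(torch.cat([e_pose, e_id], dim=-1))

# usage: w = M([E_p(I_p), E_id(I_id)]), then I_out = C(G(w))
M = EmbeddingToW()
e_pose, e_id = torch.randn(1, 4096), torch.randn(1, 512)
w = M(e_pose, e_id)   # shape (1, 512), fed to the pretrained StyleGAN G
```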
### Generator

Our generator comprises two essential components. Firstly, we employ a StyleGAN (\(G\)) that has been pre-trained on the FFHQ dataset [11]. This acts as a pre-cartoonisation backbone. Our aim here is not to generate realistic faces but faces that cartoonise well in the subsequent step. Then, to transfer the style of our generated image from real to cartoon, we pass our image through a GAN-based cartoon generator called the White Box Cartooniser (\(C\)), as detailed in the work by Wang and colleagues [25]. Their work builds on the foundational work of CartoonGAN [2], which introduced a novel edge loss to cartoonise real-life images. The White Box Cartooniser, on the other hand, breaks the problem down into surface, structure and texture representations and optimises for them individually, which makes the framework more controllable and tunable. These image-to-image translation models have learnt many crucial features needed for our task. Thus, the decision to leverage these models aligns intuitively with the requirements of our task. We pass the \(w\) vector through \(G\) and then through \(C\) to get our output image:

\[I_{out}=C(G(w))\]

### Losses

**Identity Loss:** The process of transforming facial images into their cartoonised counterparts involves a delicate balance between creative distortion and maintaining the essential identity of the subject. To achieve this delicate equilibrium, we introduce a pivotal identity loss term, formulated as an \(L_{1}\) cycle consistency loss between \(C(I_{id})\) and \(I_{out}\) as follows:

\[\mathscr{L}_{id}=\|E_{id}(C(I_{id}))-E_{id}(I_{out})\|_{1} \tag{1}\]

Cartoonisation involves a deliberate departure from photo-realism, introducing stylistic exaggerations and distortions to achieve the desired artistic effect, which may degrade identity preservation. Therefore, we align our approach with the objective of preserving the essential identity attributes in the transformed cartoon space.

**Landmark Loss:** The human visual system excels at discerning identity-defining details like facial arrangement and subtle variations, hence a landmark loss becomes crucial. The importance of this loss can also be seen in Sec. 4.4, where we show that without the landmark loss the MLP is unable to transfer pose accurately. As a result, we incorporate a sparse \(L_{2}\) cycle consistency landmark loss as given below:

\[\mathscr{L}_{ind}=\|E_{ind}(C(I_{p}))-E_{ind}(I_{out})\|_{2} \tag{2}\]

**Reconstruction Loss:** In addition to identity and pose, preservation of other factors, such as illumination and colour, is instrumental in generating a good generalized cartoon image, especially in the scenario when \(I_{id}\) and \(I_{p}\) are identical. In our proposed architecture, whenever \(I_{id}=I_{p}\), the MLP is tasked with inversion, so using a pixel-level reconstruction loss makes intuitive sense. Motivated by the results of [18] we use a _mix_ loss \(\mathscr{L}_{mix}\) [27] to encourage a more refined cartoonisation preserving non-facial attributes. \(\mathscr{L}_{mix}\) is a weighted sum of an \(L_{1}\) loss and an MS-SSIM loss, with \(\alpha\) being a hyper-parameter.
\[\mathscr{L}_{mix}=\alpha(1-MS\text{-}SSIM(C(I_{id}),I_{out}))+(1-\alpha)\|C(I_{id})-I_{out}\|_{1} \tag{3}\]

We impose a constraint on the application of this loss, limiting it exclusively to cases where the identity and pose images are identical. This deliberate constraint is designed to prevent the reconstruction of identity features from \(I_{p}\) within the resulting cartoon image. The below redefinition of \(\mathscr{L}_{mix}\) takes this into account:

\[\mathscr{L}_{rec}=\begin{cases}\mathscr{L}_{mix}&\text{if }I_{id}=I_{p}\\ 0&\text{otherwise}\end{cases} \tag{4}\]

Figure 3: A visualisation of identity loss calculation. \(E_{id}\) captures the identity features from the generated image and the cartoonised version of the identity image to compute the loss.

Figure 4: A visualisation of landmark loss computation. \(E_{ind}\) encodes the facial landmarks in the cartoon space to ensure consistency and penalise differences in pose.

The overall MLP loss is a weighted sum of the above losses:

\[\mathscr{L}_{total}=\lambda_{1}\mathscr{L}_{id}+\lambda_{2}\mathscr{L}_{ind}+\lambda_{3}\mathscr{L}_{rec} \tag{5}\]

## 4 Experiments

**Implementation:** We use StyleGAN pre-trained at 256x256 resolution in all our experiments, to easily compare with other methods. The ratio of training samples with \(I_{id}=I_{p}\) and \(I_{id}\neq I_{p}\) is a hyper-parameter that controls the weight for disentanglement and reconstruction. As suggested in [18], we take \(I_{id}=I_{p}\) every third iteration, and \(I_{id}\neq I_{p}\) otherwise. \(E_{ind}\) is implemented using a pre-trained landmarks regression network [5], trained to regress 68 facial keypoints.

**Hyper Parameters:** We use a learning rate of \(5e^{-5}\) when optimizing \(\mathscr{L}_{total}\). Loss weights are set to \(\lambda_{1}=1,\lambda_{2}=1,\lambda_{3}=0.001\) and \(\alpha=0.84\).

**Dataset:** We create a dataset using StyleGAN in the way proposed in [20]. We sample 70,000 random Gaussian vectors and forward them through a pre-trained StyleGAN. In the forward process, the Gaussian noise is mapped into a latent vector, from which an image is generated, and we save both the image and the vector. These vectors can be used for additional supervision to the MLP in the form of an adversarial loss, but we found that it does not help our objective of cartoonisation (see Sec. 4.4).

**Training:** Our training methodology is straightforward. We begin by randomly selecting \(I_{id}\) and \(I_{p}\) images. When \(I_{id}=I_{p}\) (i.e. both images are the same), the MLP learns to encode all the information essential for achieving accurate reconstruction in the \(W\) space, basically turning into a GAN inverter. When \(I_{id}\neq I_{p}\) the MLP is taught to disentangle the intrinsic identity from the other attributes present in the image.

Figure 5: Results of pose/expression (first 3 rows) and identity (last 3 rows) interpolation. We sample eight vectors in the linearly interpolated space between the mapped latent codes \(w_{1}\) and \(w_{2}\) using \(w=w_{1}+(w_{2}-w_{1})\frac{k}{8}\) where \(k\in\{1,2,...,8\}\), and pass each through our generator \(C(G(w))\) to obtain the cartoonised images.
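The training schedule and loss combination above can be summarized in a schematic sketch of one optimization step. Here \(G\), \(C\) and the encoders stand for the pretrained networks of Fig. 2, `mix_loss` stands in for the \(L_{1}\)/MS-SSIM mix of Eq. (3), and all function handles are placeholders of this sketch rather than the released implementation.
```python
import torch.nn.functional as F

l1, l2, l3 = 1.0, 1.0, 0.001                 # lambda_1, lambda_2, lambda_3

def training_step(step, batch, M, G, C, E_p, E_id, E_ind, mix_loss):
    I_id, I_p = batch
    if step % 3 == 0:                        # every third iteration: inversion mode
        I_p = I_id
    w = M(E_p(I_p), E_id(I_id))              # map embeddings into the W space
    I_out = C(G(w))                          # generator setup: StyleGAN, then cartooniser
    L_id = F.l1_loss(E_id(C(I_id)), E_id(I_out))            # Eq. (1)
    L_ind = F.mse_loss(E_ind(C(I_p)), E_ind(I_out))         # Eq. (2)
    L_rec = mix_loss(C(I_id), I_out) if step % 3 == 0 else 0.0  # Eqs. (3)-(4)
    return l1 * L_id + l2 * L_ind + l3 * L_rec              # Eq. (5)
```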
### Disentanglement of Identity and Pose

As previously outlined, our approach leverages the inherent characteristics of the chosen generator, namely StyleGAN, and its associated latent space \(W\). Previous works have extensively demonstrated that the latent space \(W\) exhibits a high degree of controllability, allowing for seamless feature manipulation through latent code interpolation [22, 29]. Our proposed method discerns latent codes that are optimal for cartoonisation while also capturing the high-dimensional identity attribute and an intricate combination of expression and pose. In Fig. 5, we visually demonstrate the disentanglement of these high-level features through latent code interpolation, showcasing the robust nature of StyleGAN's latent space \(W\) and its resilience to style distortion. In the first two rows of the figure, we keep the identity fixed and extract the attribute features (representing pose and expression) from two different images as labelled. We then proceed to obtain two corresponding latent codes \(w_{1}\) and \(w_{2}\) using our proposed method, and linearly interpolate between them. Similarly, for the last two rows we keep the attribute features fixed and interpolate between two different identities. The generated cartoonised images exhibit a natural and smooth transition between the interpolated features while preserving the abstract style of cartoonisation. However, we observe minute variations in ideally constant facial attributes across the interpolated images, which may lead to flickers while processing a sequence of frames.

### Loss In Identity Due To Cartoonisation

Our approach possesses the capability to generate cartoonised versions of diverse identities across various poses. However, to our knowledge, there are no existing models that explicitly cater to this specific task. In light of this, we conduct a comparative analysis by juxtaposing our results against those of other face frontalisation models [6, 21, 28, 18] coupled with the White-Box Cartooniser [25], as shown in Fig. 6. This approach facilitates the cartoonisation of the outputs from these models, allowing us to gauge the impact on identity preservation as well as stylisation. Our comparison revolves around quantifying the loss in identity as a consequence of this cartoonisation process. To calculate the loss in identity-related information of \(I_{x}\) due to cartoonisation, we subtract \(\mathscr{L}_{id}(I_{x},G(w))\) from \(\mathscr{L}_{id}(I_{x},C(G(w)))\). Similarly, for the other frontalisation methods we calculate the loss in identity due to cartoonisation by subtracting \(\mathscr{L}_{id}\) for each image before cartoonisation from \(\mathscr{L}_{id}\) after. Our findings, listed in Table 1, clearly show that our model exhibits superior performance in terms of mitigating the increment in identity loss after the cartoonisation step. This outcome shows the robustness of our approach and its effectiveness in maintaining identity-related information during the style transfer from realistic to cartoon aesthetics.

Figure 6: (a) Comparison of our method with face frontalization models, namely pSp [21], R&R [28], OSTeC [6] and FaceID [18], followed by successive cartoonisation [25]. Notice how our intermediate frontal images are better suited for cartoonisation. (b) Other methods produce shading and texture artifacts, especially around the eyes, cheeks and mouth regions. Our results illustrate better cartoonization characterized by well-defined edges, consistent and smooth shading, and plain textures.

### Different \(W\) Spaces

To validate our hypothesis that our encoder had indeed learned a novel distribution within the StyleGAN latent space, we devised a visualization strategy.
We aimed to visually compare the original \(w\) vectors with the vectors generated by our trained MLP. While we had previously demonstrated (as seen in Sec. 4.2) that the vectors produced by our MLP were better suited for maintaining identity after cartoonization, this time our focus was on confirming the distinctness of the two distributions: the MLP-generated vectors and the original StyleGAN \(w\) vectors. Our approach involved utilizing t-SNE visualizations [24] to depict the latent vectors derived from both distributions. Upon generating these t-SNE plots, discernible differences became apparent. The patterns and clusters formed by the StyleGAN-derived \(w\) vectors exhibited clear separation from those formed by the vectors generated by our MLP. This visual divergence in distribution strongly supports the notion that the latent spaces of StyleGAN-generated images and the images produced by our encoder are not aligned as originally anticipated. This observation underscores the need for further exploration to understand the nuances contributing to this dissimilarity, potentially shedding light on the intricacies of our MLP's adaptation to the StyleGAN latent space.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
 & \multicolumn{2}{c}{Average Identity Loss \((\mathscr{L}_{id})\) \(\downarrow\)} & \\
Method & Before Cartoonisation & After Cartoonisation & Increment \(\downarrow\) \\
\hline
pSp [21] & 0.1548 & **0.5092** & 0.3544 \\
R\&R [28] & 0.3010 & 0.5797 & 0.2787 \\
OSTeC [6] & **0.2326** & 0.5710 & 0.3384 \\
FaceID [18] & 0.3441 & 0.7046 & 0.3605 \\
Ours & 0.4903 & 0.6117 & **0.1213** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Quantitative evaluation of frontalisation and subsequent cartoonisation on StyleGAN generated data with different identities and poses.

Figure 7: t-SNE visualizations of the latent vectors obtained from the two distributions: StyleGAN's original \(W\) space and our mapped \(W\) space.

### Ablations

To show the effectiveness of each component in our loss calculation method, we perform experiments without individual components and report our results (see Fig. 8) and observations. Ablating the reconstruction loss, we found that smaller details that contribute to identity are lost and overall the face looks messy (Fig. 8(a)). Ablating the landmark loss causes the results to always have the same pose (Fig. 8(c)), meaning that enforcing the landmark loss also helps us get better embeddings from the pose encoder that is being fine-tuned. We also tried to use the randomly sampled \(w\) vectors from the StyleGAN latent space as real samples to train a discriminator that tries to classify the real \(w\) from the generated \(w\). The output of the discriminator was used to calculate an adversarial loss term (as mentioned in [18]) which we added to the total loss. We found that the incorporation of the adversarial loss term adds to the inherent complexity of StyleGAN-generated images, and the resulting outputs retained a substantial amount of intricate details that contribute to a heightened sense of realism in the images (as illustrated in Fig. 8(b)). As a result, the final outputs after cartoonisation lacked the desired smooth textures that we set out to achieve. This intricate balance between retaining realism and simplifying textures led us to reevaluate the interplay between adversarial loss and texture preservation.
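A minimal sketch of the visualization used in Sec. 4.3 follows; it assumes two arrays of latent vectors are already available, `ws_gan` sampled through StyleGAN's mapping network and `ws_mlp` produced by our encoder-MLP, and the random placeholders below merely stand in for those arrays.
```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

ws_gan = np.random.randn(1000, 512)   # placeholder: StyleGAN w vectors
ws_mlp = np.random.randn(1000, 512)   # placeholder: MLP-mapped w vectors

# embed both sets jointly so distances between the two clouds are comparable
emb = TSNE(n_components=2, perplexity=30).fit_transform(
    np.vstack([ws_gan, ws_mlp]))
plt.scatter(*emb[:1000].T, s=2, label="StyleGAN $W$")
plt.scatter(*emb[1000:].T, s=2, label="mapped $W$")
plt.legend()
plt.show()
```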
## 5 Conclusion

In this paper we proposed a method to achieve cartoonisation, given an identity and a pose image, by learning separate representations for identity and pose in the \(W\) latent space of StyleGAN in a semi-supervised fashion. We used a readily available StyleGAN pretrained on the FFHQ dataset together with a white-box cartoon generator, which allowed us to bypass the fine-tuning process. We demonstrated through extensive experiments that our encoder is better for encoding identity when cartoonisation is the objective. We leverage the generative abilities of the powerful pretrained generator that we employed, but this involves not only adopting its strengths but also accepting its limitations, which occur because of the bias in the training dataset. Due to the preprocessing methodology employed by StyleGAN, which aligns the orientation of heads in images to eliminate roll angles, yaw rotations become closely linked with translation. Consequently, these characteristics are inherited by the images generated using StyleGAN, including our own generated outputs. Our method also leaves artefacts, usually associated with flickering in GANs, like hallucinating spectacles where there are none and changing hair structure with pose.

The approach presented in this paper is agnostic to the type of style generator we use in the setup; hence, this work can be extended to other styles like face sketches or line drawings. We have also used readily available standard models for calculating embeddings, and we think there is scope for improvement in the quality of the embeddings that we use to train the MLP. This work can also be extended to videos by introducing temporal losses and other flicker suppression mechanisms. StyleGAN is not capable of fully encompassing the entire spectrum of human head poses within its \(W\) latent space. To address this, certain studies [1] have explored the concept of an augmented latent space referred to as \(W+\). This expanded space enables the generator to produce images beyond human identities, encompassing a range of non-human subjects like animals and rooms, which is an exciting line of future work.

Figure 8: Results of ablations. (a) Shows minor noisy details on the face. (b) Adding adversarial loss makes the output more realistic. (c) Removing \(\mathscr{L}_{ind}\) makes all outputs have the same pose.
2301.00187
Collective modes in the anisotropic collisional hot QCD medium at finite chemical potential
We conducted a study on the collective modes within the hot QCD medium generated in heavy-ion collision experiments. These modes, whether real or imaginary, stable or unstable, play a crucial role in shaping the medium's evolution. To gain a deeper understanding, we considered several factors affecting the medium, including anisotropy, interactions among medium particles, and finite baryonic chemical potential. While the first two aspects have been thoroughly examined from various angles, the inclusion of finite chemical potential was previously overlooked. To provide a comprehensive analysis, we integrated these factors. The interactions among medium particles were accounted for using the BGK collisional kernel, while anisotropy and finite chemical potential were incorporated through the distribution functions of quarks, anti-quarks, and gluons. Our findings suggest that the presence of finite chemical potential amplifies the influence of unstable modes, potentially affecting the rapid thermalization of the hot QCD medium. Furthermore, exploring the implications of finite chemical potential in conjunction with other aspects of the created medium is intriguing, particularly in the context of low-energy heavy-ion collision experiments conducted at low temperatures and finite baryon density.
Mohammad Yousuf Jamal
2022-12-31T12:30:35Z
http://arxiv.org/abs/2301.00187v2
# Collective modes in the anisotropic collisional hot QCD medium in the presence of finite chemical potential

###### Abstract

The collective modes, i.e., the collective excitations in the hot QCD medium produced in heavy-ion collision experiments, have been studied. These modes, being real or imaginary and stable or unstable, affect the evolution of the medium. To go deeper into the analysis we incorporate several aspects of the medium, such as anisotropy, the presence of medium particle collisions, and finite baryonic chemical potential. The first two facets have been studied several times from different perspectives, but the inclusion of finite chemical potential is a missing part. Therefore, to have a closer analysis, we combine these effects. The medium particle collisions are included using the BGK collisional kernel, while the anisotropy and the finite chemical potential enter via the quark, anti-quark and gluon distribution functions. Our results indicate that the presence of finite chemical potential enhances the magnitude of the unstable modes, which may affect the fast thermalization of the hot QCD medium. Moreover, it is interesting to see the consequences of finite chemical potential along with the other aspects of the created medium from the viewpoint of low-energy heavy-ion collision experiments that operate at low temperatures and finite baryon density.

**Keywords**: Quark-Gluon-Plasma, Collective excitations, Collective modes, particle distribution functions, BGK collisional kernel, Anisotropic QCD, Gluon self-energy, finite chemical potential.

## I Introduction

In relativistic heavy-ion collision (HIC) experiments we reach energy densities higher than any we know of in the present natural universe, where QCD predicts a new form of matter consisting of an extended volume of interacting quarks, antiquarks, and gluons known as the quark-gluon plasma (QGP) [1; 2]. Such a high energy density phase, however, is thought to have existed a few microseconds after the Big Bang. In this phase, the effective degrees of freedom are quarks, anti-quarks, and gluons rather than hadronic matter. The historic discovery of the \(J/\Psi\), a bound state of the charmed quark and its anti-quark, in the year 1974 proved for the first time the existence of heavy quarks, and indicated that nucleus-nucleus collisions at high energy are very different from a simple superposition of nucleon-nucleon interactions. As it mimics the cosmic Big Bang, the term mini-bang has been coined to describe these interactions, in which nuclear collisions are thought to proceed in a series and cause the formation of matter. The presumed line of events begins with an extremely high-temperature region inhabited by the two nuclei at the moment of collision, as a large fraction of their kinetic energy is converted into heat. At thermal equilibrium it forms a high-temperature plasma medium consisting of quarks, anti-quarks, and gluons that instantly begins to expand and cool, passing down through the critical temperature at which the QGP condenses into a system of mesons and baryons, or simply hadrons. On further expansion, the system reaches its "freeze-out" density, at which the hadrons no longer interact with each other and stream into the detectors. Several theoretical frameworks, such as semi-classical transport (kinetic) theory, the hard-thermal-loop (HTL) approach, and AdS/CFT holographic theories, have been invoked to study the produced matter [3; 4].
Several signatures supporting the formation of the QGP in these experiments have been observed, such as the suppression of heavy quarkonia (colorless and flavorless bound states of a heavy quark and its anti-quark) yields at high transverse momentum, color screening [5; 6; 7; 8], Landau damping [9; 10], energy loss or gain of heavy quarks [11; 12; 13; 14; 15], _etc._ The observables that we see at the detector are affected by the presence of the QGP medium, which in turn is influenced by several plasma aspects. One of the important factors among them is the collective excitations or collective modes produced in the hot QGP medium. It has been explored in previous studies that these modes can be real or imaginary, stable or unstable [16; 17; 18; 19; 20; 21]. Each of them plays a significant role in altering the observed outcomes at the detector. In many-body systems, the spectrum of collective excitations is a fundamental feature, as it carries information regarding the thermodynamic and transport properties of the system, along with the temporal evolution of a non-equilibrium system [22]. The investigation begins with an analogy to Quantum Electrodynamics (QED): the evolving medium produced in HIC could also sustain instabilities similar to the Weibel or filamentation instabilities [23] present in electromagnetic plasma (EMP), which may contribute to accomplishing fast thermalization [24]. In particular, at very high temperature, QCD resembles QED due to the small coupling constant. The most common feature of EMP and QGP is collective behavior, which suggests that the QGP could also have a spectrum of collective modes as rich as that of the EMP [25; 26]. In the present context, these modes can be real or imaginary, and stable or unstable. The stable modes have infinite lifetimes, whereas the damped modes decay quickly. The stable modes have a long-range interaction with the heavy quarks traversing the QGP medium, whereas the unstable modes are only found in anisotropic plasma and may help in the fast thermalization/equilibration of the system, a lesser-known puzzle that attracts curiosity towards investigating the unstable modes; hence the former are less studied and discussed in the literature, though the latter have been discussed widely. In several analyses, the formation, existence, and effects of the collective modes have been discussed considering various plasma aspects such as anisotropy, medium particle collisions, _etc._ [27; 28; 29; 30; 31; 32; 33; 34]. Here, we are interested in a crucial aspect, namely the inclusion and effects of finite baryon density in the study of these modes. Pushing to very high baryon density at low temperatures, _i.e.,_ squeezing nuclei without heating them, takes us into another fascinating region of the QCD phase diagram. High baryon density generally means a surplus of quarks over antiquarks in the hot system, parameterized by the baryon chemical potential \(\mu\). Zero chemical potential corresponds to equal densities of quarks and antiquarks, which is a good approximation for the matter produced at midrapidity in very high energy HIC such as at RHIC and the LHC, and an exceptionally good one for the early Universe, but not at lower energies.
This is essential from the viewpoint of the experimental facilities which operate at moderate temperatures along with finite baryon density, such as the Super Proton Synchrotron (SPS) at CERN, Switzerland, and also upcoming planned experimental facilities such as the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, the Nuclotron-Based Ion Collider Facility (NICA) in Dubna, Russia, and the Japan Proton Accelerator Research Complex (J-PARC) in Tokai, Japan. To have a comprehensive study of the collective modes we shall also include the momentum anisotropy and the medium particle collisions along with the finite chemical potential. The momentum anisotropy and the chemical potential enter via the particle distribution functions, and for the medium particle collisions we consider the Bhatnagar-Gross-Krook (BGK) collisional kernel in the Boltzmann transport equation. The dispersion relations for the modes are acquired from the poles of the (gluon) propagator. Based on their solutions it can be identified whether the modes are stable or unstable and real or imaginary, which we shall discuss in detail in the upcoming sections.

The current manuscript is arranged as follows. In section II, the methodology to obtain the dispersion relations of the modes is discussed. Section III contains a detailed discussion of the results. Finally, section IV is dedicated to the summary and conclusions of the present work along with potential future problems. Natural units are used throughout the text, with \(c=k_{B}=\hbar=1\). We use a bold typeface to display three-vectors and a regular font to indicate four-vectors. The centre dot depicts the four-vector scalar product with \(g_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)\).

## II Methodology

The formation of collective modes in the QGP medium can be understood as follows. Assume the QGP is initially in a homogeneous and stationary state with no net local color charges and no net currents. Next, suppose this state is slightly perturbed by either a random fluctuation or some external field that induces local charges or currents in the plasma, which in turn generate chromoelectric and chromomagnetic fields. These fields interact back with the colored partons that contributed to their formation. If the wavelength of the perturbation surpasses the typical inter-particle spacing, the plasma undergoes a collective motion known as collective modes/excitations. The collective modes can be specified by the nature of the solutions of their dispersion relations (\(\omega(\mathbf{k})\), \(\mathbf{k}\) being the wave vector), which can be obtained from the poles of the gluon propagator. The solutions of the dispersion equations, \(\omega(\mathbf{k})\), can be complex-valued. For \(\Im(\omega(\mathbf{k}))=0\) there exist only stable modes. If \(\Im(\omega(\mathbf{k}))>0\) the amplitude of the mode grows exponentially with time as \(e^{\Im(\omega(\mathbf{k}))t}\), which represents an instability. If \(\Im(\omega(\mathbf{k}))<0\) the mode is damped and its amplitude decays exponentially with time as \(e^{-|\Im(\omega(\mathbf{k}))|t}\). Likewise, if \(-\Im(\omega(\mathbf{k}))\geq|\Re(\omega(\mathbf{k}))|\), the modes are overdamped. To obtain the gluon propagator we first derive the gluon polarization tensor, which contains the medium information such as the medium anisotropy, the finite chemical potential, the effect of medium particle collisions, etc., as we discuss next.
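The classification rules above are straightforward to encode; the following small helper, written as a sketch, assumes that a complex root \(\omega(\mathbf{k})\) of one of the dispersion equations has already been found.
```python
def classify_mode(omega: complex) -> str:
    """Classify a root omega(k) of a dispersion equation by its imaginary part."""
    if omega.imag > 0:
        return "unstable: amplitude grows like exp(+Im(omega) t)"
    if omega.imag == 0:
        return "stable: purely real, infinite lifetime"
    if -omega.imag >= abs(omega.real):
        return "overdamped"
    return "damped: amplitude decays like exp(-|Im(omega)| t)"
```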
### Gluon polarization tensor

The gluon polarization tensor or gluon self-energy (\(\Pi^{\mu\nu}\)) carries the information about the QCD medium, as it represents the interaction term in the effective action of QCD. Before we start the derivation, we specify that our working scale is the soft momentum scale where the plasma aspects first appear, _i.e.,_ \(k\sim gT\ll T\) (\(g\) being the strong coupling constant). At this scale, one finds that the strength of the field fluctuations is \(A^{\mu}\sim O(\sqrt{g}T)\), while the derivatives are \(\partial_{x}\sim O(gT)\). Now, if we examine the field strength tensor,
\[F_{a}^{\mu\nu}=\partial^{\mu}A_{a}^{\nu}-\partial^{\nu}A_{a}^{\mu}-ig\left[A^{\mu},A^{\nu}\right]_{a}, \tag{1}\]
we notice that the non-abelian term is of order \(\mathrm{O}(g^{2})\), which is smaller than the order \(\mathrm{O}(g^{3/2})\) of the first two terms in \(F^{\mu\nu}\), and hence can be neglected. We thus end up with the abelian part of \(F^{\mu\nu}\). In the abelian limit, the linearized semi-classical transport equations for each color channel are given as [16; 28; 31; 35]
\[v^{\mu}\partial_{\mu}\delta f^{i}_{a}(p,X)+g\theta_{i}v_{\mu}F^{\mu\nu}_{a}(X)\partial_{\nu}^{(p)}f^{i}(\mathbf{p})=\mathcal{C}^{i}_{a}(p,X), \tag{2}\]
where the index \(i\) refers to the particle species (quark, anti-quark, and gluon), \(a\), \(b\), and \(c\) are color indices, and \(\theta_{i}\in\{\theta_{g},\theta_{q},\theta_{\bar{q}}\}\) takes the values \(\theta_{g}=\theta_{q}=1\) and \(\theta_{\bar{q}}=-1\). The four-vectors \(x^{\mu}=(t,\mathbf{x})=X\) and \(v^{\mu}=(1,\mathbf{v})=V\) are, respectively, the space-time coordinate and the velocity of the plasma particle, with \(\mathbf{v}=\mathbf{p}/|\mathbf{p}|\). The \(\partial_{\mu}\), \(\partial_{\nu}^{(p)}\) are the partial four-derivatives corresponding to space and momentum, respectively. As mentioned earlier, the collision term \(\mathcal{C}^{i}_{a}(p,X)\) is the BGK kernel [35; 36; 28; 37], which describes the effects of collisions between hard particles in a hot QCD medium,
\[\mathcal{C}^{i}_{a}(p,X)=-\nu\left[f^{i}_{a}(p,X)-\frac{N^{i}_{a}(X)}{N^{i}_{\rm eq}}f^{i}_{\rm eq}(|\mathbf{p}|)\right], \tag{3}\]
where
\[f^{i}_{a}(p,X)=f^{i}(\mathbf{p})+\delta f^{i}_{a}(p,X) \tag{4}\]
are the distribution functions of quarks, anti-quarks and gluons, \(f^{i}(\mathbf{p})\) is the equilibrium part, \(\delta f^{i}_{a}(p,X)\) is the perturbed part of the distribution function, and
\[N^{i}_{a}(X)=\int\frac{d^{3}p}{(2\pi)^{3}}f^{i}_{a}(p,X),\qquad N^{i}_{\rm eq}=\int\frac{d^{3}p}{(2\pi)^{3}}f^{i}_{\rm eq}(|\mathbf{p}|)=\int\frac{d^{3}p}{(2\pi)^{3}}f^{i}(\mathbf{p}) \tag{5}\]
are the particle number and its equilibrium value. The BGK kernel [36] describes the equilibration of the system due to collisions in a time proportional to \(\nu^{-1}\). Here, we consider the collision frequency \(\nu\) to be independent of momentum and particle species. We prefer the BGK kernel because it conserves the particle number instantaneously,
\[\int\frac{d^{3}p}{(2\pi)^{3}}\mathcal{C}^{i}_{a}(p,X)=0. \tag{6}\]
At finite baryon or quark chemical potential \(\mu\), the momentum distributions of the gluon, quark, and anti-quark are given as
\[f_{g}=\frac{\exp[-\beta E_{g}]}{1-\exp[-\beta E_{g}]},\qquad f_{q}=\frac{\exp[-\beta(E_{q}-\mu)]}{1+\exp[-\beta(E_{q}-\mu)]},\qquad f_{\bar{q}}=\frac{\exp[-\beta(E_{q}+\mu)]}{1+\exp[-\beta(E_{q}+\mu)]}. \tag{7}\]
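As a quick numerical check of Eq. (7) for massless partons (\(E=|\mathbf{p}|\)), one can verify that the quark and anti-quark number densities separate once \(\mu>0\), which is the surplus parameterized by the chemical potential; the parameter values in this sketch are purely illustrative.
```python
import numpy as np
from scipy.integrate import quad

T, mu = 0.31, 0.10   # GeV, illustrative values

f_q    = lambda p: np.exp(-(p - mu)/T) / (1 + np.exp(-(p - mu)/T))
f_qbar = lambda p: np.exp(-(p + mu)/T) / (1 + np.exp(-(p + mu)/T))

# number density per degree of freedom: n = (1 / 2 pi^2) \int dp p^2 f(p)
n = lambda f: quad(lambda p: p**2 * f(p), 0.0, 50*T)[0] / (2*np.pi**2)
print(n(f_q), n(f_qbar))   # n_q > n_qbar for mu > 0
```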
To include the momentum anisotropy, we follow the strategy given in Ref. [31]. In this method, the anisotropic distribution function is obtained from the isotropic distribution function given in Eq. (7) by rescaling (stretching and squeezing) one direction of the momentum space as follows,
\[f(\mathbf{p})\equiv f_{\xi}(\mathbf{p})=f(\sqrt{\mathbf{p}^{2}+\xi(\mathbf{p}\cdot\mathbf{\hat{n}})^{2}}), \tag{8}\]
where \(\mathbf{\hat{n}}\) is a unit vector (\(\mathbf{\hat{n}}^{2}=1\)) indicating the direction of the momentum anisotropy and \(\xi\) is the strength of the anisotropy. It corresponds to squeezing of the distribution function in the \(\mathbf{\hat{n}}\) direction when \(\xi>0\) and stretching when \(-1<\xi<0\). Next, the gluon polarization tensor can be obtained from the current induced by the change in the particle distribution functions in Fourier space as
\[\Pi^{\mu\nu}_{ab}(K)=\frac{\partial J^{\mu}_{{\rm ind},a}(K)}{\partial A_{\nu b}(K)}, \tag{9}\]
where the induced current is given by [35; 38; 28; 31]
\[J^{\mu}_{{\rm ind},a}(X)=g\int\frac{d^{3}p}{(2\pi)^{3}}V^{\mu}\{2N_{c}\,\delta f^{g}_{a}(p,X)+N_{f}[\delta f^{q}_{a}(p,X)-\delta f^{\bar{q}}_{a}(p,X)]\}. \tag{10}\]
Rewriting Eq. (2) as
\[v^{\mu}\partial_{\mu}\delta f^{i}_{a}(p,X)+g\theta_{i}v_{\mu}F^{\mu\nu}_{a}(X)\partial_{\nu}^{(p)}f^{i}(\mathbf{p})=\nu\left(f^{i}_{\rm eq}(|\mathbf{p}|)-f^{i}(\mathbf{p})\right)-\nu\,\delta f^{i}_{a}(p,X)+\frac{\nu f^{i}_{\rm eq}(|\mathbf{p}|)}{N^{i}_{\rm eq}}\left(\int\frac{d^{3}p^{\prime}}{(2\pi)^{3}}\delta f^{i}_{a}(p^{\prime},X)\right), \tag{11}\]
and solving Eq. (11) for \(\delta f^{i}_{a}(p,X)\) in Fourier space, we obtain
\[\delta f^{i}_{a}(p,K)=\frac{-ig\theta_{i}v_{\mu}F^{\mu\nu}_{a}(K)\partial^{(p)}_{\nu}f^{i}(\mathbf{p})+i\nu(f^{i}_{\rm eq}(|\mathbf{p}|)-f^{i}(\mathbf{p}))+i\nu f^{i}_{\rm eq}(|\mathbf{p}|)\left(\int\frac{d^{3}p^{\prime}}{(2\pi)^{3}}\delta f^{i}_{a}(p^{\prime},K)\right)/N^{i}_{\rm eq}}{\omega-\mathbf{v}\cdot\mathbf{k}+i\nu}, \tag{12}\]
where \(\delta f^{i}(p,K)\) and \(F^{\mu\nu}(K)\) are the Fourier transforms of \(\delta f^{i}(p,X)\) and \(F^{\mu\nu}(X)\), respectively, and \(K=k^{\mu}=(\omega,\mathbf{k})\). Now, taking the Fourier transform of the induced current from Eq. (10) and employing \(\delta f^{i}_{a}(p,K)\) from Eq.
(12), we get
\[J^{\mu}_{{\rm ind},a}(K) = g^{2}\int\frac{d^{3}p}{(2\pi)^{3}}v^{\mu}\partial^{(p)}_{\nu}f(\mathbf{p})\,\mathcal{M}^{\nu\alpha}(K,V)D^{-1}(K,\mathbf{v},\nu)A_{\alpha a}+gi\nu\big\{2N_{c}\,\mathcal{S}^{g}(K,\nu)+N_{f}\big(\mathcal{S}^{q}(K,\nu)-\mathcal{S}^{\bar{q}}(K,\nu)\big)\big\}\]
\[\qquad+g(i\nu)\int\frac{d\Omega}{4\pi}v^{\mu}D^{-1}(K,\mathbf{v},\nu)\int\frac{d^{3}p^{\prime}}{(2\pi)^{3}}\Big[g\,\partial^{(p^{\prime})}_{\nu}f(\mathbf{p}^{\prime})\mathcal{M}^{\nu\alpha}(K,V^{\prime})D^{-1}(K,\mathbf{v}^{\prime},\nu)\mathcal{W}^{-1}(K,\nu)A_{\alpha a}\]
\[\qquad+i\nu\big(f_{\rm eq}(|\mathbf{p}^{\prime}|)-f(\mathbf{p}^{\prime})\big)D^{-1}(K,\mathbf{v}^{\prime},\nu)\Big]\mathcal{W}^{-1}(K,\nu), \tag{13}\]
where
\[D(K,\mathbf{v},\nu)=\omega+i\nu-\mathbf{k}\cdot\mathbf{v},\qquad\mathcal{M}^{\nu\alpha}(K,V)=g^{\nu\alpha}(\omega-\mathbf{k}\cdot\mathbf{v})-K^{\nu}v^{\alpha},\]
\[f(\mathbf{p})=2N_{c}f^{g}(\mathbf{p})+N_{f}\left[f^{q}(\mathbf{p})+f^{\bar{q}}(\mathbf{p})\right],\qquad f_{\rm eq}(|\mathbf{p}|)=2N_{c}f^{g}_{\rm eq}(|\mathbf{p}|)+N_{f}\left[f^{q}_{\rm eq}(|\mathbf{p}|)+f^{\bar{q}}_{\rm eq}(|\mathbf{p}|)\right],\]
\[\mathcal{W}(K,\nu)=1-i\nu\int\frac{d\Omega}{4\pi}D^{-1}(K,\mathbf{v},\nu),\qquad\mathcal{S}^{i}(K,\nu)=-\int\frac{d^{3}p}{(2\pi)^{3}}v^{\mu}\big[f^{i}(\mathbf{p})-f^{i}_{\rm eq}(|\mathbf{p}|)\big]D^{-1}(K,\mathbf{v},\nu). \tag{14}\]
Using Eq. (9), the gluon polarization tensor \(\Pi^{\mu\nu}_{ab}(K)\) follows directly from the induced current (13). The finite chemical potential enters through the distribution functions (7), and its effect is conveniently summarized by the Debye screening mass, which can be written as
\[m_{D}^{2}=-\frac{g^{2}}{2\pi^{2}}\int_{0}^{\infty}dp\,p^{2}\,\frac{d}{dp}\Big(2N_{c}f^{g}(p)+N_{f}\big[f^{q}(p)+f^{\bar{q}}(p)\big]\Big), \tag{17}\]
where the strong coupling \(g\) runs with the temperature and the chemical potential and \(\Lambda_{T}\ll T\) is the QCD scale parameter. Here, Eq. (17) is fully solved numerically and the variation of the screening mass with respect to the chemical potential at different temperatures is shown in Fig. 1. It has been found that \(m_{D}\) increases with the chemical potential, whereas the effect of different values of temperature is negligible. Next, we shall discuss the derivation of the gluon propagator.

Figure 1: Variation of screening mass with respect to chemical potential at different temperatures.

### Gluon Propagator

To obtain the gluon propagator, we start with Maxwell's equation in Fourier space,
\[-ik_{\nu}F^{\nu\mu}(K)=J^{\mu}_{ind}(K)+J^{\mu}_{ext}(K). \tag{20}\]
Here, \(J^{\mu}_{ext}(K)\) is the external current. The induced current \(J^{\mu}_{ind}(K)\) can be described in terms of the self-energy \(\Pi^{\mu\nu}(K)\) as follows,
\[J^{\mu}_{ind}(K)=\Pi^{\mu\nu}(K)A_{\nu}(K). \tag{21}\]
Eq. (20) can then be written as
\[[K^{2}g^{\mu\nu}-k^{\mu}k^{\nu}+\Pi^{\mu\nu}(K)]A_{\nu}(K)=-J^{\mu}_{ext}(K). \tag{22}\]
The quantity in the square brackets is the inverse of the gluon propagator. Assuming the temporal gauge, the above equation can be written as
\[[\Delta^{-1}(K)]^{ij}E^{j} = [(k^{2}-\omega^{2})\delta^{ij}-k^{i}k^{j}+\Pi^{ij}(K)]E^{j} = i\omega J^{i}_{ext}(k), \tag{23}\]
where \(E^{j}\) is the physical electric field and
\[[\Delta^{-1}(K)]^{ij}=(k^{2}-\omega^{2})\delta^{ij}-k^{i}k^{j}+\Pi^{ij}(K) \tag{24}\]
is the inverse of the propagator. The dispersion equations for the collective modes can be obtained from the poles of the propagator \([\Delta(K)]^{ij}\). Since this is a tensorial equation, it cannot be inverted directly.
To solve it, we first need to construct an analytical form of \(\Pi^{ij}\) using the available symmetric tensors, such as \[\Pi^{ij}=\alpha P^{ij}_{T}+\beta P^{ij}_{L}+\gamma P^{ij}_{n}+\delta P^{ij}_{kn}, \tag{25}\] where \(P^{ij}_{T}=\delta^{ij}-k^{i}k^{j}/k^{2}\), \(P^{ij}_{L}=k^{i}k^{j}/k^{2}\), \(P^{ij}_{n}=\tilde{n}^{i}\tilde{n}^{j}/\tilde{n}^{2}\) and \(P^{ij}_{kn}=k^{i}\tilde{n}^{j}+k^{j}\tilde{n}^{i}\) [33; 31; 40], with \(\tilde{n}^{i}=(\delta^{ij}-\frac{k^{i}k^{j}}{k^{2}})\hat{n}^{j}\) a vector orthogonal to \(k^{i}\), _i.e.,_ \(\tilde{\bf n}\cdot{\bf k}=0\). The structure functions \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\) can be determined by taking the appropriate projections of Eq. (25) as follows, \[\alpha = (P^{ij}_{T}-P^{ij}_{n})\Pi^{ij},\quad\beta=P^{ij}_{L}\Pi^{ij},\] \[\gamma = (2P^{ij}_{n}-P^{ij}_{T})\Pi^{ij},\quad\delta=\frac{1}{2k^{2}\tilde{n}^{2}}P^{ij}_{kn}\Pi^{ij}. \tag{26}\] The structure functions mainly depend on \(k\), \(\omega\), \(\xi\), \(\nu\), and \(\hat{\bf k}\cdot\hat{\bf n}=\cos\theta_{n}\). We confine our analysis to the small anisotropy limit, \(\xi<1\), where all the structure functions can be calculated analytically up to linear order in \(\xi\), given as \[\alpha\left(K\right)=\frac{m_{D}^{2}}{48k}\Bigg{(}24kz^{2}-2k\xi\left(9z^{4}-13z^{2}+4\right)+2i\nu z\left(\xi\left(9z^{2}-7\right)-12\right)-2\xi\cos 2\theta_{n}\Big{(}k\big{(}15z^{4}-19z^{2}+4\big{)}+i\nu z\big{(}13-15z^{2}\big{)}\Big{)}+\big{(}z^{2}-1\big{)}\Big{(}3\xi\left(kz\left(5z^{2}-3\right)+i\nu\left(1-5z^{2}\right)\right)\cos 2\theta_{n}+kz\left(-7\xi+9\xi z^{2}-12\right)+i\nu\left(\xi-9\xi z^{2}+12\right)\Big{)}\ln\frac{z+1}{z-1}\Bigg{)}, \tag{28}\]

Figure 1: Variation of screening mass with respect to chemical potential at different temperatures.

\[\beta\left(K\right)=-\frac{m_{D}^{2}}{k}\ \frac{2(kz-i\nu)^{2}}{\nu\ln\frac{z+1}{z-1}+2ik}\ \Bigg{(}1-\frac{1}{2}z\ln\frac{z+1}{z-1}+\frac{1}{12}\xi\left(1+3\cos 2\theta_{n}\right)\left(2-6z^{2}+\left(3z^{2}-2\right)z\ln\frac{z+1}{z-1}\right)\Bigg{)}, \tag{29}\] \[\gamma\left(K\right)=-\frac{m_{D}^{2}}{12k}\xi\left(k\left(z^{2}-1\right)-i\nu z\right)\left(4-6z^{2}+3\left(z^{2}-1\right)z\ln\frac{z+1}{z-1}\right)\sin^{2}\theta_{n}, \tag{30}\] \[\delta\left(K\right)=\frac{m_{D}^{2}}{24k^{2}}\xi\ \frac{(kz-i\nu)}{2k-i\nu\ln\frac{z+1}{z-1}}\Bigg{(}k\left(88z-96z^{3}\right)+8i\nu\left(6z^{2}-1\right)+\ln\frac{z+1}{z-1}\times\left(12k\left(4z^{4}-5z^{2}+1\right)-10i\nu z-3\,i\nu\left(4z^{4}-5z^{2}+1\right)\ln\frac{z+1}{z-1}\right)\Bigg{)}\cos\theta_{n}, \tag{31}\] where \(z=\frac{\omega+i\nu}{k}\), and \[\ln\frac{z+1}{z-1}=\ln\frac{|z+1|}{|z-1|}+i\bigg{[}\arg\bigg{(}\frac{z+1}{z-1}\bigg{)}+2\pi N\bigg{]}, \tag{32}\] where \(N\) corresponds to the number of Riemann sheets.
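To make the evaluation of these expressions concrete, the following minimal Python sketch transcribes Eq. (28) together with the branch-aware logarithm of Eq. (32). The function names `log_branch` and `alpha` are ours, not from the paper, and `N=0` selects the principal Riemann sheet:

```python
import cmath
import math

def log_branch(z, N=0):
    """ln[(z+1)/(z-1)] on the N-th Riemann sheet, following Eq. (32)."""
    w = (z + 1) / (z - 1)
    return math.log(abs(w)) + 1j * (cmath.phase(w) + 2 * math.pi * N)

def alpha(omega, k, mD, xi, nu, theta_n, N=0):
    """Structure function alpha(K) of Eq. (28), valid to linear order
    in the anisotropy parameter xi; omega may be complex."""
    z = (omega + 1j * nu) / k
    L = log_branch(z, N)
    c2 = math.cos(2 * theta_n)
    return (mD**2 / (48 * k)) * (
        24 * k * z**2
        - 2 * k * xi * (9 * z**4 - 13 * z**2 + 4)
        + 2j * nu * z * (xi * (9 * z**2 - 7) - 12)
        - 2 * xi * c2 * (k * (15 * z**4 - 19 * z**2 + 4)
                         + 1j * nu * z * (13 - 15 * z**2))
        + (z**2 - 1) * (3 * xi * (k * z * (5 * z**2 - 3)
                                  + 1j * nu * (1 - 5 * z**2)) * c2
                        + k * z * (-7 * xi + 9 * xi * z**2 - 12)
                        + 1j * nu * (xi - 9 * xi * z**2 + 12)) * L
    )
```

The functions \(\beta\), \(\gamma\) and \(\delta\) of Eqs. (29)-(31) can be transcribed in exactly the same way.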
Now, we can rewrite Eq. (24) as \[[\Delta^{-1}(K)]^{ij}=(k^{2}-\omega^{2}+\alpha)P_{T}^{ij}+(-\omega^{2}+\beta)P_{L}^{ij}+\gamma P_{n}^{ij}+\delta P_{kn}^{ij}. \tag{33}\] Next, we know that both a tensor and its inverse lie in a space spanned by the same basis vectors or projection operators. Hence, \([\Delta(K)]^{ij}\) can be expanded in the same basis as its inverse, \[[\Delta(K)]^{ij}=aP_{L}^{ij}+bP_{T}^{ij}+cP_{n}^{ij}+dP_{kn}^{ij}. \tag{34}\] Now, using the relation \([\Delta^{-1}(K)]^{ij}[\Delta(K)]^{jl}=\delta^{il}\), we obtain the following expressions for the coefficients \(a\), \(b\), \(c\) and \(d\), \[a=\Delta_{G}(k^{2}-\omega^{2}+\alpha+\gamma),\qquad b=\Delta_{A},\] \[c=\Delta_{G}(\beta-\omega^{2})-\Delta_{A},\qquad d=-\Delta_{G}\delta, \tag{35}\] where \[\Delta_{A}=(k^{2}-\omega^{2}+\alpha)^{-1},\] \[\Delta_{G}=[(k^{2}-\omega^{2}+\alpha+\gamma)(\beta-\omega^{2})-k^{2}\tilde{n}^{2}\delta^{2}]^{-1}. \tag{36}\] Considering the linear-\(\xi\) approximation, we ignore \(\delta^{2}\), as it is of order \(\xi^{2}\), and we have \[\Delta_{G}=[(k^{2}-\omega^{2}+\alpha+\gamma)(\beta-\omega^{2})]^{-1}. \tag{37}\] Eq. (34) can thus be rewritten as \[[\Delta(K)]^{ij}=\Delta_{A}(P_{T}^{ij}-P_{n}^{ij})+\Delta_{G}\big{[}(k^{2}-\omega^{2}+\alpha+\gamma)P_{L}^{ij}+(\beta-\omega^{2})P_{n}^{ij}-\delta P_{kn}^{ij}\big{]}\,. \tag{38}\] This is a simplified structure of the gluon propagator that can readily be used to obtain the poles.

### Modes dispersion relation

As noted earlier, the dispersion relations of the collective modes can be obtained from the poles of the gluon propagator. The poles of Eq. (38) are given as \[\Delta_{A}^{-1}(K)=k^{2}-\omega^{2}+\alpha=0, \tag{39}\] \[\Delta_{G}^{-1}(K)=(k^{2}-\omega^{2}+\alpha+\gamma)(\beta-\omega^{2})=0. \tag{40}\] \(\Delta_{G}^{-1}(K)\) can be further split as \[\Delta_{G}^{-1}(K)=\Delta_{G1}^{-1}(K)\ \Delta_{G2}^{-1}(K)=0. \tag{41}\] Thus, we have two more dispersion equations, \[\Delta_{G1}^{-1}(K)=k^{2}-\omega^{2}+\alpha+\gamma=0, \tag{42}\] \[\Delta_{G2}^{-1}(K)=\beta-\omega^{2}=0. \tag{43}\] Note that we have obtained three dispersion equations, (39), (42), and (43), which we call the A-, G1- and G2-mode dispersion equations, respectively. Based on their solutions \(\omega(k)\), we can identify whether the modes are real or imaginary, stable or unstable, which we shall discuss in the next section.

## III Results and discussions

We shall examine the results by solving the dispersion equations (39), (42) and (43) numerically. We normalize the frequency \(\omega\) and the wave vector \(k\) by the plasma frequency in the absence of chemical potential (\(\omega_{p}=m_{D}(\mu=0)/\sqrt{3}\)). To avoid the bulk of plots that are already available in the literature [19; 20; 27; 28; 31], we shall primarily focus on the effects of a finite chemical potential and, to have a comparative study, we shall also highlight the effects of anisotropy and medium particle collisions in the context of collective modes. We have fixed the temperature to \(T=2T_{c}\), where \(T_{c}=0.155\) GeV, the direction to \(\theta_{n}=\pi/6\), the strength of anisotropy to \(\xi=0.3\), and the collision frequency to \(\nu=0.3\omega_{p}\). We shall start with the discussion of the structure functions given in Eqs. (28) to (31), as they are the important ingredients of the gluon self-energy and the gluon propagator and hence of the collective modes.
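Since the dispersion equations are transcendental in \(\omega\), their solutions must be tracked numerically. The sketch below is our illustration of such a procedure: it assumes the `alpha` transcription shown earlier (with analogous `beta` and `gamma` implementations of Eqs. (29)-(30)) and splits a complex dispersion function into real and imaginary parts for a standard root finder:

```python
import numpy as np
from scipy.optimize import fsolve

def solve_mode(dispersion, omega_guess, k, **params):
    """Find a complex root omega of dispersion(omega, k, ...) = 0."""
    def equations(x):
        val = dispersion(complex(x[0], x[1]), k, **params)
        return [val.real, val.imag]
    re, im = fsolve(equations, [omega_guess.real, omega_guess.imag])
    return complex(re, im)

def disp_A(omega, k, mD, xi, nu, theta_n):
    """A-mode dispersion equation, Eq. (39)."""
    return k**2 - omega**2 + alpha(omega, k, mD, xi, nu, theta_n)

# G1-mode (Eq. (42)): k**2 - omega**2 + alpha + gamma = 0
# G2-mode (Eq. (43)): beta - omega**2 = 0

# Example: track the stable A-mode starting just above the light cone.
mD = 1.0                    # everything in units of m_D
omega_p = mD / np.sqrt(3)   # plasma frequency at mu = 0
for k in np.linspace(0.5, 3.0, 6) * omega_p:
    w = solve_mode(disp_A, complex(1.1 * k, -0.05 * omega_p), k,
                   mD=mD, xi=0.3, nu=0.3 * omega_p, theta_n=np.pi / 6)
    print(f"k/w_p = {k / omega_p:.2f}, omega/w_p = {w / omega_p:.3f}")
```

For the unstable branches one would instead substitute a purely imaginary guess, \(\omega=i\Gamma\), as discussed below.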
We have plotted all the structure functions, _i.e.,_ \(\alpha\), \(\beta\), \(\gamma\) and \(\delta\), with respect to \(\omega\) normalized by \(\omega_{p}\), at different values of the chemical potential but at fixed collision frequency and momentum anisotropy, as shown in Fig. 2. It can be noticed that with the increase in the chemical potential, the magnitudes of both the real and imaginary parts of the structure functions grow. The behavior of all the structure functions changes around \(\omega\sim\omega_{p}\). At very high \(\omega\), only the real parts of the structure functions survive, whereas their imaginary parts vanish. The stable modes are long-range modes. As shown in Fig. 3, the real stable modes are found to increase with \(k\) for the mentioned parameters. The stable G2-mode is highly suppressed, whereas the G1-mode is only marginally suppressed, compared to the stable A-mode. The magnitude of these modes rises with the chemical potential. The imaginary parts emerge only because of the collisional effects. Here, the finite \(\nu\) generates the imaginary modes with \(Im(\omega(k))<0\), _i.e.,_ the modes are damped and their amplitudes decay exponentially with time as \(e^{-|Im(\omega(k))|t}\), without, however, satisfying the over-damped condition. If we consider the collisionless plasma, _i.e.,_ \(\nu=0\), these modes disappear. This fact has already been shown in studies by different groups [19; 27; 31]. These results for the collisionless plasma remain unchanged in our case as well. In Fig. 4 the imaginary stable modes are plotted. The magnitudes of the imaginary A- and G1-modes grow with the chemical potential and are hardly distinguishable from each other. An explanation is that the A- and G1-modes differ only through the structure function \(\gamma\), whose magnitude is very small compared to \(\alpha\) and hence has too little impact to make them distinguishable. The imaginary G2-mode, on the other hand, shows the opposite behavior. This is because the G2-mode depends on \(\beta\), whose magnitude at \(\mu=10\) GeV is very high. A crucial aspect of the current study is the unstable modes, which are shown in Fig. 5 for a weakly squeezed plasma, \(\xi=0.3\), and a non-zero collision frequency, \(\nu=0.3\omega_{p}\), at a fixed angle \(\theta_{n}=\pi/6\) for different values of the chemical potential. We have encountered only two unstable modes, _i.e.,_ the unstable A- and G1-modes, whereas the unstable G2-mode is missing. This is because the unstable modes are the imaginary solutions for \(\omega\) of the dispersion equations (39), (42) and (43); to obtain them we consider \(\omega\) to be purely imaginary, _i.e.,_ \(\omega=i\Gamma\). With this substitution, we find that \(\beta>0\), and consequently \(\Gamma^{2}+\beta=0\) from Eq. (43) can never be satisfied. A similar case was discussed in Refs. [19; 27; 31; 28] in the absence of medium particle collisions. These modes are short-lived, though the presence of the chemical potential enhances their magnitude, which may further support the fast thermalization of the created hot medium. To comprehend the dependence of the instability on the anisotropy, the chemical potential and the medium particle collisions, we obtained \(k_{max}\) and \(\nu_{max}\), at which the unstable modes are completely suppressed, at fixed values of the other parameters. The results are shown in Fig. 6 and Fig. 7.
The values of \(k_{max}\) at which the unstable A- and G1-modes are completely suppressed are obtained by substituting \(\omega=0\) in Eqs. (39) and (42), respectively. In Fig. 6, for the A-mode (left panel) and the G1-mode (right panel), it is shown that \(k_{max}\) grows with the chemical potential, and the presence of anisotropy further enhances it at fixed \(\theta_{n}\) and \(\nu\). The corresponding values for the G1-mode are suppressed compared to the A-mode. This supports our prior discussion that the unstable modes exist up to higher values of \(k\) at a large chemical potential. Following the procedure discussed above, we obtained \(\nu_{max}\), the point where the unstable A- and G1-modes are fully suppressed. In Fig. 7, the behavior of \(\nu_{max}/\omega_{p}\) for the A-mode (left panel) and the G1-mode (right panel) with respect to \(\mu\) is shown for different \(\xi\) (\(\xi=0.2,0.4\) and \(0.6\)) at fixed \(\theta_{n}\) and \(k\). Again, it is found that the maximum value of the collision frequency increases with the chemical potential, and the results are further enhanced by the momentum anisotropy. Moreover, it can be seen that in both cases, _i.e.,_ for \(k_{max}\) and \(\nu_{max}\) in Fig. 6 and Fig. 7 respectively, the results for the G1-mode are suppressed compared to the A-mode. This is mainly because of the behavior of the structure function \(\gamma\), which tends to suppress the G1-mode. This indicates that if we have a high baryon density, the collision frequency among the medium particles also increases, and hence one can infer that at a large baryon density the collision frequency is dominated by the medium particle collisions.

Figure 6: Variation of the maximum values of the propagation vector for the unstable A-mode (left) and G1-mode (right) at \(\nu=0.3\omega_{p}\), \(\theta_{n}=\pi/6\) and \(T=2T_{c}\), where \(T_{c}=0.155\) GeV, at different \(\xi\).

Hence, at a large baryon density, the medium will thermalize faster.

## IV Summary and conclusions

In this manuscript, we attempted to assimilate the finite chemical potential into the study of the collective excitations present in the hot QGP medium and to examine its effect in the presence of medium particle collisions and a small but finite momentum anisotropy. We began with the derivation of the gluon self-energy in terms of structure functions, where we included the finite chemical potential at the level of the particle distributions, along with the momentum anisotropy, introduced straightforwardly by either stretching or squeezing the distribution function in one of the directions (denoted as \(\mathbf{\hat{n}}\)). We solved the Boltzmann equation, after incorporating the medium particle collisions via the BGK collisional kernel, to obtain the change in the distribution function of each species, _viz.,_ quarks, anti-quarks and gluons. Next, using the Yang-Mills equation, or classical Maxwell's equation, we formulated the gluon propagator. From the poles of this propagator, we fetched the dispersion equations for the collective modes. As mentioned in the introduction, we classified the modes as real or imaginary and stable or unstable, based on the solutions of the dispersion equations. In the present formalism, one can analyze the modes in any angular direction by changing \(\theta_{n}\), _i.e.,_ the angle between \(\mathbf{k}\) and \(\mathbf{\hat{n}}\), at different values of the anisotropic strength in the given range \(-1<\xi<1\). Just to avoid the bulk of plots that are available in the prior literature, we did not display the variation of \(\theta_{n}\) and \(\xi\).
Including the anisotropy in the analysis is important; otherwise, no unstable modes appear. Since Ref. [28] showed that the maximum possible value of the collision frequency could be \(0.62~{}m_{D}\), _i.e.,_ \(0.36~{}\omega_{p}\), we fixed the collision frequency to \(\nu=0.3\omega_{p}\). We first investigated the structure functions of the self-energy and found that they depend intensely on the chemical potential. We noticed that at \(\theta_{n}=0\), \(\delta\) disappears, whereas at \(\theta_{n}=\pi/2\), \(\gamma\) vanishes. From the poles of the propagator, we obtained three dispersion equations for the collective modes. Based on their solutions, we found three real and three imaginary stable modes, whereas only two unstable modes were found. The imaginary stable modes disappear in the absence of medium particle collisions, whereas the unstable modes vanish in the absence of anisotropy. Also, the two unstable modes overlap at \(\theta_{n}=\pi/2\), as the structure function \(\gamma\) goes to zero. In the collisionless isotropic medium, we obtain only three real stable modes. It has been found that all kinds of modes rely strongly on the chemical potential, which mainly enhances their magnitude. We also investigated the suppression of the unstable modes through \(k_{max}\) and \(\nu_{max}\) by plotting them against \(\mu\). It has been observed that the maximum values of the wave vector and the collision frequency increase with \(\mu\), and are further enhanced by the anisotropy. In the near future, the current project can be extended through the inclusion of a non-local BGK kernel. Moreover, an essential aspect that is missing in the study of the modes is the study of their group velocity, as these modes are nothing but plasma waves. The non-abelian term of the field strength tensor could also be included to make the analysis more realistic and closer to the experimental observations.

## V Acknowledgements

I would like to thank SERB for providing NPDF (project ID 2022/F/SKD/014). I would also like to acknowledge the people of INDIA for their generous support for the research in fundamental sciences in the country.

Figure 7: Variation of the maximum values of the collision frequency for the unstable A-mode (left) and G1-mode (right) at \(k=\omega_{p}\), \(\theta_{n}=\pi/6\) and \(T=2T_{c}\), where \(T_{c}=0.155\) GeV, at different \(\xi\).
2309.11870
Automated Probe Life-Cycle Management for Monitoring-as-a-Service
Cloud services must be continuously monitored to guarantee that misbehaviors can be timely revealed, compensated, and fixed. While simple applications can be easily monitored and controlled, monitoring non-trivial cloud systems with dynamic behavior requires the operators to be able to rapidly adapt the set of collected indicators. Although the currently available monitoring frameworks are equipped with a rich set of probes to virtually collect any indicator, they do not provide the automation capabilities required to quickly and easily change (i.e., deploy and undeploy) the probes used to monitor a target system. Indeed, changing the collected indicators beyond standard platform-level indicators can be an error-prone and expensive process, which often requires manual intervention. This paper presents a Monitoring-as-a-Service framework that provides the capability to automatically deploy and undeploy arbitrary probes based on a user-provided set of indicators to be collected. The life-cycle of the probes is fully governed by the framework, including the detection and resolution of the erroneous states at deployment time. The framework can be used jointly with existing monitoring technologies, without requiring the adoption of a specific probing technology. We experimented our framework with cloud systems based on containers and virtual machines, obtaining evidence of the efficiency and effectiveness of the proposed solution.
Alessandro Tundo, Marco Mobilio, Oliviero Riganelli, Leonardo Mariani
2023-09-21T08:15:33Z
http://arxiv.org/abs/2309.11870v1
# Automated Probe Life-Cycle Management for Monitoring-as-a-Service

###### Abstract

Cloud services must be continuously monitored to guarantee that misbehaviors can be timely revealed, compensated, and fixed. While simple applications can be easily monitored and controlled, monitoring non-trivial cloud systems with dynamic behavior requires the operators to be able to rapidly adapt the set of collected indicators. Although the currently available monitoring frameworks are equipped with a rich set of probes to virtually collect any indicator, they do not provide the automation capabilities required to quickly and easily change (i.e., deploy and undeploy) the probes used to monitor a target system. Indeed, changing the collected indicators beyond standard platform-level indicators can be an error-prone and expensive process, which often requires manual intervention. This paper presents a Monitoring-as-a-Service framework that provides the capability to _automatically_ deploy and undeploy arbitrary probes based on a user-provided set of indicators to be collected. The life-cycle of the probes is fully governed by the framework, including the detection and resolution of the _erroneous states_ at deployment time. The framework can be used jointly with _existing monitoring technologies_, without requiring the adoption of a specific probing technology. We experimented with our framework on cloud systems based on containers and virtual machines, obtaining evidence of the efficiency and effectiveness of the proposed solution.

Cloud monitoring, Monitoring framework, Monitoring-as-a-Service, Probes deployment

## 1 Introduction

Cloud-based solutions emerged as the key paradigm to support the operation of large-scale distributed systems composed of many interconnected services [1, 2]. Indeed, these systems are characterized by highly _dynamic_ and _complex_ behaviors that include the capability to adapt to changes in the available computational resources, to dynamically update and scale services without interrupting operation, and to be resilient to network problems and software failures. Due to their size and complexity, every element of a cloud system must be _continuously observed_, to timely react to anomalous behaviors, generate alerts, and activate countermeasures [3, 4, 5, 6, 7]. In fact, cloud-based solutions are systematically enriched with monitoring capabilities, either natively offered by cloud platforms (e.g., Kubernetes [8]) or provided by external tools (e.g., Elastic Stack [9] and Prometheus [10]). These monitoring solutions are mainly designed to collect a stable set of indicators over time, and are thus _challenged by scenarios that require rapidly modifying the set of collected indicators_. In contrast, there are many well-known causes of sudden changes to the set of collected indicators. The _goals of the operators_ change with the technical and business objectives of the organization, consequently causing changes in the set of indicators that must be collected. The _software usage patterns_ that emerge from the field continuously evolve, often determining the need to adjust the monitored indicators accordingly. The collected indicators must be adapted to changes in the _workload_, which must be carefully observed to timely reveal any symptom of stress on the services. Moreover, _service updates_ normally require putting in place ad-hoc monitoring capabilities that target the updated services to measure their reliability and timely detect misbehaviors.
Sometimes, the observation of _failures_ generates the need to continuously observe the services that fail often, to prevent new failures and localize the causes of problems; and _dynamically deployed scenarios_ (e.g., to timely react to disasters and emergencies) require quickly deploying new functional services and the corresponding monitoring components. Relevantly, all these factors are _dynamic and cannot be entirely anticipated_. Changing the set of collected indicators often requires changing the set of probes running in the field. However, configuring and deploying new probes, as well as undeploying the existing probes, are _non-trivial and time-consuming activities_. For instance, a tech company running many cloud services needs to collect KPIs at different granularity levels, taking into account both business and technical needs [5]. The needs of managers shall follow business goals and market evolution, while the needs of technicians shall follow QoS goals and software evolution. These needs evolve independently, and simultaneous changes in both business and technology may generate a rapidly increasing number of requests for the operators responsible for configuring the monitoring system. Operators may struggle to adapt their monitoring systems at some point, especially when a large number of services has to be monitored. For this reason, research focused on _increasing the level of automation of probe management_. Figure 1 shows the increasing levels of automation that have been introduced in monitoring systems. Simple _manually configurable_ monitoring systems (Figure 1 (a)), such as Elastic Stack [9] and Prometheus [10], require configuring and deploying probes manually, that is, the life cycle of every component of the monitoring system must be handled manually by developers. Although useful, these monitoring systems are expensive to use in the presence of frequent changes to the set of collected indicators, and adapt poorly to dynamic scenarios. Some probe deployment tasks could be implemented using _general purpose deployment systems_ (Figure 1 (b)), such as Ansible [11] and Puppet [12]. However, these systems are not designed to specifically serve monitoring frameworks, and defining and controlling the deployment strategies would still be entirely on the shoulders of the operator. As discussed later in this paper, general purpose deployment systems can indeed be used as basic building blocks of more sophisticated deployment solutions tailored to cloud monitoring. A simple form of automation present in some systems consists of the _support to autoscaling_ (Figure 1 (c)), that is, probes automatically adapt to a changing number of replicas of a monitored service [13]. This is a useful feature, although limited to a specific scenario, failing to cope with the many changes that must be actuated as a consequence of changes to the set of collected indicators and monitored services. To obtain a sufficient level of flexibility to address the aforementioned characteristics, _Monitoring-as-a-Service_ (MaaS) solutions have been recently studied [13, 14, 15, 16] (Figure 1 (d)-(f)). In fact, MaaS frameworks provide operators with the capability to flexibly decide the set of indicators to be collected, alleviating them from the burden of configuring and handling the life-cycle of the probes. In principle, an operator using a MaaS framework can simply specify the set of indicators that must be collected, while the operational aspects are automated by the framework.

Fig. 1: Monitoring automation.
Unfortunately, in many cases, automation is _limited to the activation of manually pre-deployed probes_ [13] (Figure 1 (d)), that is, probes that have already been installed and configured _manually_. Adding probes to collect new indicators and removing existing probes must still be done manually by operators. A higher degree of automation is provided by some _specific platforms_ (Figure 1 (e)) that natively offer monitoring capabilities (e.g., Kubernetes). These solutions are effective but significantly limit both the range of platforms and indicators that can be used. _So far, there is no general MaaS solution that can be used to collect virtually any KPI on any platform_. Note that a MaaS system that fully handles the life-cycle of probes is _the only solution that can entirely free operators from the burden of handling probe deployment_; in fact, operators would be able to control the monitoring system by simply specifying the set of indicators to be collected. In this paper we address this challenge by presenting a MaaS monitoring framework (Figure 1 (f)) that exploits both a catalog of probes annotated with metadata and access to the API of the platform running the monitored services, to deliver _full MaaS capabilities including error-handling_. This work extends our former tool demo paper [17] by (i) proposing a consolidated and scalable architecture, (ii) introducing error handling capabilities in the monitoring system, (iii) providing a rigorous presentation of the monitoring framework, and (iv) reporting results from an extensive empirical evaluation of the effectiveness of the approach. In a nutshell, this paper makes the following contributions:

* **Automated life-cycle management of probes**. We present a MaaS framework that fully automates the deployment and undeployment of arbitrary probes starting from declarative inputs (i.e., the list of indicators to be collected) entered by the operators.
* **Scalable and independent control processes**. We _rigorously_ describe the probe deployment and undeployment procedures that guarantee the correctness of the resulting behavior. The involved processes are defined as stateless services to guarantee the scalability of the resulting system.
* **Deployment error handling routines**. We present how errors in probe deployment can be autonomously detected and fixed by the MaaS framework. So far, _error handling_ capabilities have received little attention, with approaches mostly focusing on error-free deployment scenarios or relying on the direct intervention of operators, despite the importance of error handling for long-running systems, such as a monitoring system [15, 16].
* **Technology-agnostic control processes**. We detail how the presented framework can be integrated with existing technologies (e.g., probe technology, backend database, and target environment) without the need of using ad-hoc solutions.
* **Empirical evidence**. We empirically study the _effectiveness_ of the framework with both _containers and virtual machines_, the _efficiency of error-handling_, and the _scalability_ for an increasing number of requests.

The paper is organized as follows. Section 2 introduces a running example that is used to illustrate the approach throughout the paper. Section 3 presents the MaaS framework that automates life-cycle management of probes, rigorously describing the control processes that govern probe deployment and undeployment. Section 4 presents error handling.
Section 5 discusses support to frameworks and probe technologies. Section 6 presents empirical evidence. Section 7 discusses related work. Finally, Section 8 provides concluding remarks.

## 2 Running Example

In this section, we introduce a running example that we use to illustrate and exemplify how the proposed MaaS framework works. The example consists of a PostgreSQL instance target-PSQL running as part of a larger cloud system. Such an instance is of interest for two operators: operator op-A and operator op-B. Operator op-A is mostly interested in infrastructure KPIs and is collecting network consumption data related to target-PSQL. Operator op-B is interested in both infrastructure and application KPIs, and is collecting 3 KPIs: network consumption data, CPU consumption data, and database metrics. We refer to this initial configuration as init-conf. In this context, operator op-A may notice anomalous data in the network traffic and decide to collect information about two additional KPIs: CPU consumption and user session data. We refer to the configuration where operator op-A is also collecting these two additional KPIs as the 2-more-KPIs-conf. Finally, operator op-B may lose interest in the PostgreSQL service, for instance because the services maintained by operator op-B may stop using PostgreSQL. In such a case, operator op-B stops collecting any indicator from target-PSQL. We refer to this final configuration as op-B-left-conf. We will refer to these sample scenarios and configurations in the rest of the paper to explain how the set of probes necessary to collect the indicators required by operators op-A and op-B can be adjusted automatically and transparently to the operators.

## 3 MaaS Framework

The proposed MaaS framework exploits a few relevant domain concepts to organize the responsibilities of the components. In the following, we first introduce these key concepts, both informally and rigorously, and then discuss the framework architecture. A **Target** represents an application, a system, or a resource that can be monitored by a monitoring framework. It is characterized by the hosting platform (e.g., Microsoft Azure, Kubernetes), a unique identifier within the hosting platform (e.g., the Kubernetes Deployment name or the VM name in Microsoft Azure), and its execution environment (e.g., virtual machine or container). In our running example, the target is a PostgreSQL instance that we assume can be identified with the label target-PSQL in both Kubernetes (as deployment name) and Microsoft Azure (as VM name). A **Probe** represents a _deployable artifact_ that can be used to collect indicators from targets in different environments. Probes are annotated with metadata that describe how they can be deployed and configured. More rigorously, a probe \(p\) is a tuple \(p=(I,meta,artifact)\), where \(I=\{i_{1},\ldots,i_{n}\}\) is a set of indicators that can be collected with the probe, _meta_ is a set of key-value pairs that represent the metadata associated with the probe, and _artifact_ is a reference to the artifacts that implement the actual software probe. We use the notation \(p^{I}\), \(p^{meta}\) and \(p^{artifact}\) to refer to the individual components of a probe \(p\). A **Monitoring Claim** specifies the indicators that an operator may want to collect for a specific target. More rigorously, a monitoring claim \(mc\) is a tuple \(mc=(I,op,t)\) where \(I=\{i_{1},\ldots,i_{k}\}\) is the set of indicators to be collected from the target \(t\) for the operator \(op\).
The claim is intended as a complete specification for the specified target: if the operator is already monitoring an indicator \(i\) for a given target \(t\) and the newly submitted claim does not include that indicator, the monitoring system will stop collecting \(i\) from \(t\). For example, operator op-A shall submit a monitoring claim \((\{\text{network\_consumption},\text{cpu\_consumption},\text{user\_session\_data}\},\text{op-A},\text{target-PSQL})\) to start collecting CPU consumption and user session data, in addition to network consumption. Similarly, operator op-B shall submit a monitoring claim \((\{\},\text{op-B},\text{target-PSQL})\) to stop collecting data. A **Monitoring Request** is a collection of Monitoring Claims submitted with a single request by an operator. More rigorously, a monitoring request \(mr\) submitted by operator \(op\) is a set \(mr=\{mc_{1},\ldots,mc_{m}\}\) where \(mc_{i}=(I_{i},op,t_{i})\). A **Monitoring Unit** is an execution unit (e.g., a virtual machine or a container) that runs one or more probes. When needed, our monitoring framework dynamically creates and destroys monitoring units to collect the indicators specified by the operators in their monitoring claims. A monitoring unit is also characterized by a hosting platform, which represents the environment where the unit is executed, and a configuration, which captures how the probes in the monitoring unit are configured. More rigorously, a monitoring unit \(mu\) is a tuple \(mu=(host,mus,C)\), where \(host\) identifies the platform that provides the unit, \(mus\) indicates the strategy used to configure the unit (i.e., single-probe or multi-probe), and \(C\) is the configuration of the unit, which consists of zero or more _probe configurations_, depending on the number of probes installed. Specifically, a probe configuration \(c\in C\) is a tuple \(c=(p,I,op)\), where \(p\) is a probe, \(I\subseteq p^{I}\) represents the set of indicators that \(p\) is configured to collect, and \(op\) is the operator who asked for the probe configuration \(c\). We use the notation \(mu_{P}\) to refer to the set of probes in the current configuration of \(mu\), that is, \(mu_{P}=\{p\,|\,\exists(p,\cdot,\cdot)\in C\}\)1. Finally, given a probe configuration \((p,I,op)\), we use the notation \(I(p)\) to refer to the indicators that \(p\) is configured to monitor, that is, \(I(p)=I\).

Footnote 1: We use the symbol \(\cdot\) when any value is allowed in a tuple.

Our framework implements two **monitoring unit configuration strategies**: the multi-probe monitoring unit and the single-probe monitoring unit. The _multi-probe monitoring unit strategy_ uses one monitoring unit (e.g., a virtual machine) per monitored target (e.g., an instance of PostgreSQL), hosting in the unit all the probes that share the same target (e.g., every probe that collects indicators about PostgreSQL). This strategy is well suited for virtual machines, which are heavyweight units that typically run multiple processes. The _single-probe monitoring unit strategy_ uses one monitoring unit (e.g., a container) per deployed probe (e.g., a Metricbeat probe for CPU consumption). This strategy is well suited for containers, which are lightweight units that preferably run a single process.
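Since the rest of this section manipulates these tuples repeatedly, the following Python sketch captures the domain model with plain dataclasses. It is our illustration, not code from the prototype, and all the identifiers are ours:

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

@dataclass(frozen=True)
class Target:
    """A monitored application/resource within a hosting platform."""
    platform: str     # e.g., "kubernetes" or "azure"
    identifier: str   # e.g., the Deployment or VM name, "target-PSQL"

@dataclass(frozen=True)
class Probe:
    """p = (I, meta, artifact)."""
    indicators: FrozenSet[str]          # p^I, collectable indicators
    meta: Tuple[Tuple[str, str], ...]   # p^meta, key-value metadata
    artifact: str                       # p^artifact, deployable reference

@dataclass(frozen=True)
class MonitoringClaim:
    """mc = (I, op, t): the full set of indicators op wants from t."""
    indicators: FrozenSet[str]
    operator: str
    target: Target

@dataclass(frozen=True)
class ProbeConfig:
    """c = (p, I, op): p configured to collect I on behalf of op."""
    probe: Probe
    indicators: FrozenSet[str]
    operator: str

@dataclass
class MonitoringUnit:
    """mu = (host, mus, C): an execution unit running configured probes."""
    host: str                                  # hosting platform
    mus: str                                   # "multi-probe" | "single-probe"
    config: List[ProbeConfig] = field(default_factory=list)  # C

# A monitoring request is simply a set of claims of one operator:
MonitoringRequest = FrozenSet[MonitoringClaim]
```

For instance, the claim of operator op-A above becomes `MonitoringClaim(frozenset({"network_consumption", "cpu_consumption", "user_session_data"}), "op-A", Target("azure", "target-PSQL"))`.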
For instance, the initial configuration of the running example, where virtual machines running on Microsoft Azure are used, implies the existence of a single monitoring unit \(mu=(azure,multi\text{-}probe,C)\), running the probe \(p_{net}\), which serves both operators op-A and op-B, and the probes \(p_{cpu}\) and \(p_{db}\), which both serve operator op-B. Consequently, \(C\) consists of the following four probe configurations: \((p_{net},\textsc{network\_consumption},\textsc{op-A})\), \((p_{net},\textsc{network\_consumption},\textsc{op-B})\), \((p_{cpu},\textsc{cpu\_consumption},\textsc{op-B})\), and \((p_{db},\textsc{db\_metrics},\textsc{op-B})\). Note that the monitoring units are created to have the right visibility of the target to be monitored. In fact, a virtual machine monitoring unit can be either the same virtual machine running the monitored service or a separate virtual machine with probes that query an interface exposed by the monitored service (e.g., using SNMP [18]). On the other hand, a container monitoring unit can be created as a sidecar of the container running the target service [19], to have extensive visibility of the monitored service, or as a standalone container running in the same node as the target. Figure 2 shows the proposed **monitoring framework**, which consists of four main stateless services and three repositories. The four services are (i) an _API Service_, which offers a gateway to access and update state information about the monitoring system, (ii) a _Monitoring Claim Controller_, which is responsible for handling the life-cycle of every monitoring claim, (iii) a _Monitoring Unit Controller_, which is responsible for handling the life-cycle of every monitoring unit, and (iv) a _Cloud Bridge_, which exploits a plug-in based architecture to interact with different cloud providers and platforms, actuating the operations decided by the other services. The three repositories consist of (i) a repository of _monitoring claims_ submitted by operators, (ii) a repository with the created _monitoring units_ and their configurations, and (iii) a _probe catalog_ with all probes and deployable artifacts. _Automated life-cycle management_ of the probes is provided by the two controllers, which collaborate to manage the set of monitoring units and the deployed probes based on the requests produced by the operators, requests that only include the information about the indicators to be collected. The stateless nature of the controllers guarantees _scalability_, as long as sufficient resources are provided to the monitoring system. The controllers also track the status of the monitoring units to _handle and recover from errors_. Finally, the framework is built with a plug-in based architecture that allows _multiple cloud platforms_ to be integrated, as long as they provide a management API. In the rest of this section, we rigorously describe how the components, and the controllers in particular, behave.

### _Repositories_

The **Probe Catalog** is a repository \(PC=\{p_{1},\ldots,p_{n}\}\) where each \(p_{i}\) is a probe. We assume the probe catalog is organized in such a way that there is a unique artifact that can be used in a given context, that is, given an indicator \(i\) and the execution constraints (e.g., the host environment that executes the probe, the time-series database that must be used to store the data, etc.), there is a unique probe \(p\) that can be used to collect \(i\) in the target environment.
We do not detail here the execution constraints that can be used to identify the probe, but these are represented in the metadata associated with the available artifacts and matched for equality (or inclusion, in the case of lists) by the framework to select the probes. Complex matching procedures can also be implemented in the catalog if needed, such as the possibility to have multiple probes suitable for the same context, together with a decision procedure that can choose among them. Defining algorithms to choose among multiple probe artifacts is however out of the scope of the presented work, and we simply require the operator to populate the probe catalog with one usable artifact per execution context that must be addressed with the architecture.

Fig. 2: Monitoring Framework Architecture.

To illustrate the matching procedure, consider the case of op-A asking to collect user session data from PostgreSQL. Let us assume the system considered in the running example runs on Kubernetes and that Elasticsearch is used as time-series database. In this context, the monitoring system will check the probe catalog looking for a probe whose metadata specify the capability to (a) collect user session data from PostgreSQL, (b) run within containers, and (c) store data in Elasticsearch. The monitoring system is configured with information about the environment (e.g., how to access the Elasticsearch and Kubernetes APIs) to be able to configure the probes once deployed. If a matching entry is found, the corresponding artifacts are selected and then deployed in a container, as illustrated later in this section. Otherwise, the request is aborted and the Probe Catalog has to be extended to support new probes, as described in Section 5.
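The equality/inclusion matching just described can be sketched as follows. This is our illustration on the dataclasses sketched earlier, and the constraint keys are hypothetical values that would be derived from the framework configuration:

```python
def _matches(value, wanted):
    """Inclusion for list-valued metadata, equality otherwise."""
    if isinstance(value, (list, tuple, set, frozenset)):
        return wanted in value
    return value == wanted

def match_probe(catalog, indicator, constraints):
    """Return the unique probe able to collect `indicator` under the
    given execution constraints, or None if the request must be aborted."""
    for probe in catalog:
        if indicator not in probe.indicators:
            continue
        meta = dict(probe.meta)
        if all(_matches(meta.get(k), v) for k, v in constraints.items()):
            return probe
    return None

# Hypothetical constraints for the example above: a containerized probe
# for PostgreSQL user session data that writes to Elasticsearch.
constraints = {"environment": "container",
               "target_type": "postgresql",
               "output": "elasticsearch"}
# probe = match_probe(probe_catalog, "user_session_data", constraints)
```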
The **Monitoring Claims Repository** stores the monitoring claims and tracks their statuses while they are created, processed, and updated. Since operators can update their claims about a given target, the repository can include at most one monitoring claim for a given operator-target pair. For example, an operator may submit a first monitoring claim to collect network consumption for a running instance of PostgreSQL (corresponding to the init-conf in our running example), and later update the monitoring claim asking to collect two more indicators, CPU consumption and user session data, still from PostgreSQL (corresponding to the 2-more-KPIs-conf in our running example). The **Monitoring Units Repository** tracks the status of the monitoring units and their configurations. In particular, the Monitoring Units Repository stores both the _current configuration_ of a monitoring unit, which reflects the status of the software monitoring unit, and the _desired configuration_ of a monitoring unit, which reflects the configuration that must be reached based on the received requests, supporting the controllers in the process of adapting the configurations. To conveniently work with the configurations required by operators, we define the operator \(|_{op}\), which discards every entry related to \(op\) from a configuration. More formally, given a configuration \(C\), we define \(C|_{op}=\{c_{i}\ |\ c_{i}\in C\) and \(c_{i}=(p_{i},I_{i},op_{i})\) with \(op_{i}\neq op\}\). A Monitoring Units Repository _MUR_ stores tuples \((t,mu,dc)\) that associate a target \(t\) and a monitoring unit \(mu\) running probes that collect data from \(t\) with the desired configuration \(dc\) of the unit. Given a monitoring unit \(mu=(host,mus,C)\), we use the notation \(conf_{c}(mu)\) to refer to its current configuration, that is, \(conf_{c}(mu)=C\). We instead use the notation \(conf_{d}(mu)\) to refer to the desired configuration of a monitoring unit \(mu\), that is, \(conf_{d}(mu)=dc\). The level of alignment between \(conf_{c}(mu)\) and \(conf_{d}(mu)\) indicates how much the actual monitoring unit (i.e., the unit running in the cloud) matches the monitoring claims submitted by the operators. If \(conf_{c}(mu)=conf_{d}(mu)\), the current and desired monitoring configurations are the same, thus the monitoring unit \(mu\) is up to date and perfectly aligned with the existing monitoring claims. Otherwise, if \(conf_{c}(mu)\neq conf_{d}(mu)\), the monitoring unit \(mu\) needs to be modified to reach the desired configuration. If _MUR_ is handled according to the multi-probe monitoring unit strategy, given a target \(t\), there is at most one \(mu\) such that \((t,mu,\cdot)\in MUR\) (i.e., one monitoring unit running multiple probes per target). If _MUR_ is handled according to the single-probe monitoring unit strategy, given a target \(t\) and a probe \(p\), there is at most one \((t,mu,C)\in MUR\) with \((p,\cdot,\cdot)\in C\), but there might exist multiple monitoring units running different probes associated with the same target.

### _API Service_

The API Service provides two APIs: a public API for external clients and a private API for internal use only. The _public API_ is used by operators to submit monitoring requests, receive information about the status of their requests, extract the list of the currently available Targets, and upload new probes to the Probe Catalog. The _private API_ is used by the Monitoring Claim Controller and the Monitoring Unit Controller to handle (i.e., to read and update) the status information about both the monitoring claims and the monitoring units, as described in Sections 3.3 and 3.4. Note that the API Service is the only service that can directly access the three repositories. The presence of a single entry point for accessing the persistent data drastically reduces the risk of introducing data inconsistencies. To avoid introducing a single point of failure in the architecture, we designed the API Service as a stateless service that can be instantiated in multiple replicas. The API Service is accessed through synchronous API calls, to guarantee that requests are processed as quickly as possible, but status updates are delivered through a message bus, since serving a request is not always an immediate operation.
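The restriction operator and the alignment check between current and desired configurations reduce to a few lines of Python; again, this is our illustration on the dataclasses sketched earlier:

```python
def restrict(config, op):
    """C|_op: drop every probe configuration owned by operator op."""
    return [c for c in config if c.operator != op]

def aligned(current, desired):
    """True when conf_c(mu) = conf_d(mu), i.e., the unit is up to date.
    Configurations are compared as sets, order being irrelevant."""
    return set(current) == set(desired)
```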
### _Monitoring Claim Controller_

```
Require: a monitoring claim mc = (I, op, t) to be processed
Require: mus, the monitoring unit strategy
Ensure: desired configurations are updated according to mc
 1: P <- APIService.getProbeConfigs(I, t)
 2: if P = {} then return
 3: if mus = multi-probe then
 4:   UpdateConfUnit(P, op, t, mus)
 5: else if mus = single-probe then
 6:   for p_conf in P do
 7:     UpdateConfUnit({p_conf}, op, t, mus)
 8: procedure UpdateConfUnit(set of probe configurations P, operator op, target t, monitoring unit strategy mus)
 9:   unit <- APIService.getMonitoringUnit(t, mus, P)
10:   if unit = {} then
11:     unit <- APIService.createEmptyMonitoringUnit(t)
12:   APIService.updateDesiredConf(unit, conf_d(unit)|_op U P)
```

**Algorithm 1** Monitoring Claim Controller

The main responsibility of the Monitoring Claim Controller is to manage the life-cycle of the submitted monitoring claims by associating the desired configurations, derived from the received claims, with the monitoring units. In particular, every time a monitoring request is received by the API Service, the API Service stores the monitoring claims included in the request in the dedicated repository and sends a status update message to the Monitoring Claim Controller, which will incrementally process them. Since controllers are stateless, the capability to process monitoring claims in parallel can be increased arbitrarily, based on the available resources, by instantiating multiple Monitoring Claim Controllers. Algorithm 1 shows in detail the operations performed by the Monitoring Claim Controller every time a monitoring claim is processed. When a monitoring claim \(mc=(I,op,t)\) of an operator \(op\) is processed, the controller first identifies the set of probes necessary to collect the indicators specified in the request, together with their configuration (line 1). This set is computed by the API Service based on the probe metadata. The monitoring units are reconfigured differently depending on the monitoring strategy. If the _multi-probe monitoring unit strategy_ is used, the UpdateConfUnit procedure is invoked to associate a single monitoring unit with a desired configuration that includes all the probes (line 4). If the _single-probe monitoring unit strategy_ is used, the individual probe configurations are extracted and then used to update the configurations of different monitoring units (lines 6-7). The way a set of probe configurations is associated with a monitoring unit is defined in the UpdateConfUnit procedure. To identify the monitoring unit that must be updated, the controller queries, through the API Service, the monitoring units repository for an existing monitoring unit (line 9). If the _multi-probe monitoring unit strategy_ is used, units can conveniently run multiple probes for the same target. In this case, the service looks for any monitoring unit created to observe \(t\), that is, it looks for an entry \(unit=(t,multi\text{-}probe,\cdot)\), where \(t\) is the target reported in the monitoring claim. If the _single-probe monitoring unit strategy_ is used, \(P\) can only include a single probe, and the API Service looks for a monitoring unit that is already using the selected probe to monitor the target \(t\), that is, it looks for an entry \(unit=(t,single\text{-}probe,(p,\cdot,\cdot))\).
In both cases, if the unit does not exist, a new unit with an empty desired configuration is created for the target \(t\) (line 11). Finally, the existing entry (i.e., the existing desired configuration) is updated by replacing the probes associated with operator \(op\) with the new ones specified in \(P\) (if the existing configuration is empty, \(P\) is simply used). Let us consider the running example, with operator op-A asking to collect two more indicators (CPU consumption and user session data) from PostgreSQL. If we assume the monitoring framework is configured to use the single-probe monitoring strategy, the submitted monitoring claim would be processed as follows. The access to the probe metadata would reveal the availability of two different probes that can be configured to collect the two indicators: \(p_{cpu}\), which can monitor CPU consumption using a Metricbeat probe, and \(p_{session}\), which can use a custom probe to collect data about user sessions. That is, \(P=\{(p_{cpu},\textsc{cpu\_consumption},\textsc{op-A}),(p_{session},\textsc{user\_session\_data},\textsc{op-A})\}\) at line 1. Since \(mus\)=_single-probe_, the UpdateConfUnit procedure is invoked twice, once for each probe. The first invocation with probe \(p_{cpu}\) leads to the identification of a running unit that is already collecting cpu_consumption from PostgreSQL for op-B (line 9). The current configuration of the retrieved unit is \(\{(p_{cpu},\textsc{cpu\_consumption},\textsc{op-B})\}\). The framework then updates the desired configuration of the unit by replacing the probe configurations of operator op-A (none in this case) with the input configuration \((p_{cpu},\textsc{cpu\_consumption},\textsc{op-A})\), finally obtaining the desired configuration \(\{(p_{cpu},\textsc{cpu\_consumption},\textsc{op-B}),(p_{cpu},\textsc{cpu\_consumption},\textsc{op-A})\}\). The second invocation with probe \(p_{session}\) returns no unit that is already running that probe. Thus, a new unit is created (line 11), and the desired configuration \(\{(p_{session},\textsc{user\_session\_data},\textsc{op-A})\}\) is associated with the unit. The time complexity of Algorithm 1 is linear with respect to the number of selected indicators (\(I\)) and the number of matched probes (\(P\)), that is, \(O(|I|+|P|)\).
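A direct Python rendition of Algorithm 1 may clarify the control flow; `api` stands in for the API Service facade, and its method names mirror the pseudocode (this is our sketch, not the prototype code):

```python
def process_claim(api, mc, mus):
    """Monitoring Claim Controller: align desired configurations with a
    freshly received monitoring claim mc = (I, op, t), Algorithm 1."""
    P = api.get_probe_configs(mc.indicators, mc.target)        # line 1
    if not P:                                                  # line 2
        return
    if mus == "multi-probe":                                   # lines 3-4
        update_conf_unit(api, P, mc.operator, mc.target, mus)
    elif mus == "single-probe":                                # lines 5-7
        for p_conf in P:
            update_conf_unit(api, [p_conf], mc.operator, mc.target, mus)

def update_conf_unit(api, P, op, target, mus):
    """Lines 8-12: bind the probe configurations to a monitoring unit."""
    unit = api.get_monitoring_unit(target, mus, P)             # line 9
    if unit is None:                                           # lines 10-11
        unit = api.create_empty_monitoring_unit(target)
    desired = [c for c in api.get_desired_conf(unit)
               if c.operator != op] + list(P)                  # conf_d|_op U P
    api.update_desired_conf(unit, desired)                     # line 12
```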
### _Monitoring Unit Controller_

```
Require: a monitoring unit mu
Require: its current configuration conf_c(mu) = {(p, I, op)}
Require: its desired configuration conf_d(mu) = {(p', I', op')}
Ensure: the unit is updated according to the desired configuration
 1: if conf_d(mu) = {} then dismiss mu
 2: P_add <- {p in conf_d(mu)_P \ conf_c(mu)_P}
 3: P_update <- {p in conf_d(mu)_P and conf_c(mu)_P s.t. I'(p) != I(p)}
 4: P_drop <- {p in conf_c(mu)_P \ conf_d(mu)_P}
 5: if P_add U P_update U P_drop != {} then
 6:   res <- Bridge.doChanges(mu, P_add, P_update, P_drop)
 7: else
 8:   res <- {}
 9: UpdateConfiguration(mu, res)   // If no error, conf_c(mu) is updated with conf_d(mu)
```

**Algorithm 2** Monitoring Unit Controller

The main responsibility of the Monitoring Unit Controller is to manage the life-cycle of the monitoring units according to the desired configurations generated by the Monitoring Claim Controller. In particular, the Monitoring Unit Controller runs a control-loop that continuously checks the monitoring units for changes to be actuated, as a consequence of a misalignment between the current and the desired configurations. Multiple monitoring unit controllers can be active at the same time, but two monitoring unit controllers cannot act simultaneously on the same monitoring unit, to prevent any potentially erroneous concurrent change that would introduce inconsistencies in the process. The operations performed by a Monitoring Unit Controller are shown in Algorithm 2. It first checks if the desired configuration is empty; in such a case the entire monitoring unit is dismissed (line 1). This is an important step to avoid running phantom monitoring units with no running probes. It then computes the diff between the current and desired configurations, identifying the probes to be added (line 2), the probes to be reconfigured to collect a different set of indicators (line 3), and the probes to be dropped (line 4). If any of these sets is non-empty, the Cloud Bridge receives the probe configurations corresponding to the changes that must be actuated (line 6). Passing all the changes to be actuated at once enables the Cloud Bridge to potentially optimize how these changes are actuated. The Cloud Bridge returns a result that specifies the errors experienced during the update process, if any. This information is used to update the current and desired configurations. In case no error is experienced, the desired configuration simply replaces the current configuration (line 9). Otherwise, the update process takes the errors into consideration. We describe error handling in Section 4. Let us consider the case of the two desired configurations generated by operator op-A when asking to collect two more indicators (CPU consumption and user session data) from PostgreSQL with the single-probe monitoring unit strategy, as discussed at the end of Section 3.3. The desired configuration related to the already deployed probe \(p_{cpu}\) results in no changes to be actuated (\(P_{add}\cup P_{update}\cup P_{drop}=\varnothing\)), since the existing probe will simply be shared between the two operators (this is achieved by only updating the configurations in UpdateConfiguration, without touching the running probes). The desired configuration related to the new probe \(p_{session}\) to be deployed results instead in a probe to be added (\(P_{add}\neq\varnothing\)). The time complexity of Algorithm 2 is linear with respect to the number of probes to add (\(|P_{add}|\)), update (\(|P_{update}|\)), and drop (\(|P_{drop}|\)) while configuring a monitoring unit. That is, if \(p_{changes}=|P_{add}|+|P_{update}|+|P_{drop}|\), the complexity of Algorithm 2 is \(O(p_{changes})\).
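The diff computed at lines 2-4 of Algorithm 2 can be sketched as follows; merging the indicator sets of a probe over all operators is our reading of \(I(p)\) when a probe is shared (the sketch assumes hashable probe objects, such as the frozen dataclasses above):

```python
def indicators_of(config, probe):
    """I(p): the indicators probe p must collect, merged over operators."""
    merged = set()
    for c in config:
        if c.probe == probe:
            merged |= c.indicators
    return frozenset(merged)

def diff_configurations(current, desired):
    """Lines 2-4 of Algorithm 2: probes to add, to reconfigure, to drop."""
    cur_probes = {c.probe for c in current}
    des_probes = {c.probe for c in desired}
    p_add = des_probes - cur_probes
    p_drop = cur_probes - des_probes
    p_update = {p for p in cur_probes & des_probes
                if indicators_of(current, p) != indicators_of(desired, p)}
    return p_add, p_update, p_drop
```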
### _Cloud Bridge_

The main responsibility of the Cloud Bridge is to actuate plans on cloud systems using their management APIs. The Cloud Bridge also provides information about the targets and the deployment status of the probe artifacts. In particular, the Cloud Bridge exploits a plug-in based architecture that can be extended to support additional cloud systems. A plug-in for a target environment (e.g., Kubernetes) is used to map each change requested by the controllers into a concrete command for the specific management API (e.g., the Kubernetes API) or for the specific configuration management tool used to interact with the platform (e.g., Ansible [11]). This approach encapsulates the technological details inside the plug-in, keeping the whole control plane of the framework technology-agnostic. Once all the changes have been actuated, the list of probes resulting in an erroneous state is sent back to the controller.
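A plug-in can be thought of as an implementation of a small interface; the sketch below is hypothetical (the actual base interface of the prototype may differ), but it captures the three responsibilities described above:

```python
from abc import ABC, abstractmethod

class CloudBridgePlugin(ABC):
    """Hypothetical base interface for Cloud Bridge plug-ins; concrete
    subclasses (e.g., for Kubernetes or Azure Compute) translate each
    requested change into platform-specific API calls."""

    @abstractmethod
    def do_changes(self, unit, p_add, p_update, p_drop):
        """Actuate the changes on the platform and return the probe
        configurations that ended up in an erroneous state."""

    @abstractmethod
    def list_targets(self):
        """Discover the targets available on the platform."""

    @abstractmethod
    def clean(self, unit):
        """Recreate a compromised unit (e.g., destroy and replace the
        container backing it)."""
```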
In addition to referring to a monitoring unit \(mu\) and the sets of probe configurations that resulted in soft (\(Pconf_{soft}\)) and hard (\(Pconf_{hard}\)) errors, the procedure maintains two data structures. The \(RetryTable\) is a table that stores, for every monitoring unit, the number of consecutive soft failures generated by each probe configuration. The \(BlackList\) data structure stores, for each monitoring unit, the list of probe configurations that generated hard failures. The idea is that soft failures are not harmful for the monitoring unit, and thus the failed changes can be safely retried. Instead, hard failures introduce dependability problems, and thus the failed changes should not be retried. Operators can reset these tables to allow again certain operations (e.g., after a compatibility problem in a probe has been fixed).

In practice, the error handling routine first increases the number of retries for the probe configurations that caused soft failures (line 2) and adds to the blacklist the probe configurations that caused hard errors (line 4). When the number of retries exceeds an operator-defined threshold, the configuration is blacklisted. If at least one hard error has been detected, the unit is _dirty_ and thus the bridge is asked to clean it. This operation depends on the target environment and the implementation of the plug-in used in the Cloud Bridge. For instance, in our implementation for containers, the bridge destroys the existing container and creates a new monitoring unit to replace it. The current configuration of the newly created monitoring unit is consequently set to the empty configuration. If no hard error is detected, the current configuration is updated by adding all the configurations that generated no errors. In all cases, the desired configuration stays unchanged. This process may lead to three distinct situations:

* _the current and desired configurations are aligned_: no changes will be performed on the monitoring unit in the future, unless a new request is submitted by an operator;
* _the current and desired configurations differ only by some blacklisted configurations_: in this case again there is nothing to be done. Note that, although for simplicity we have not used the blacklist when computing the sets of probes to be added, reconfigured, and deleted, in reality the Monitoring Unit Controller discards the configurations that appear in the BlackList data structure when computing them (Algorithm 2, lines 2-4);
* _there are configurations that must be retried_: in such a case the desired and current configurations do not match, and the monitoring unit controller will process them again in the next iteration of its control-loop, retrying the failed probe configurations.

The time complexity of Algorithm 3 is linear with respect to the number of probe changes and the number of errors that occurred while configuring the monitoring unit. In particular, if _errors_ is the number of probe configurations that resulted in soft or hard errors, the resulting time complexity is \(O(p_{changes}+\textit{errors})\).

## 5 Technology Agnostic Design

The proposed monitoring framework is designed to transparently integrate heterogeneous monitoring technologies, releasing a _technology agnostic control-plane_ that can be exploited to obtain MaaS capabilities using the preferred probe technologies and target platforms.
To illustrate this capability, this section exemplifies the integration of probes of different types and the capability to support multiple cloud platforms.

### _Incorporating New Probes_

To demonstrate the flexibility of the monitoring framework, we describe how two largely different probes can be supported: a health-check probe, which queries the health status endpoint of services exploiting the HTTP protocol, and a Prometheus exporter for Apache Kafka [20], which monitors Kafka broker resources (topics, partitions, etc.) and exposes the collected indicators as Prometheus metrics.

Adding a new probe can be done in two steps. First, the probe artifacts have to be packaged in such a way that they can be used by the Cloud Bridge. Second, the probe is added to the catalog by passing the probe metadata to the API Service; the metadata include information about where the probe can be deployed (the probe might be compatible with certain monitoring unit strategies but not with others), the supported data outputs (i.e., the database where the collected values can be stored), and the supported indicators. Listings 1 (a) and (b) show an excerpt of the metadata associated with the Apache Kafka Prometheus exporter and the health-check probe, respectively. Note that the configuration of the monitoring framework (e.g., the knowledge of both the available time-series database and the type of the target platform), jointly with the requests produced by the operators, allows the framework to select and deploy the right probes. In fact, artifact ids are mapped to the concrete software artifacts and scripts that are executed for probe deployment.

Adding new probes (i.e., new artifacts and corresponding metadata) to the catalog may require a different amount of time depending on the knowledge of the involved technologies. It is, however, a rather convenient operation for people who know the monitoring framework. For instance, we needed 1.5 hours to develop and set up a health-check probe that can be deployed on virtual machines, and 30 minutes to add a Kafka exporter that can be deployed on Kubernetes.

### _Supporting New Target Cloud Platforms_

Supporting multiple target cloud platforms is another capability of the framework. A platform can be supported only if it provides a management API that can be used by the Cloud Bridge to manage the monitoring units and discover targets. Developers who want to create a new Cloud Bridge plug-in have to implement the base interface in order to run execution plans and provide information about the targets to the framework. Listing 1 (c) shows an example of target information that can be retrieved by the API via the Cloud Bridge component. Plug-ins are also associated with metadata (e.g., the supported monitoring unit strategies) that can help the framework in taking some decisions.

Our prototype implementation already includes two plug-in implementations that can transparently actuate the same plans on radically different platforms: Kubernetes, a container-based platform, and Microsoft Azure Compute, a virtual-machine-based platform.

## 6 Empirical Evaluation

Since probe deployment and error handling are two representative capabilities of the proposed framework, we designed an empirical evaluation to assess them. We further study scalability, to investigate how the monitoring framework scales with an increasing number of requests and operators. These points are captured by the following three research questions.

1. **RQ1 - _Framework Efficiency_:** How efficiently are probes deployed?
This research question validates the framework capability of deploying probes starting from a declarative input and investigates how efficient the framework is in fulfilling an operator monitoring request. This is investigated for cloud systems based on both containers and virtual machines, giving evidence of the technology-agnostic capabilities of the framework. Results are studied in comparison to a solution working with pre-deployed probes that can be activated/deactivated (Figure 1, cases (d) and (e)). To this end, we selected JCatascopia [13], which is consistent with the MaaS case shown in Figure 1 (d), and which is usable without restrictions, being an open-source research prototype.

2. **RQ2 - _Error Handling_:** How efficiently are errors handled? This research question validates the framework capability of detecting and recovering from errors and investigates the time required by the framework to execute the error handling routine.

3. **RQ3 - _Scalability_:** How does the framework scale for an increasing number of requests? This research question validates the framework capability of optimizing the control-plane during the evolution of the monitoring system. It studies scalability with respect to the number of requests produced by operators.

All RQs were addressed with cloud systems based on both virtual machines and containers. In the following, we describe the prototype we used to run experiments, we report the design of the study, and the results for each research question.

### _Prototype_

We implemented the framework described in this paper in a publicly available prototype hosted at https://gitlab.com/learnERC/v arrays. The services are implemented as Java standalone applications. The repositories are implemented as MongoDB [21] collections. The JSON format is used both for communication and to persist information, except for the Cloud Bridge, which exposes a gRPC API that uses Protocol Buffers. The status update messages are delivered through Redis Streams [22]. The monitoring system can be deployed both on containers and virtual machines, depending on the hosting environment. We designed a probe catalog reusing probes from Metricbeat [23], one of the most popular cloud monitoring frameworks. We used Elasticsearch [24] as the time-series database to store the values extracted by the probes.

We implemented plug-ins for the Cloud Bridge to support both Kubernetes and Microsoft Azure as target cloud platforms. The Microsoft Azure plug-in supports either creating virtual machine monitoring units on-the-fly within the configured Azure resource group, or accessing the same virtual machine running the target to deploy the probes internally. In our experiments, we annotated the target service as an ACCESSIBLE_VM, and made it accessible to the Cloud Bridge via SSH in order to (un)deploy the probes directly within the virtual machine running the target service. With respect to container monitoring units, the Kubernetes plug-in deploys container monitoring units in the same platform of the target and configures the probes accordingly. Purposely, it does not implement the container sidecar pattern [19] because, due to how Pods work in Kubernetes, it would trigger the redeployment of the target service every time probes are (un)deployed, potentially causing service or monitoring interruptions unless a robust rolling update strategy is in place.
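For concreteness, the declarative request used in the RQ1 experiments below (collecting three PostgreSQL indicators) could be expressed along the following lines; the field names are illustrative and do not reflect the framework's exact schema:

```python
# Hypothetical monitoring claim submitted by an operator to the API Service.
monitoring_claim = {
    "operator": "op-A",
    "target": {"service": "postgresql", "platform": "kubernetes"},
    "indicators": [
        "cpu_consumption",     # CPU metricset of the Metricbeat probe
        "memory_consumption",  # memory metricset of the Metricbeat probe
        "database_metrics",    # database metricset of the Metricbeat probe
    ],
    "output": "elasticsearch",  # time-series database storing the values
}
```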
### _RQ1 - How efficiently are probes deployed?_

The monitoring framework can work in parallel on any number of monitored targets, if enough instances of the monitoring unit controller service are created. If there are more targets to modify than controller instances, some modifications will be performed sequentially. For instance, if 4 monitoring units must be modified and only 3 controller instances are available, one unit will be modified sequentially after another one. We thus study how efficiently a monitoring unit can be managed by a single controller instance; the performance over multiple simultaneously evolving units can be straightforwardly deduced from the number of available controllers.

We consider two cases for the experiments: PostgreSQL running in a container in an on-premise installation of Kubernetes, and PostgreSQL running in a virtual machine on Microsoft Azure. The two cases show how the same framework can be transparently used to address heterogeneous scenarios where the involved technologies are significantly different. We collect time figures considering the case of a request that requires the simultaneous deployment of three probes to collect the following three indicators from PostgreSQL: CPU consumption (using the CPU metricset of the Metricbeat probe), memory consumption (using the memory metricset of the Metricbeat probe), and database metrics (using the database metricset of the Metricbeat probe). To study the efficiency of each step, we measure the time taken by the first controller to process the claim, by the second controller to compute the execution plan, and by the Cloud Bridge to actuate the plan.

To have a baseline measurement, we also consider the case of a static framework, that is, a framework that does not support dynamic probe deployment, requiring operators to _deploy and configure probes in advance_, which can be later activated and deactivated. This framework is _far less flexible_ than the framework presented in this paper, but faster, since it does not deploy probes dynamically. To this end, we both use our framework with pre-deployed probes and the JCatascopia [13] state-of-the-art monitoring solution, which allows us to collect further measurements from a third-party system. We do not have measurements for JCatascopia applied to containers since it only supports virtual machines. Every experiment is repeated 10 times to collect stable measures.

Figure 4 shows the collected time figures with a semilogarithmic scale considering both virtual machines (Figure 4a) and containers (Figure 4b). The individual steps of the probe deployment process are captured by the Monitoring Claim Processing, Monitoring Unit Processing, and Probes Deployment boxes, while Total represents the total time elapsed between the submission of the request and the time the deployed probes start collecting the required indicators.

Not surprisingly, Probes Deployment is the most expensive step of the process for both virtual machines and containers. In the case of virtual machines it takes nearly 50 seconds, while the other steps can be completed an order of magnitude faster. In the case of containers the difference is remarkably smaller, due to their computational efficiency and their ability to cache artifacts. In fact, probe deployment can be performed in at most 1 second with containers, while the remaining steps take less than 0.25 seconds.
Overall, the entire probe deployment process for the three probes (indicated with Total in Figure 4) could be completed in slightly less than a minute using virtual machines and in less than 1.5s using containers, which is a nearly two orders of magnitude difference. The box Probe Activation Only shows the time required to activate pre-deployed probes using our framework. In the case of virtual machines, exploiting dynamic probe deployment might be quite expensive compared to manually pre-deploying probes, since it increases the runtime cost by an order of magnitude. However, pre-deploying many probes can be expensive, can generate large and difficult-to-manage virtual machines, and is efficient only when the indicators that might be collected can be predicted. The comparison to JCatascopia shows that the presented framework is efficient also when just used to process requests and activate pre-deployed probes. In fact, JCatascopia required several seconds to activate the probes, while our framework could activate probes in less than a second. The difference between dynamic probe deployment and pre-deployed probes for containers is indeed less significant, both in relative and especially in absolute terms.

**Answer to RQ1** In the case of virtual machines, the cost of flexibly deploying probes is significantly higher than working with pre-deployed probes. Thus, the trade-off between flexibility and efficiency should be carefully considered by operators to decide the monitoring solution that should be adopted. Instead, in the case of containers, the cost of flexibly deploying probes is significantly amortized by the efficiency of the cloud technology. In fact, our framework can complete the process in 0.5-1.5 seconds, while activating the pre-deployed probes requires slightly less than 0.5s, suggesting that dynamic probe deployment might often be preferable.

Fig. 4: Probes Deployment

### _RQ2 - How efficiently are errors handled?_

To study the capability of the framework to react to errors, we designed a variant of the experiment performed for RQ1 where we deploy a malfunctioning probe. We obtained such a probe by implementing a wrong configuration of the Metricbeat probe for PostgreSQL that makes the probe deployment fail.

In the case of virtual machines, we study the creation of a new monitoring unit with two probes: one working probe and a malfunctioning probe. The malfunctioning probe artifact contains an Ansible role with a wrong command that leads to a hard deployment error when the Cloud Bridge executes it. Since we use the multi-probe monitoring unit strategy with virtual machines, the framework must autonomously detect the problem with the two-probe monitoring unit and automatically create a monitoring unit with the working probe only.

In the case of containers, we study the creation of a new monitoring unit with the malfunctioning probe only. The malfunctioning probe artifact contains a bugged Kubernetes manifest file that tries to deploy the probe within a non-existent Kubernetes namespace. This leads to a hard deployment error when the Cloud Bridge executes it. Since we use the single-probe monitoring unit strategy with containers, error handling should simply drop the malfunctioning monitoring unit (in this case we do not consider the deployment of two probes because the deployment strategy would simply create two different monitoring units handled independently).
To capture how error detection works, we measure the time necessary for the framework to _attempt the deployment and detect_ that a monitoring unit is not working (namely Error Detection), the time necessary to _process_ the error and take the decision to clean the monitoring unit (namely Error Processing), and finally the time necessary to actuate the cleaning plan (namely Error Cleaning). Error detection is performed by the Cloud Bridge while actuating changes (see the call in Algorithm 2, line 6), error processing consists of the operations shown in Algorithm 3, and error cleaning is again performed by the Cloud Bridge when cleaning a unit (see the call in Algorithm 3, line 6). We repeated measurements 10 times to collect stable time figures.

Figure 5 shows the collected time figures with a semilogarithmic scale considering both virtual machines (Figure 5a) and containers (Figure 5b). In both environments, error detection and error cleaning are more expensive than error processing. In fact, error detection requires performing the deployment, at least partially, and similarly error cleaning requires disposing of monitoring units and creating new stable units, when possible. Similarly to probe deployment, error handling is significantly more efficient with containers than with virtual machines. For instance, error detection requires around 21 seconds with virtual machines, while it can be completed in less than 0.25 seconds with containers. Similarly, error cleaning requires around 13 seconds with virtual machines, while it can be completed in about 0.15 seconds with containers; note, however, that the cleaning phase with containers does not require recreating a monitoring unit, which is instead simply disposed of. The entire error handling process can be completed in around 35 seconds with virtual machines and in less than a second with containers.

**Answer to RQ2** Results show how the proposed MaaS solution, which flexibly allocates and destroys resources, although usable with both virtual machines and containers, is naturally more suitable for containers, where errors can be recovered in seconds.

### _RQ3 - How does the framework scale for an increasing number of requests?_

As discussed, the framework can update multiple monitoring units in parallel as long as a sufficient number of controller instances are created. We thus focus the scalability study on measuring how the cost of collecting additional indicators grows with an increasing number of requests when single instances of the controllers are available. In particular, we consider two cases: processing requests that require _deploying new probes_ and processing requests that require _reconfiguring_ the monitoring system without deploying new probes. The former case corresponds to operators asking for new indicators to be collected. The latter case corresponds to operators asking for indicators already collected by other operators, which the framework handles in an optimized way by sharing the existing probes among operators without touching the monitoring units, only changing the set of configurations associated with a unit.

We measure how the total deployment time grows while increasing either the number of _new indicators_ or the number of _existing indicators for new operators_ from 1 to 30. We submit all requests at once and we measure the total time necessary to fulfill the request. We repeated every experiment 5 times on both virtual machines and containers, for a total of 160 samples collected about scalability.
Figure 6 shows the results. Again, the remarkable difference between virtual machines and containers is confirmed.

Fig. 5: Error Handling

The scalability experiment gives additional evidence of how the linear growth of the total time for virtual machines is far steeper than for containers. The difference is dramatic when considering the deployment of 30 probes, which requires around 10 minutes, in contrast with containers, which can complete this operation in seconds. The results show that sharing probes between multiple operators can significantly improve the efficiency of the monitoring system. This is particularly important for virtual machines, where the probe deployment cost can be cut thanks to probe sharing.

**Answer to RQ3** Overall, results show that dynamic probe deployment can be feasible with both virtual machines and containers. However, the former environment can efficiently deal with probes only if changes are sporadic and the number of parallel requests received is limited. On the contrary, the container technology is definitely ready to support dynamic probe deployment, even in rapidly evolving contexts, based on the proposed framework.

### _Threats to Validity_

The threats to the validity of the results mainly concern the relationship between the technical setup of the experiment and the collected time figures. In fact, efficiency is affected by both the available computational resources and the choice of the probes. However, while changing the available computational resources and the deployed probes is likely to affect absolute figures, the trends and gaps between the different frameworks and cloud platforms are clear despite these factors. In fact, plots for virtual machines and containers are similar, although values are on different scales. Further, the scalability trends clearly identify a single case (collecting increasingly more indicators on virtual machines) that scales remarkably worse than the others.

In our evaluation, we also selected a specific target service to be monitored (i.e., PostgreSQL) and used a specific malfunctioning probe (Metricbeat for PostgreSQL). Neither of these choices is likely to affect our results. In fact, the cost of handling a monitoring unit does not depend on the monitoring target, and similarly the error handling policy is the same for every type of error and malfunctioning probe.

Finally, the collected time figures might be affected by noise. To mitigate this issue we repeated experiments between 5 and 10 times. The reported boxplots show a low variance for the collected values, suggesting that measures are stable and meaningful, and can be used to derive valid conclusions.

## 7 Related Work

Due to the size, complexity, and dynamicity of cloud systems, Monitoring-as-a-Service (MaaS) systems are increasingly studied to better cope with the requirements of the cloud [25, 26]. In this paper, we focused on the inability of existing monitoring solutions to cope with the dynamic deployment of probe artifacts, which is required to effectively address scenarios that imply quickly or frequently changing the collected indicators.

Some MaaS systems are designed to operate _tightly coupled with the target cloud technology_. Although the knowledge of the target platform may simplify the design of the MaaS solution, the resulting architecture lacks generality and is inherently limited to the collection of platform-level indicators, failing to cope with application-level indicators.
For instance, MonPaaS [27] is an open-source monitoring solution tightly coupled with OpenStack that provides a MaaS model to cloud users. It collects indicators by integrating Nagios with OpenStack and obtains VM status information by intercepting messages in the OpenStack message queue. Moreover, several commercial cloud monitoring solutions are developed to monitor resources running on specific cloud platforms, such as Amazon CloudWatch [28] and Google Cloud Monitoring [29]. As a result, these solutions have limited interoperability and applicability. On the other hand, general purpose monitoring tools like Prometheus [10] and Zabbix [30] struggle with adapting to changing needs. In contrast, the framework proposed in this paper provides automation while relying on a technology-agnostic architecture that can operate probes of different types.

On the other hand, MaaS systems are normally limited to _interactions with pre-deployed probes_, which can be activated and deactivated by operators, but cannot be deployed/undeployed. For instance, CLAMS is a MaaS framework that can monitor, benchmark, and profile applications hosted on multi-cloud environments [31, 32]. MEASU [33] is a Monitoring-as-a-Service framework that monitors the cloud using stream processing; it relies on a publish-subscribe architecture to push data from resources through stream processors that convert and deliver data to stream subscribers. AdaptiveMon [34] is a peer-to-peer monitoring solution that exploits reconfigurable probes to address the complexity of the fog environment. Unlike our framework, these approaches rely on pre-deployed probes.

In some cases, MaaS systems use probes that can address _autoscalable services_. For instance, Amazon CloudWatch [28] uses the MaaS technology to deliver status monitoring capabilities in presence of autoscalable services. JCatascopia [13] is a monitoring solution that targets elastic tasks using automatic discovery capabilities. Compared to these solutions, in addition to managing probes that have native support for service autoscaling and discovery, our framework delivers automatic and declarative probe deployment capabilities, allowing users to continuously and efficiently adapt the monitoring infrastructure.

Fig. 6: Scalability results.

Early contributions related to automatic probe deployment for cloud systems are the work by Ciuffoletti [35], who proposed a MaaS model for the deployment of monitoring components as required by users, and Anisetti et al. [36], who applied automatic probe deployment to monitor and certify security properties of services running on virtual machines. These works describe high-level designs and prototype implementations targeting specific cases. The framework presented in this paper provides instead a general and applicable approach for dynamic probe deployment.

## 8 Conclusions

Monitoring systems must be able to cope with dynamically changing and unstable sets of monitored indicators, to effectively address scenarios that characterize modern cloud systems. Current solutions, however, adapt badly to these scenarios, providing little flexibility and requiring significant manual effort to deploy, undeploy, and re-configure monitoring probes. In this paper, we present a framework that can be used to dynamically work with monitoring units and probes.
The operator interacts with the monitoring system in an as-a-service fashion, specifying the indicators that must be collected and letting the framework deal with probe deployment and configuration. The framework is also able to recover from deployment problems and to integrate probes from multiple monitoring technologies. Results show that the framework can be feasibly used with cloud systems based on both virtual machines and containers, although it is significantly more efficient with containers.

We identify three main limitations of the current framework implementation that we want to address as part of future work. First, fine-grained control of the probe configurations (e.g., changing the sampling rate of the individual probes) is not supported. This limitation can be potentially addressed by enriching monitoring claims with information about probe configurations. Second, the support for elasticity currently depends on the probe intelligence (e.g., it requires the probes to embed a discovery mechanism like the one in the Metricbeat Kubernetes module). It would be interesting to move this support to the level of the monitoring framework, so that any probe can be used to monitor elastic services. Third, error handling is limited to the deployment phase, and it is unable to detect and repair runtime errors that occur during the regular execution of the monitoring system. Indeed, error handling requires further research to cover the full range of situations. Other future work involves extending the framework with as-a-service anomaly detection and healing capabilities to increase the dependability of the monitored system, and taking the vicinity of the monitored resources into account to further improve the deployment of the monitoring system.
2301.13845
Interpreting Robustness Proofs of Deep Neural Networks
In recent years numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs). Though the proposed techniques are effective in providing mathematical guarantees about the DNNs behavior, it is not clear whether the proofs generated by these methods are human-interpretable. In this paper, we bridge this gap by developing new concepts, algorithms, and representations to generate human understandable interpretations of the proofs. Leveraging the proposed method, we show that the robustness proofs of standard DNNs rely on spurious input features, while the proofs of DNNs trained to be provably robust filter out even the semantically meaningful features. The proofs for the DNNs combining adversarial and provably robust training are the most effective at selectively filtering out spurious features as well as relying on human-understandable input features.
Debangshu Banerjee, Avaljot Singh, Gagandeep Singh
2023-01-31T18:41:28Z
http://arxiv.org/abs/2301.13845v1
# Interpreting Robustness Proofs of Deep Neural Networks

###### Abstract

In recent years numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs). Though the proposed techniques are effective in providing mathematical guarantees about the DNNs behavior, it is not clear whether the proofs generated by these methods are human-interpretable. In this paper, we bridge this gap by developing new concepts, algorithms, and representations to generate human understandable interpretations of the proofs. Leveraging the proposed method, we show that the robustness proofs of standard DNNs rely on spurious input features, while the proofs of DNNs trained to be provably robust filter out even the semantically meaningful features. The proofs for the DNNs combining adversarial and provably robust training are the most effective at selectively filtering out spurious features as well as relying on human-understandable input features.

Machine Learning, Robustness, Deep Neural Networks

## 1 Introduction

The black box construction and lack of robustness of deep neural networks (DNNs) are major obstacles to their real-world deployment in safety-critical applications like autonomous driving (Bojarski et al., 2016) or medical diagnosis (Amato et al., 2013). To mitigate the lack of trust caused by black-box behaviors, there has been a large amount of work on interpreting individual DNN predictions to gain insights into their internal workings. Orthogonally, the field of DNN verification has emerged to formally prove or disprove the robustness of neural networks in a particular region capturing an infinite set of inputs. Verification can be leveraged during training for constructing more robust models. We argue that while these methods do improve trust to a certain degree, the insights and guarantees derived from their independent applications are not enough to build sufficient confidence for enabling the reliable real-world deployment of DNNs. Existing DNN interpretation methods (Sundararajan et al., 2017) explain the model behavior on individual inputs, but they often do not provide human-understandable insights into the workings of the model on an infinite set of inputs handled by verifiers. Similarly, the DNN verifiers (Singh et al., 2019; Zhang et al., 2018) can generate formal proofs capturing complex invariants sufficient to prove network robustness, but it is not clear whether these proofs are based on any meaningful input features learned by the DNN that are necessary for correct classification. This is in contrast to standard program verification tasks where proofs capture the semantics of the program and property. In this work, to improve trust, we propose for the first time the problem of interpreting the invariants captured by DNN robustness proofs.

**Key Challenges.** The proofs generated by state-of-the-art DNN verifiers encode high-dimensional complex convex shapes defined over thousands of neurons in the DNN. It is not exactly clear how to map these shapes to human-understandable interpretations. Further, certain parts of the proof may be more important for it to hold than the rest. Thus we need to define a notion of importance for different parts of the proof and develop methods to identify them.

**Our Contributions.** We make the following contributions to overcome these challenges and develop a new method for interpreting DNN robustness proofs.
* We introduce a novel concept of proof features that can be analyzed independently by generating the corresponding interpretations. A priority function is then associated with the proof features that signifies their importance in the complete proof.
* We design a general algorithm called _SuPFEx_ (**S**ufficient **P**roof **F**eature **E**xtraction) that extracts a set of proof features that retain only the more important parts of the proof while still proving the property.
* We compare interpretations of the proof features for standard DNNs and state-of-the-art robustly trained DNNs for the MNIST and CIFAR10 datasets. We observe that the proof features corresponding to the standard networks rely on spurious input features, while the proofs of adversarially trained DNNs (Madry et al., 2018) filter out some of the spurious features. In contrast, the networks trained with certifiable training (Zhang et al., 2020) produce proofs that do not rely on any spurious features, but they also miss out on some meaningful features. Proofs for training methods that combine both empirical robustness and certified robustness (Balunovic and Vechev, 2020) provide a common ground. They not only rely on human-interpretable features but also selectively filter out the spurious ones. We also empirically show that these observations are not contingent on any specific DNN verifier.

## 2 Related Work

We discuss prior works related to ours.

**DNN interpretability.** There has been an extensive effort to develop interpretability tools for investigating the internal workings of DNNs. These include feature attribution techniques like saliency maps (Sundararajan et al., 2017; Smilkov et al., 2017), using surrogate models to interpret local decision boundaries (Ribeiro et al., 2016), finding influential (Koh and Liang, 2017), prototypical (Kim et al., 2016), or counterfactual inputs (Goyal et al., 2019), training sparse decision layers (Wong et al., 2021), and utilizing robustness analysis (Hsieh et al., 2021). Most of these interpretability tools focus on generating local explanations that investigate how DNNs work on individual inputs. Another line of work, rather than explaining individual inputs, tries to identify specific concepts associated with a particular neuron (Simonyan et al., 2014; Bau et al., 2020). However, to the best of our knowledge, there is no existing work that allows us to interpret DNN robustness proofs.

**DNN verification.** Unlike DNN interpretability methods, prior works in DNN verification focus on formally proving whether the given DNN satisfies desirable properties like robustness (Singh et al., 2019; Wang et al., 2021), fairness (Mazzucato and Urban, 2021), etc. The DNN verifiers are broadly categorized into three main categories - (i) sound but incomplete verifiers which may not always prove the property even if it holds (Gehr et al., 2018; Singh et al., 2018, 2019; Ma et al., 2018; Xu et al., 2020; Salman et al., 2019), (ii) complete verifiers that can always prove the property if it holds (Wang et al., 2018; Gehr et al., 2018; Bunel et al., 2020; Bak et al., 2020; Ehlers, 2017; Ferrari et al., 2022; Fromherz et al., 2021; Wang et al., 2021; Palma et al., 2021; Anderson et al., 2020; Zhang et al., 2022), and (iii) verifiers with probabilistic guarantees (Cohen et al., 2019).
**Robustness and interpretability.** Existing works (Madry et al., 2018; Balunovic and Vechev, 2020; Zhang et al., 2020) in developing robustness training methods for neural networks provide a framework to produce networks that are inherently immune to adversarial perturbations in the input. Recent works (Tsipras et al., 2019; Zhang et al., 2019) also show that there may be a robustness-accuracy tradeoff that prevents highly robust models from achieving high accuracy. Further, in (Tsipras et al., 2019) the authors show that networks trained with adversarial training methods learn fundamentally different input feature representations than standard networks, where the adversarially trained networks capture more human-aligned data characteristics.

## 3 Preliminaries

In this section, we provide the necessary background on DNN verification and existing works on traditional DNN interpretability with sparse decision layers. While our method is applicable to general architectures, for simplicity, we focus on a \(l\)-layer feedforward DNN \(N:\mathbb{R}^{d_{0}}\rightarrow\mathbb{R}^{d_{l}}\) for the rest of this paper. Each layer \(i\) except the final one applies the transformation \(X_{i}=\sigma_{i}(W_{i}\cdot X_{i-1}+B_{i})\) where \(W_{i}\in\mathbb{R}^{d_{i}\times d_{i-1}}\) and \(B_{i}\in\mathbb{R}^{d_{i}}\) are respectively the weights and biases of the affine transformation and \(\sigma_{i}\) is a non-linear activation like ReLU, Sigmoid, etc. corresponding to layer \(i\). The final layer only applies the affine transformation, and the network output is a vector \(Y=W_{l}\cdot X_{l-1}+B_{l}\).

**DNN verification.** At a high level, DNN verification involves proving that the network outputs \(Y=N(X)\) corresponding to all inputs \(X\) from an input region specified by \(\phi\) satisfy a logical specification \(\psi\). A common property is local robustness, where the output specification \(\psi\) is defined as a linear inequality over the elements of the output vector of the neural network. The output specification, in this case, is written as \(\psi(Y)=(C^{T}Y\geq 0)\) where \(C\in\mathbb{R}^{d_{l}}\) defines the linear inequality for encoding the robustness property. For the rest of the paper, we refer to the input region \(\phi\) and output specification \(\psi\) together as _property_ \((\phi,\psi)\).

Next, we briefly discuss how DNN robustness verifiers work. A DNN verifier \(\mathcal{V}\) symbolically computes a possibly over-approximated output region \(\mathcal{A}\subseteq\mathbb{R}^{d_{l}}\) containing all possible outputs of \(N\) corresponding to \(\phi\). Let \(\Lambda(\mathcal{A})=\min_{Y\in\mathcal{A}}C^{T}Y\) denote the minimum value of \(C^{T}Y\) where \(Y\in\mathcal{A}\). Then \(N\) satisfies the property \((\phi,\psi)\) if \(\Lambda(\mathcal{A})\geq 0\). Most existing DNN verifiers (Singh et al., 2018, 2019; Zhang et al., 2018) are exact for affine transformations. However, for non-linear activation functions, these verifiers compute convex regions that over-approximate the output of the activation function. Note that, due to the over-approximations, DNN verifiers are sound but not complete - the verifier may not always prove a property even if it holds. For piecewise-linear activation functions like ReLU, complete verifiers exist that handle the activation exactly and, in theory, always prove a property if it holds. Nevertheless, complete verification in the worst case takes exponential time, making it practically infeasible.
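As a toy illustration of this scheme, the following is a minimal interval-arithmetic analyzer for a feedforward ReLU network; it is a deliberate simplification written for this paper's setting, not the algorithm of any specific verifier:

```python
import numpy as np

def interval_verify(weights, biases, C, x0, eps):
    """Sound but incomplete check that C^T N(X) >= 0 holds for all X with
    ||X - x0||_inf <= eps, using naive interval bound propagation."""
    lo, hi = x0 - eps, x0 + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Exact interval image of the affine map W @ x + b over a box.
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:  # ReLU on every layer but the last
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    # Lower bound Lambda(A) of C^T Y over the output box A = [lo, hi].
    Cp, Cn = np.maximum(C, 0), np.minimum(C, 0)
    lam = Cp @ lo + Cn @ hi
    return lam >= 0, lam
```

The per-neuron bounds computed this way at the penultimate layer are precisely the kind of interval projections used as proof features in Section 4.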
In the rest of the paper, we focus on deterministic, sound, and incomplete verifiers, which are more scalable than complete verifiers.

**DNN interpretation with sparse decision layer.** DNNs considered in this paper have complex multi-layer structures, making them harder to interpret. Instead of interpreting what each layer of the network is doing, recent works (Wong et al., 2021; Liao and Cheung, 2022) treat DNNs as the composition of a _deep feature extractor_ and an _affine decision layer_. The output of each neuron of the penultimate layer represents a single deep feature, and the final affine layer linearly combines these deep features to compute the network output. This perspective enables us to identify the set of features used by the network to compute its output and to investigate their semantic meaning using the existing feature visualization techniques (Ribeiro et al., 2016; Simonyan et al., 2014). However, visualizing each feature is practically infeasible for large DNNs where the penultimate layer can contain hundreds of neurons. To address this, the work of Wong et al. (2021) tries to identify a smaller set of features that are sufficient to maintain the performance of the network. This smaller but sufficient feature set retains only the most important features corresponding to a given input. It is shown empirically (Wong et al., 2021) that a subset of these features of size less than 10 is sufficient to maintain the accuracy of state-of-the-art models.

## 4 Interpreting DNN Proofs

Next, we describe our approach for interpreting DNN robustness proofs.

**Proof features.** Similar to the traditional DNN interpretation described above, for proof interpretation we propose to segregate the final decision layer from the network and look at the features extracted at the penultimate layer. However, DNN verifiers work on an input region (\(\phi\)) consisting of infinitely many inputs instead of a single input as handled by existing work. In this case, for a given input region \(\phi\), we look at the symbolic shape (for example - intervals, zonotopes, polytopes, etc.) computed by the verifier at the penultimate layer and then compute its projection on each dimension of the penultimate layer. These projections yield an interval \([l_{n},u_{n}]\) which contains all possible output values of the corresponding neuron \(n\) with respect to \(\phi\).

**Definition 1** (Proof Features).: _Given a network \(N\), input region \(\phi\) and neural network verifier \(\mathcal{V}\), for each neuron \(n_{i}\) at the penultimate layer of \(N\), the proof feature \(\mathcal{F}_{n_{i}}\) extracted at that neuron \(n_{i}\) is an interval \([l_{n_{i}},u_{n_{i}}]\) such that \(\forall X\in\phi\), the output of \(n_{i}\) always lies in the range \([l_{n_{i}},u_{n_{i}}]\)._

Note that the computation of the proof features is verifier-dependent, i.e., for the same network and input region, different verifiers may compute different values \(l_{n}\) and \(u_{n}\) for a particular neuron \(n\). For any input region \(\phi\), the first \((l-1)\) layers of \(N\) along with the verifier \(\mathcal{V}\) act as the **proof feature extractor**. For the rest of this paper, we use \(\mathcal{F}\) to denote the set of all proof features at the penultimate layer and \(\mathcal{F}_{S}\) to denote the proof features corresponding to \(S\subseteq[d_{l-1}]\).

\[\mathcal{F}_{S}=\{\mathcal{F}_{n_{i}}\mid i\in S\}\]

Suppose \(N\) is formally verified by the verifier \(\mathcal{V}\) to satisfy the property \((\phi,\psi)\).
Then, in order to gain insights about the proof generated by \(\mathcal{V}\), we can directly investigate (as described in Section 4.3) the extracted proof features \(\mathcal{F}\). However, the number of proof features for contemporary networks can be very large (in the hundreds). Many of these features may be spurious and not important for the proof. Similar to how network interpretations are generated when classifying individual inputs, we want to identify a smaller set of proof features that are more important for the proof of the property \((\phi,\psi)\). The key challenge here is defining the most important set of proof features w.r.t the property \((\phi,\psi)\).

### Sufficient Proof Features

We argue that a _minimum_ set of proof features \(\mathcal{F}_{S_{0}}\subseteq\mathcal{F}\) that can prove the property \((\phi,\psi)\) with verifier \(\mathcal{V}\) contains an important set of proof features w.r.t \((\phi,\psi)\). The minimality of \(\mathcal{F}_{S_{0}}\) enforces that it can only retain the proof features that are essential to prove the property. Otherwise, it would be possible to construct a smaller set of proof features that preserves the property, violating the minimality of \(\mathcal{F}_{S_{0}}\). Leveraging this hypothesis, we can model extracting a set of important proof features as computing a minimum proof feature set capable of preserving the property \((\phi,\psi)\) with \(\mathcal{V}\). To identify a minimum proof feature set, we introduce the novel concepts of proof feature pruning and sufficient proof features below:

**Definition 2** (Proof feature Pruning).: _Pruning any proof feature \(\mathcal{F}_{n_{i}}\in\mathcal{F}\) corresponding to neuron \(n_{i}\) in the penultimate layer involves setting the weights of all its outgoing connections to 0 so that given any input \(X\in\phi\) the final output of \(N\) no longer depends on the output of \(n_{i}\)._

Once a proof feature \(\mathcal{F}_{n_{i}}\) is pruned, the verifier \(\mathcal{V}\) no longer uses \(\mathcal{F}_{n_{i}}\) to prove the property \((\phi,\psi)\).

**Definition 3** (Sufficient proof features).: _For the proof of property \((\phi,\psi)\) on DNN \(N\) with verifier \(\mathcal{V}\), a nonempty set \(\mathcal{F}_{S}\subseteq\mathcal{F}\) of proof features is sufficient if the property still holds with verifier \(\mathcal{V}\) even if all the proof features **not in \(\mathcal{F}_{S}\)** are pruned._

**Definition 4** (Minimum proof features).: _A minimum proof feature set \(\mathcal{F}_{S_{0}}\subseteq\mathcal{F}\) for a network \(N\) verified with \(\mathcal{V}\) on \((\phi,\psi)\) is a sufficient proof feature set containing the minimum number of proof features._

Extracting a minimum set of proof features \(\mathcal{F}_{S_{0}}\) from \(\mathcal{F}\) is equivalent to pruning the maximum number of proof features from \(\mathcal{F}\) without violating the property \((\phi,\psi)\). Let \(W_{l}[:,i]\in\mathbb{R}^{d_{l}}\) denote the \(i\)-th column of the weight matrix \(W_{l}\) of the final layer \(N_{l}\). Pruning any proof feature \(\mathcal{F}_{n_{i}}\) results in setting all weights in \(W_{l}[:,i]\) to 0. Therefore, to compute \(\mathcal{F}_{S_{0}}\), it is sufficient to devise an algorithm that can prune the maximum number of columns from \(W_{l}\) while still preserving the property \((\phi,\psi)\).
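Mechanically, this pruning amounts to zeroing the corresponding columns of the last-layer weight matrix, as in the following small numpy sketch (illustrative only; the formal definition is given next):

```python
import numpy as np

def prune_final_layer(W_l, S):
    """Return W_l(S): keep only the columns indexed by S and zero the
    rest, so the output no longer depends on the pruned neurons."""
    keep = sorted(S)
    W_pruned = np.zeros_like(W_l)
    W_pruned[:, keep] = W_l[:, keep]
    return W_pruned
```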
For any proof feature set \(\mathcal{F}_{S}\subseteq\mathcal{F}\), let \(W_{l}(S)\in\mathbb{R}^{d_{l}\times d_{l-1}}\) be the weight matrix of the pruned final layer that only retains proof features corresponding to \(\mathcal{F}_{S}\). Then the columns of \(W_{l}(S)\) are defined as follows, where \(\underline{0}\in\mathbb{R}^{d_{l}}\) denotes a constant all-zero vector:

\[W_{l}(S)[:,i]=\begin{cases}W_{l}[:,i]&i\in S\\ \underline{0}&\text{otherwise}\end{cases} \tag{1}\]

The proof feature set \(\mathcal{F}_{S}\) is sufficient iff the property \((\phi,\psi)\) can be verified by \(\mathcal{V}\) on \(N\) with the pruned weight matrix \(W_{l}(S)\). As described in Section 3, for property verification the verifier \(\mathcal{V}\) computes an over-approximated output region \(\mathcal{A}\) of \(N\) over the input region \(\phi\). Given that we never change the input region \(\phi\) and the proof feature extractor composed of the first \(l-1\) layers of \(N\) and the verifier \(\mathcal{V}\), the output region \(\mathcal{A}\) only depends on the pruning done at the final layer. Now let \(\mathcal{A}(W_{l},S)\) denote the over-approximated output region corresponding to \(W_{l}(S)\). The neural network \(N\) can be verified by \(\mathcal{V}\) on the property \((\phi,\psi)\) with \(W_{l}(S)\) iff the lower bound \(\Lambda(\mathcal{A}(W_{l},S))\geq 0\). Therefore, finding \(S_{0}\) corresponding to a minimum proof feature set \(\mathcal{F}_{S_{0}}\) can be formulated as below, where for any \(S\subseteq[d_{l-1}]\), \(|S|\) denotes the number of elements in \(S\).

\[\operatorname*{arg\,min}_{S\neq\emptyset,\ S\subseteq[d_{l-1}]}|S|\quad\text{ s.t. }\ \Lambda(\mathcal{A}(W_{l},S))\geq 0 \tag{2}\]

### Approximate Minimum Proof Feature Extraction

The search space for finding \(\mathcal{F}_{S_{0}}\) is prohibitively large and contains \(2^{d_{l-1}}\) possible candidates. So, computing a minimum solution with an exhaustive search is infeasible. Even checking the sufficiency of any arbitrary proof feature set \(\mathcal{F}_{S}\) (Definition 3) is not trivial and requires expensive verifier invocations. We note that even making \(O(d_{l-1})\) verifier calls is too expensive for the network sizes considered in our evaluation. Given the large DNN size, exponential search space, and high verifier cost, efficiently computing a _minimum_ sufficient proof feature set is computationally intractable. We design a practically efficient approximation algorithm based on a greedy heuristic that can generate a smaller (may not always be minimum) sufficient feature set with only \(O(\log(d_{l-1}))\) verifier calls. At a high level, for each proof feature \(\mathcal{F}_{n_{i}}\) contained in a sufficient feature set, the heuristic tries to estimate whether pruning \(\mathcal{F}_{n_{i}}\) violates the property \((\phi,\psi)\) or not. Subsequently, we prioritize pruning of those proof features \(\mathcal{F}_{n_{i}}\) that, if pruned, will likely preserve the proof of the property \((\phi,\psi)\) with the verifier \(\mathcal{V}\). For any proof feature \(\mathcal{F}_{n_{i}}\in\mathcal{F}_{S}\) where \(\mathcal{F}_{S}\) is sufficient and proves the property \((\phi,\psi)\), we estimate the change \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) that occurs to \(\Lambda(\mathcal{A}(W_{l},S))\) if \(\mathcal{F}_{n_{i}}\) is pruned from \(\mathcal{F}_{S}\).
Let the over-approximated output region computed by verifier \(\mathcal{V}\) corresponding to \(\mathcal{F}_{S}\setminus\{\mathcal{F}_{n_{i}}\}\) be \(\mathcal{A}(W_{l},S\setminus\{i\})\); then \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) is defined as follows:

\[\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})=|\Lambda(\mathcal{A}(W_{l},S))- \Lambda(\mathcal{A}(W_{l},S\setminus\{i\}))|\]

Intuitively, proof features \(\mathcal{F}_{n_{i}}\) with higher values of \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) for some sufficient feature set \(\mathcal{F}_{S}\subseteq\mathcal{F}\) are responsible for large changes to \(\Lambda(\mathcal{A}(W_{l}(S)))\) and are likely to break the proof if pruned. Note that \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) depends on the particular sufficient proof set \(\mathcal{F}_{S}\) and does not estimate the global importance of \(\mathcal{F}_{n_{i}}\) independent of the choice of \(\mathcal{F}_{S}\). To mitigate this issue, while defining the priority \(P(\mathcal{F}_{n_{i}})\) of a proof feature \(\mathcal{F}_{n_{i}}\) we take the maximum of \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) across all sufficient feature sets \(\mathcal{F}_{S}\) containing \(\mathcal{F}_{n_{i}}\). Let \(\mathbb{S}(\mathcal{F}_{n_{i}})\) denote the set of all sufficient \(\mathcal{F}_{S}\) containing \(\mathcal{F}_{n_{i}}\). Then \(P(\mathcal{F}_{n_{i}})\) can be formally defined as follows:

\[P(\mathcal{F}_{n_{i}})=\max_{\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}} )}\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S}) \tag{3}\]

Given that the set \(\mathbb{S}(\mathcal{F}_{n_{i}})\) can be exponentially large, finding the maximum value of \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) over \(\mathbb{S}(\mathcal{F}_{n_{i}})\) is practically infeasible. Instead, we compute a reasonably tight upper bound \(P_{ub}(\mathcal{F}_{n_{i}})\) on \(P(\mathcal{F}_{n_{i}})\) by estimating a global upper bound on \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) that holds for all \(\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}})\). The proposed upper bound is independent of the choice of \(\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}})\) and therefore removes the need to iterate over \(\mathbb{S}(\mathcal{F}_{n_{i}})\), enabling efficient computation. For the network \(N\) and input region \(\phi\), let \(\mathcal{A}_{l-1}\) denote the over-approximate symbolic region computed by \(\mathcal{V}\) at the penultimate layer. Then, for all \(\mathcal{F}_{S}\in\mathbb{S}(\mathcal{F}_{n_{i}})\), the global upper bound on \(\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S})\) can be computed as follows, where for any vector \(X\in\mathbb{R}^{d_{l-1}}\), \(x_{i}\) denotes its \(i\)-th coordinate:

\[\Delta(\mathcal{F}_{n_{i}},\mathcal{F}_{S}) \leq\max_{X\in\mathcal{A}_{l-1}}|C^{T}W_{l}(S)X-C^{T}W_{l}(S\setminus\{i\})X| =\max_{X\in\mathcal{A}_{l-1}}|(C^{T}W_{l}[:,i])\cdot x_{i}|\]
\[P(\mathcal{F}_{n_{i}}) \leq\max_{X\in\mathcal{A}_{l-1}}|(C^{T}W_{l}[:,i])\cdot x_{i}|\]

Now, any proof feature \(\mathcal{F}_{n_{i}}=[l_{n_{i}},u_{n_{i}}]\) computed by \(\mathcal{V}\) contains all possible values of \(x_{i}\) where \(X\in\mathcal{A}_{l-1}\). Leveraging this observation, we can further simplify the upper bound \(P_{ub}(\mathcal{F}_{n_{i}})\) of \(P(\mathcal{F}_{n_{i}})\) as shown below.
\[P(\mathcal{F}_{n_{i}}) \leq\max_{x_{i}\in[l_{n_{i}},u_{n_{i}}]}|(C^{T}W_{l}[:,i])\cdot x_{i}|\]
\[P_{ub}(\mathcal{F}_{n_{i}}) =|C^{T}W_{l}[:,i]|\cdot\max(|l_{n_{i}}|,|u_{n_{i}}|) \tag{4}\]

This simplification ensures that \(P_{ub}(\mathcal{F}_{n_{i}})\) for all \(\mathcal{F}_{n_{i}}\) can be computed with \(O(d_{l-1})\) elementary vector operations and a single verifier call that computes the intervals \([l_{n_{i}},u_{n_{i}}]\). Next, we describe how we compute an approximate feature set using the feature priorities \(P_{ub}(\mathcal{F}_{n_{i}})\). For any feature \(\mathcal{F}_{n_{i}}\), \(P_{ub}(\mathcal{F}_{n_{i}})\) estimates the importance of \(\mathcal{F}_{n_{i}}\) in preserving the proof. So, a trivial step is to just prune all the proof features from \(\mathcal{F}\) whose \(P_{ub}\) is 0. These features do not have any contribution to the proof of the property \((\phi,\psi)\) by the verifier \(\mathcal{V}\). This step alone forms a trivial algorithm. However, this is not enough; we can further prune some more proof features, leading to a yet smaller set. For this, we propose an iterative algorithm **SuPFEx**, shown in Algorithm 1 (\(\mathbb{A}\)), which maintains two sets, namely \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) and \(\mathcal{F}^{(\mathbb{A})}_{S}\). \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) contains the features guaranteed to be included in the final answer computed by SuPFEx, and \(\mathcal{F}^{(\mathbb{A})}_{S}\) contains the candidate features to be pruned by the algorithm. At every step, the algorithm ensures that the set \(\mathcal{F}^{(\mathbb{A})}_{S}\cup\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) is sufficient and iteratively reduces its size by pruning proof features from \(\mathcal{F}^{(\mathbb{A})}_{S}\). The algorithm iteratively prunes the feature \(\mathcal{F}_{n_{i}}\) with the lowest value of \(P_{ub}(\mathcal{F}_{n_{i}})\) from \(\mathcal{F}^{(\mathbb{A})}_{S}\) to maximize the likelihood that \(\mathcal{F}^{(\mathbb{A})}_{S}\cup\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) remains sufficient at each step. At line 8 in the algorithm, \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) and \(\mathcal{F}^{(\mathbb{A})}_{S}\) are initialized as \(\{\}\) (the empty set) and \(\mathcal{F}\) respectively. Removing a single feature in each iteration and checking the sufficiency of the remaining features in the worst case leads to \(O(d_{l-1})\) verifier calls, which is infeasible. Instead, at each step, our algorithm greedily picks the top-\(|S|/2\) features \(\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) from \(\mathcal{F}^{(\mathbb{A})}_{S}\) based on their priority (line 10) and invokes the verifier \(\mathcal{V}\) to check the sufficiency of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) (line 12). If the feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) is sufficient (line 13), \(\mathbb{A}\) removes all features in \(\mathcal{F}^{(\mathbb{A})}_{S}\setminus\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) from \(\mathcal{F}^{(\mathbb{A})}_{S}\), and therefore \(\mathcal{F}^{(\mathbb{A})}_{S}\) is updated as \(\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) in this step (line 14).
Otherwise, if \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) does not preserve the property \((\phi,\psi)\) (line 15), \(\mathbb{A}\) adds all features in \(\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) to \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) (line 16) and replaces \(\mathcal{F}^{(\mathbb{A})}_{S}\) with \(\mathcal{F}^{(\mathbb{A})}_{S}\setminus\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) (line 17). The algorithm (\(\mathbb{A}\)) terminates after all features in \(\mathcal{F}^{(\mathbb{A})}_{S}\) are exhausted. Since at every step the algorithm reduces the size of \(\mathcal{F}^{(\mathbb{A})}_{S}\) by half, it always terminates within \(O(\log(d_{l-1}))\) verifier calls. **Limitations.** We note that the scalability of our method is ultimately limited by the scalability of the existing verifiers. Therefore, SuPFEx currently cannot handle networks for larger datasets like ImageNet. Nonetheless, SuPFEx is general and compatible with any verification algorithm. Therefore, SuPFEx will benefit from any future advances that enable neural network verifiers to scale to larger datasets. Next, we derive mathematical guarantees about the correctness and efficacy of Algorithm 1. For correctness, we prove that the feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) is always sufficient (Definition 3). For efficacy, we theoretically find a non-trivial upper bound on the size of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\). **Theorem 1**.: _If the verifier \(\mathcal{V}\) can prove the property \((\phi,\psi)\) on the network \(N\), then \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) computed by Algorithm 1 is sufficient (Definition 3)._ This follows from the fact that the SuPFEx algorithm ensures at each step that \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S}\) is sufficient. Hence, at termination the feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) is sufficient. The complete proof of Theorem 1 is in appendix A. Next, we find a non-trivial upper bound on the size of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) computed by the algorithm. **Definition 5**.: _For \(\mathcal{F}\), the zero proof feature set \(Z(\mathcal{F})\) denotes the proof features \(\mathcal{F}_{n_{i}}\in\mathcal{F}\) with \(P_{ub}(\mathcal{F}_{n_{i}})=0\)._ Note that any proof feature \(\mathcal{F}_{n_{i}}\in Z(\mathcal{F})\) can be trivially removed without breaking the proof. Further, we show that some additional proof features will be filtered out from the original proof feature set. So, the size of the proof feature set \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\) extracted by SuPFEx is guaranteed to be less than the value computed in Theorem 2. **Theorem 2**.: _Let \(P_{max}\) denote the maximum of all priorities \(P_{ub}(\mathcal{F}_{n_{i}})\) over \(\mathcal{F}\). Given any network \(N\) verified on \((\phi,\psi)\) with verifier \(\mathcal{V}\), then \(|\mathcal{F}^{(\mathbb{A})}_{S_{0}}|\leq d_{l-1}-|Z(\mathcal{F})|-\lfloor\frac{ \Lambda(\mathcal{A})}{P_{max}}\rfloor\)._ The exact proof for Theorem 2 can be found in Appendix A. ``` 1:Input: network \(N\), property \((\phi,\psi)\), verifier \(\mathcal{V}\). 2:Output: approximate minimal proof features \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\). 3:if\(\mathcal{V}\) can not verify \(N\) on \((\phi,\psi)\)then 4:return 5:endif 6:Calculate all proof features for input region \(\phi\). 7:Calculate priority \(P_{ub}(\mathcal{F}_{n_{i}})\) for all proof features \(\mathcal{F}_{n_{i}}\). 
8:Initialization:\(\mathcal{F}^{(\mathbb{A})}_{S_{0}}=\{\}\), \(\mathcal{F}^{(\mathbb{A})}_{S}=\mathcal{F}\) 9:while\(\mathcal{F}^{(\mathbb{A})}_{S}\) is not empty do 10:\(\mathcal{F}^{(\mathbb{A})}_{S_{1}}=\) top-\(|S|/2\) features selected based on \(P_{ub}(\mathcal{F}_{n_{i}})\) 11:\(\mathcal{F}^{(\mathbb{A})}_{S_{2}}=\mathcal{F}^{(\mathbb{A})}_{S}\setminus\mathcal{F }^{(\mathbb{A})}_{S_{1}}\) 12: Check sufficiency of \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) with \(\mathcal{V}\) on \((\phi,\psi)\) 13:if\(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) is sufficient then 14:\(\mathcal{F}^{(\mathbb{A})}_{S}=\mathcal{F}^{(\mathbb{A})}_{S_{1}}\) {all features in \(\mathcal{F}^{(\mathbb{A})}_{S_{2}}\) are pruned} 15:else 16:\(\mathcal{F}^{(\mathbb{A})}_{S_{0}}=\mathcal{F}^{(\mathbb{A})}_{S_{0}}\cup\mathcal{F }^{(\mathbb{A})}_{S_{1}}\) 17:\(\mathcal{F}^{(\mathbb{A})}_{S}=\mathcal{F}^{(\mathbb{A})}_{S_{2}}\) 18:endif 19:endwhile 20:return proof features \(\mathcal{F}^{(\mathbb{A})}_{S_{0}}\). ``` **Algorithm 1** Approx. minimum proof feature computation ### Interpreting proof features For interpreting proofs of DNN robustness, we now develop methods to analyze the semantic meaning of the extracted proof features. There exists a plethora of works that compute local DNN explanations (Sundararajan et al., 2017; Smilkov et al., 2017). However, these techniques are insufficient to generate an explanation w.r.t. an input region. To mitigate this, we adapt the existing local visualization techniques for visualizing the extracted proof features. Given a proof feature \(\mathcal{F}_{n_{i}}\), we intend to compute \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)=\mathbb{E}_{X\sim\phi}\,\mathcal{G}(n_{i},X)\), which is the mean gradient of the output of \(n_{i}\) w.r.t. the inputs from \(\phi\). For each input dimension (pixel in case of images) \(j\in[d_{0}]\), the \(j\)-th component of \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)\) estimates its relevance w.r.t. proof feature \(\mathcal{F}_{n_{i}}\): the higher the gradient value, the higher its relevance. Considering that the input region \(\phi\) contains infinitely many inputs, exactly computing \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)\) is impossible. Rather, we statistically estimate \(\mathcal{G}(\mathcal{F}_{n_{i}},\phi)\) with a reasonably large sample \(X_{S}\) drawn uniformly from \(\phi\). ## 5 Experimental Evaluation ### Experimental setup For evaluation we use convolutional networks trained on two popular datasets, MNIST (LeCun et al., 1989) and CIFAR-10 (Krizhevsky, 2009), shown in Table 1. The networks are trained with standard training and three state-of-the-art robust training methodologies: adversarial training (PGD training) (Madry et al., 2018), certified robust training (CROWN-IBP) (Zhang et al., 2020), and a combination of both adversarial and certified training (COLT) (Balunovic and Vechev, 2020). For our experiments, we use pre-trained publicly available networks: the standard and PGD-trained networks are taken from the ERAN project (Singh et al., 2019), COLT-trained networks from the COLT website (Balunovic and Vechev, 2020), and CROWN-IBP trained networks from the CROWN-IBP repository (Zhang et al., 2020). Similar to most of the existing works on neural network verification (Carlini and Wagner, 2017; Singh et al., 2019), we use \(L_{\infty}\)-based local robustness properties. 
Here, the input region \(\phi\) contains all images obtained by perturbing the intensity of each pixel in the input image independently within a bound \(\epsilon\in\mathbb{R}\). \(\psi\) specifies a region where the network output for the correct class is higher than that of all other classes. We use \(\epsilon_{train}=0.3\) for all robustly trained MNIST networks and \(\epsilon_{train}=8/255\) for all robustly trained CIFAR-10 networks. Unless specified otherwise, the proofs are generated by running the popular DeepZ (Singh et al., 2019) verifier. We perform all our experiments on a 16-core 12th-gen i7 machine with 16 GB of RAM. ### Efficacy of SuPFEx Algorithm In this section, we experimentally evaluate the efficacy of SuPFEx based on the size of the output sufficient feature sets. Given that there is no existing work for pruning proof feature sets, we use the upper bound computed in Theorem 2 as the baseline. Note that this bound is better than the size of the proof feature set extracted by the trivial algorithm that only removes "zero" features, which include the proof features \([l,u]\) where both \(l=u=0\) (Definition 5).

Figure 1: Distribution of the size of the proof feature set computed by the SuPFEx algorithm on COLT-trained networks.

For each network, we use \(500\) randomly picked images from their corresponding test sets. The \(\epsilon\) used for MNIST networks is \(0.02\) and that for CIFAR-10 networks is \(0.2/255\). We note that although the robustly trained networks can be verified robust for higher values of \(\epsilon\), it is not possible to verify standard networks with such high values. To achieve common ground, we use small \(\epsilon\) values for experiments involving standard networks and conduct separate experiments on only robustly trained networks with higher values of \(\epsilon\) (\(0.1\) for MNIST, \(2/255\) for CIFAR-10 networks). As shown in Table 1, we do not observe any significant change in the performance of SuPFEx w.r.t. different \(\epsilon\) values. In Table 1, we show the value of \(\epsilon\) used to define the region \(\phi\) in column 3, and the total number of properties proved out of 500 in column 4. The size of the original proof feature set corresponding to each network is shown in column 5, the mean and median of the proof feature set size computed using Theorem 2 in columns 6 and 7 respectively, and the mean and median of the proof feature set size computed using SuPFEx in columns 8 and 9 respectively. We note that feature sets obtained by SuPFEx are significantly smaller than the upper bound provided by Theorem 2. For example, in the case of the PGD-trained MNIST network with \(1000\) neurons in the penultimate layer, the average size computed from Theorem 2 is \(218.02\), while that obtained using SuPFEx is only \(5.57\). In the last two columns of Table 1, we summarise the percentage of cases where we are able to achieve a proof feature set of size less than or equal to \(5\) and \(10\) respectively. Figures 1a and 1b display histograms where the x-axis is the size of the extracted proof feature set using SuPFEx and the y-axis is the number of local robustness properties for COLT-trained DNNs. Histograms for other DNNs are presented in Appendix B. These histograms are skewed towards the left, which means that for most of the local properties, we are able to generate a small set of proof features using SuPFEx. 
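To make the priority computation of Eq. (4) and the iterative halving of Algorithm 1 concrete, the following is a minimal Python sketch. It is an illustration under stated assumptions, not the released implementation: `is_sufficient` stands in for a verifier call on the network restricted to the kept features, and the handling of the singleton base case is our addition to guarantee termination.

```python
import numpy as np

def priorities(C, W, l, u):
    # Eq. (4): P_ub(F_i) = |C^T W[:, i]| * max(|l_i|, |u_i|),
    # computed for all d_{l-1} features with vectorized operations.
    # C: output property vector (d_l,); W: final-layer weights (d_l, d_{l-1});
    # l, u: lower/upper interval bounds of the proof features (d_{l-1},).
    return np.abs(C @ W) * np.maximum(np.abs(l), np.abs(u))

def supfex(features, prio, is_sufficient):
    # Algorithm 1: S0 holds features guaranteed to be kept, S the candidates.
    # Invariant: S0 | set(S) is always sufficient (the full set is sufficient
    # because the property was verified before pruning).
    S0 = set()
    S = sorted(features, key=lambda f: prio[f], reverse=True)
    while S:
        if len(S) == 1:  # base case (our assumption, for termination)
            if not is_sufficient(S0):
                S0 |= set(S)
            break
        S1, S2 = S[: len(S) // 2], S[len(S) // 2:]  # top half by priority
        if is_sufficient(S0 | set(S1)):
            S = S1           # all of S2 can be pruned (line 14)
        else:
            S0 |= set(S1)    # S1 must be kept (lines 16-17)
            S = S2
    return S0
```

Since `S` halves in every iteration, the loop performs one `is_sufficient` check per level, matching the stated \(O(\log(d_{l-1}))\) bound on verifier calls.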
### Qualitative comparison of robustness proofs It has been observed in (Tsipras et al., 2019) that standardly trained networks rely on some of the spurious features in the input in order to gain a higher accuracy and, as a result, are not very robust against adversarial attacks. On the other hand, the empirically robustly trained networks rely more on human-understandable features and are, therefore, more robust against attacks. This empirical robustness comes at the cost of reduced accuracy. So, there is an inherent dissimilarity between the types of input features that the standard and adversarially trained networks rely on while classifying a single input. Also, certified robust trained networks are even more robust than the empirically trained ones; however, they report even less accuracy (Muller et al., 2021). In this section, we interpret proof features obtained with SuPFEx and use these interpretations to qualitatively check whether the dissimilarities are also evident in the invariants captured by the different proofs of the same robustness property on standard and robustly trained networks. We also study the effect of certified robust training methods like CROWN-IBP (Zhang et al., 2020), empirically robust training methods like PGD (Madry et al., 2018), and training methods that combine both adversarial and certified training like COLT (Balunovic and Vechev, 2020) on the proof features. For a local input region \(\phi\), we say that a robustness proof is semantically meaningful if it focuses on the relevant features of the output class for images contained inside \(\phi\) and not on the spurious features. In the case of MNIST or CIFAR-10 images, spurious features are the pixels that form a part of the background of the image, whereas important features are the pixels that are a part of the actual object being identified by the network. The gradient map of the extracted proof features w.r.t. the input region \(\phi\) gives us an idea of the input pixels that the network focuses on. We obtain the gradient maps by calculating the mean gradient over 100 uniformly drawn samples from \(\phi\) as described in Section 4.3. As done in (Tsipras et al., 2019), to avoid introducing any inherent bias in proof feature visualization, no preprocessing (other than scaling and clipping for visualization) is applied to the gradients obtained for each individual sample.

Figure 2: The top proof feature corresponding to DNNs trained using different methods relies on different input features.

In Fig. 2, we compare the gradient maps corresponding to the top proof feature (the one having the highest priority \(P_{ub}(\mathcal{F}_{n_{i}})\)) on networks from Table 1 on representative images of different output classes in the MNIST and CIFAR10 test sets. The experiments lead us to interesting observations: even if some property is verified for both the standard network and the robustly trained network, there is a difference in the human interpretability of the types of input features that the proofs rely on. The standard networks and the provably robust trained networks like CROWN-IBP are the two extremes of the spectrum. For the networks obtained with standard training, we observe that although the top proof feature does depend on some of the semantically meaningful regions of the input image, the gradient at several spurious features is also non-zero. 
On the other hand, the top proof feature corresponding to the state-of-the-art provably robust training method CROWN-IBP filters out most of the spurious features, but it also misses out on some meaningful features. The proofs of PGD-trained networks filter out the spurious features and are, therefore, more semantically aligned than those of the standard networks. The proofs of the training methods that combine both empirical robustness and provable robustness, like COLT, in a way provide the best of both worlds by not only selectively filtering out the spurious features but also highlighting the more human-interpretable features, unlike the certifiably trained networks. So, as the training methods tend to regularize more for robustness, their proofs become more conservative in relying on the input features. To further support our observation, we show additional plots for the top proof feature visualization in Appendix B.2 and visualization for multiple proof features in Appendix B.4. We also conduct experiments for different values of \(\epsilon\) used for defining \(\phi\). The extracted proof feature sets w.r.t. high \(\epsilon\) values (\(\epsilon=0.1\) for MNIST and \(\epsilon=2/255\) for CIFAR-10) are similar to those generated with smaller \(\epsilon\). The gradient maps corresponding to the top feature for higher \(\epsilon\) values are also similar, as shown in Appendix B.3. For COLT-trained MNIST networks, in Appendix B.5 we compare the gradients of the top proof features retained by SuPFEx with those of the pruned proof features with low priority. As expected, gradients of the pruned low-priority proof features contain spurious input features. ### Sensitivity analysis on training parameters It is expected that DNNs trained with larger \(\epsilon_{train}\) values are more robust. So, we analyze the sensitivity of the extracted proof features to \(\epsilon_{train}\). We use the DNNs trained with PGD and COLT and \(\epsilon_{train}\in\{0.1,0.3\}\) on the MNIST dataset. Fig. 3 visualizes proof features for these DNNs; additional plots are available in Appendix B.6. We observe that by increasing the value of \(\epsilon_{train}\), the top proof feature filters out more input features. This is aligned with our observation in Section 5.3 that more robustly trained neural networks are more conservative in using the input features. ### Comparing proofs of different verifiers The proof features extracted by SuPFEx are specific to the proof generated by the verifier. In this experiment, we compare proof features generated by two popular verifiers, IBP (Zhang et al., 2020) and DeepZ, on the networks shown in Table 1 for the same properties as before. Note that, although IBP is computationally efficient, it is less precise than DeepZ. For standard DNNs, most of the properties cannot be proved by IBP. Hence, in this experiment, we omit standard DNNs and consider only the properties that can be verified by both DeepZ and IBP. Table 2 presents the % of cases where the top proof feature computed by both verifiers is the same (column 2), the % of cases where the top-5 proof features computed by both verifiers are the same, and the % of cases where the complete proof feature sets computed by both verifiers are the same. We observe that for the MNIST dataset, in 100% of the cases for PGD-trained and COLT-trained networks and in 99.79% of the cases for the CROWN-IBP trained networks, the top feature computed by both verifiers is the same. A detailed table is available in Appendix B.7. 
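The mean-gradient visualization used throughout this section (described in Section 4.3) also admits a short sketch. This is our illustrative reading, not the released code: the scalar model head for neuron \(n_i\), the uniform \(L_{\infty}\)-ball sampling, and the scaling/clipping step are assumptions consistent with the description above.

```python
import torch

def proof_feature_gradient_map(model_upto_ni, x0, eps, n_samples=100):
    # Estimate G(F_ni, phi) = E_{X ~ phi} of dn_i(X)/dX by sampling
    # uniformly from the L_inf ball phi = [x0 - eps, x0 + eps].
    # model_upto_ni: maps an input tensor to the scalar output of neuron n_i.
    grads = []
    for _ in range(n_samples):
        x = (x0 + eps * (2 * torch.rand_like(x0) - 1)).requires_grad_(True)
        out = model_upto_ni(x)           # scalar activation of n_i
        out.backward()
        grads.append(x.grad.detach())
    g = torch.stack(grads).mean(dim=0)   # mean gradient over the samples
    # Scale and clip for visualization only (no other preprocessing).
    g = g / (g.abs().max() + 1e-12)
    return g.clamp(-1, 1)
```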
## 6 Conclusion In this work, we develop a novel method called SuPFEx to interpret neural network robustness proofs. We empirically establish that even if a property holds for a DNN, the proof for the property may rely on spurious or semantically meaningful features depending on the method used to train the DNNs. We believe that SuPFEx can be applied for diagnosing the trustworthiness of DNNs inside their development pipeline. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Training & \multicolumn{2}{c}{\% proofs with the} & \multicolumn{2}{c}{\% proofs with the} & \multicolumn{2}{c}{\% proofs with the} \\ Method & \multicolumn{2}{c}{same top feature} & \multicolumn{2}{c}{same top-5 features} & \multicolumn{2}{c}{same feature set} \\ & MNIST & CIFAR10 & MNIST & CIFAR10 & MNIST & CIFAR10 \\ \hline PGD Trained & 100 \% & 100 \% & 92.0 \% & 98.31 \% & 92.0 \% & 96.87 \% \\ COLT & 100 \% & 97.87 \% & 87.17 \% & 92.53 \% & 82.05 \% & 89.36 \% \\ CROWN-IBP & 99.79 \% & 100 \% & 96.26 \% & 97.92 \% & 93.15 \% & 95.89 \% \\ \hline \hline \end{tabular} \end{table} Table 2: Comparing proofs of IBP & DeepZ Figure 3: Visualization of gradients of the top proof feature for PGD and COLT networks trained using different values of \(\epsilon_{train}\).
2309.09781
Talking to Data Visualizations: Opportunities and Challenges
Speech is one of the interaction modalities that we increasingly come across in natural user interfaces. However, its use in collaborative scenarios has not yet been thoroughly investigated. In this reflection statement, we discuss the opportunities and challenges of integrating speech interaction in multimodal solutions for collaborative work with data visualizations. We discuss related findings from other research communities and how we could build upon their work to explore and make use of speech interaction for data visualizations in co-located, hybrid, and remote settings.
Gabriela Molina León, Petra Isenberg, Andreas Breiter
2023-09-18T13:59:03Z
http://arxiv.org/abs/2309.09781v1
# Talking to Data Visualizations: Opportunities and Challenges ###### Abstract Speech is one of the interaction modalities that we increasingly come across in natural user interfaces. However, its use in collaborative scenarios has not yet been thoroughly investigated. In this reflection statement, we discuss the opportunities and challenges of integrating speech interaction in multimodal solutions for collaborative work with data visualizations. We discuss related findings from other research communities and how we could build upon their work to explore and make use of speech interaction for data visualizations in co-located, hybrid, and remote settings. Human-centered computing--Interaction design; Human-centered computing--Collaborative and social computing systems and tools; Human-centered computing--Visualization--Visualization design and evaluation methods ## 1 Introduction Since the launch of ChatGPT in November 2022 [15], the interest of the general public in artificial intelligence (AI) has vastly increased [16]. Among the most notable features of this chatbot is its ability to process questions in more than 50 languages and to generate answers accordingly. As a result, more than 100 million people have interacted with it worldwide so far [5]. In the context of such technological advancements, we want to reflect on the opportunities and challenges that leveraging natural language interaction may pose for collaborative data visualization. More specifically, we examine the case where natural language is used via voice input and not only via written text. Oral communication plays a critical role in collaborative activities, and thus, speech input can leverage people's existing language skills and provide new ways of interacting with data. In this reflection statement, we highlight the opportunities and challenges that we consider most important for introducing speech interaction into collaborative visual systems. Most previous work in speech interaction with data visualizations has focused on single-user scenarios [10, 23]. We build on this work by discussing how we could transfer findings on speech interaction for single-user activities to the design of collaborative systems. Accordingly, we propose a research agenda to understand better how speech interaction could support or influence collaboration. The opportunities and challenges we discuss are by no means exhaustive. Our goal is to start a discussion on the potential role of speech interaction in different collaboration settings. ## 2 Speech Interaction Speech interaction refers to the use of voice commands to produce a change or response in a computing system. In the field of human-computer interaction, using natural language to interact is recommended due to its expressiveness and the possibility of supporting interaction by broad audiences [21]. Interacting through written text already works successfully in commercial visualization tools such as Tableau [25]. Moreover, speech is considered an interaction modality that can help make data visualizations more accessible to, for example, blind and low vision users [9]. However, speech commands can be hard to discover [4], so using them efficiently requires a learning phase. Baughan et al. [3] analyzed the failures of voice assistants and found that incorrect command execution, missed triggers, and overcapturing affect user trust. 
Prior work on visualization systems for single-user scenarios suggests that the limitations of speech interaction can be addressed by a multimodal interaction design. Kim et al. [10] designed a mobile application that leverages touch and speech interaction to explore personal health data. In their study, participants found speech fast and flexible to make comparison queries and considered that the combination of touch and speech was helpful to refine previous commands. Aurisano et al. [1] proposed to combine speech and mid-air gestures to create and manipulate views on a wall display and found that their study participants combined both modalities efficiently to create multiple views. Srinivasan et al. [23] proposed combining touch, pen, and speech interaction to work with unit visualizations. They found that using direct manipulation and speech interaction facilitated a fluid experience and that participants solved speech recognition errors using touch and pen. For multi-user scenarios, Tse et al. [26] started exploring the multimodal design space for co-located collaboration combining speech and touch in a tabletop application. The authors concluded that if most interaction techniques are associated with speaking, that may influence how often collaborators talk to each other. Additionally, the authors suggested using speech commands for global mode switches and touch gestures for individual mode switches. However, speech interaction may have different effects and uses in hybrid and remote settings, where people are not necessarily next to each other, may move around vertical displays, and work asynchronously. Figure 1: Examples of collaborative scenarios: (a) A presenter giving a lecture to an audience. (b) Two collaborators interacting with data visualizations. (c) Two persons interacting through virtual reality. Silhouettes by Gineton Rodrigues [18], © CC-BY 4.0. ## 3 Opportunities In the following, we describe the main opportunities that leveraging speech interaction can provide in co-located, hybrid, and remote collaborative work with data visualizations. ### _Interacting from any distance_ Speech input enables distant interaction, which is relevant in collaborative scenarios involving large vertical displays. People tend to navigate physically by walking in front of such screens, and physical navigation may correlate with user performance [2]. In a study that we conducted recently, participants favored speech over mid-air gestures to interact with data visualizations from afar [12]. While speech can facilitate distant interaction, other modalities can enable interaction up close in a multimodal system. For example, Srinivasan et al. [23] proposed to explore network data on large vertical displays combining direct manipulation (touch and pen) with speech interaction. Such a combination of modalities can be of use not only in co-located scenarios but also in hybrid and remote setups. In those cases, participants may wish to move even while alone in the room (e.g., in a virtual reality scenario as shown in Fig. 1c). ### _Requiring only a microphone_ Regarding hardware, speech interaction only requires a working microphone to listen to the speech commands. Therefore, users are free to use their hands for interacting with the system in other ways. Speech neither requires looking at a screen nor an additional device, so gaze distractions are not a problem. 
Regardless of whether the collaboration is happening in a co-located, hybrid, or remote setting, the physical characteristics of the interaction do not change. Additionally, using a microphone is a standard in collaborative commercial platforms, especially when the collaborators need to communicate synchronously. Thus, no extra equipment is necessary. Giving each person access to a microphone in a co-located scenario, however, is recommended to ensure accuracy and to identify each speaker. From a software perspective, speech recognition only demands using dedicated libraries or tools, such as web APIs (e.g., Mozilla's API [14]) or AI-based tools to support a custom vocabulary (e.g., Picovoice [7]). Nowadays, being an AI expert is not a requirement to leverage AI-based approaches. While free and open-source toolkits are still rare, there are already a few options, such as Vosk [6]. ### _Adapting to speech-heavy activities_ Collaborative scenarios can involve not only sensemaking but also other activities, such as presenting and teaching about visualization techniques and tools (e.g., Fig. 1a), where one person is responsible for most of the oral communication. In such cases, we could incorporate speech commands directly into the presentation or teaching content. The presenter or the audience can then use other modalities to interact while the person is talking. Interactive presentations could work in co-located, hybrid, or remote settings, as long as the setup includes a microphone. Although voice assistants are usually activated with a wake word (e.g., "Alexa", "Hey Siri", etc.), using such a phrase is not required. Instead, the system could listen to the conversation, waiting for an opportunity to participate [27], for example, to provide a data fact relevant to the discussion. Studying storytelling techniques may help to propose ways of incorporating speech commands into a presentation or lecture. For example, Shin et al. [22] proposed a related solution by generating natural language narratives to present a sequence of data changes. Their system created textual narratives and animations to highlight temporal changes in a scatterplot. Thus, the presenter could trigger the animations with commands that form parts of their speech and use other interaction modalities for further effects. ### _Supporting multilingual interaction_ Nowadays, speech interaction tools support the recognition of dozens of languages, which could support collaborative work between people who speak different languages. Previous work shows that people appreciate speech interaction because they can refer orally to concepts that are tedious to specify as data queries through direct manipulation [21]. The option to interact orally in their native language may improve the user experience of the collaborators. Moreover, a system could provide similar flexibility for using expert and non-expert terms to interact orally. For example, a person could say "draw a line that shows if my points are increasing or decreasing" while someone else could say "add a linear trend line". ## 4 Challenges We now present the main challenges that researchers and practitioners may face when leveraging speech interaction in the design of collaborative systems. ### _Recognition errors_ In the context of interactive data visualization, Srinivasan and Stasko [24] reported that study participants sometimes became frustrated while interacting due to speech detection errors. 
When voice assistants fail to understand speech commands time after time, people tend to trust them less [3]. Providing multimodal interaction may be a solution to this challenge. For example, in the study of Sakheeswaran et al. [19], participants appreciated being able to correct speech recognition and ambiguity errors via touch. Moreover, we are currently witnessing significant improvements in voice recognition systems. The recent launch of deep-learning-based tools such as Whisper [17], Vosk [6], and Picovoice [7] suggests promising advances in speech interaction accuracy. ### _Collaboration conflicts_ Given that dialogue is a fundamental aspect of collaboration, using speech to interact with the system, and not only with other humans, may disturb the conversation flow. Collaboration partners may hesitate to use speech commands to avoid interrupting each other. However, in our recent elicitation study [12], participants sometimes wished for the system to listen to them and directly map part of their conversation to interactions with the data visualizations. Furthermore, problems may arise during turn-taking, as overlapping speech commands may make speech recognition difficult. As the number of collaborators increases, avoiding overlapping voices becomes harder. In this situation, it is also crucial that the system gives appropriate feedback to help the collaborators understand whether the speech commands are understood and processed. ### _Privacy concerns_ People may not necessarily be comfortable with a device constantly listening to them, waiting for voice input. Although the wish for privacy-friendly designs can depend on the country and social norms [20], privacy is a primary factor for user acceptance. Moreover, people are wary of having voice recordings of themselves and others saved and made available online through cloud services they do not trust. Thus, it is necessary to find alternatives to constant listening and storing voice recordings [11]. Additionally, speech interaction requires talking out loud, which does not allow users to make their interactions private. However, that may not be a problem in asynchronous collaborations. Also, multimodal systems could support private interactions through other modalities to complement speech-based public interactions [26]. To limit data sharing, there are speech tools that do not require processing voice input online (e.g., Picovoice [7]); they process it on the local machine, which may be a suitable alternative. ### _Support for language learners_ Although chatbots and voice assistants support multiple languages, we must consider that many people interact with computing systems in English or another language that may not be their first language. Having an accent or not being familiar with the pronunciation of specific words can be an obstacle to speech recognition [13, 24]. Similarly, there are thousands of spoken languages worldwide, and thus, speech interaction is limited by the languages supported by the corresponding software. Machine translation advances may provide more support in the future, but machine translation has limits as well. ## 5 Research Agenda Based on the opportunities and challenges mentioned above, we list the research directions we consider most critical and promising for developing scientific knowledge on the role of speech interaction in multimodal and collaborative scenarios. 
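As a concrete illustration of the local, offline processing mentioned under privacy concerns, the following is a minimal sketch using the open-source Vosk toolkit [6]. It is only a sketch of the general pattern: it assumes a locally available Vosk English model and the PyAudio package for microphone capture, and its point is that no audio needs to leave the machine.

```python
import json
import pyaudio
from vosk import Model, KaldiRecognizer

model = Model(lang="en-us")             # assumes a local Vosk model
recognizer = KaldiRecognizer(model, 16000)

mic = pyaudio.PyAudio()
stream = mic.open(format=pyaudio.paInt16, channels=1, rate=16000,
                  input=True, frames_per_buffer=8000)

while True:
    data = stream.read(4000)
    if recognizer.AcceptWaveform(data):  # a full utterance was recognized
        text = json.loads(recognizer.Result())["text"]
        print(text)  # e.g., hand the utterance to a command dispatcher
```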
### _Understanding speech in collaboration_ As research on conversational user interfaces and multimodal interaction has focused on single-user scenarios, we need to conduct more studies to investigate different collaborative activities to understand how speech interaction can support and affect collaborative work with data visualizations. Similar to how Kim et al. [9] created and published a corpus of questions that participants posed in their study for future speech-based design, collaboration studies could help generate such corpora of questions and dialogues that may be relevant for designing collaborative systems. Collecting these data can help us understand better what tasks are most suitable to perform via speech during collaborative work and what commands are relevant for different collaboration settings and styles. ### _Leveraging more design methods_ Although current commercial products supporting natural language have known limitations in speech recognition, that should not be an impediment to investigating the potential of speech interaction. Given the latest advancements in AI, natural language technologies may evolve rapidly. We should think beyond the current technological capabilities, as diverse research communities have successfully done. Voice assistant researchers often investigate and propose conversational scenarios through storyboards and Wizard of Oz studies (e.g., [8, 27]) to investigate the relevant factors in conversational interface design. We should leverage such methods to inform the design of future visual systems by exploring what could help collaborators most and how. ### _Exploring collaborative and multimodal solutions_ As mentioned above through different examples, many limitations of speech interaction can be addressed by combining speech with other interaction modalities. Prior work suggests that combining natural language with direct manipulation is a promising solution [23], as well as combining speech commands with mid-air gestures [1]. We should examine how collaboration partners use multiple interaction modalities and how those may influence their strategies and interaction choices. Combining modalities that support up-close and distant interaction could help support collaboration at different distances from the visualizations [12]. ### _Providing voice-based feedback_ Natural language interfaces can include not only voice input but also voice output. Given the advancements in voice assistants, we should explore the scenarios in which a system can generate prompts in natural language and be part of the conversation. Zargham et al. [27] already started exploring these scenarios by investigating in what situations a proactive voice assistant could intervene in a decision-making conversation or debate. In the context of presentations and teaching activities, we should consider how a voice assistant could support explanatory visualizations or how it could explain how to read a visualization. Hence, it is worth investigating how the human-assistant interaction could work and how the participation of voice assistants may influence the collaborators. This would be an interdisciplinary endeavour to better understand flaws in communication. 
Accordingly, we discussed how to leverage speech in diverse collaborative activities and setups involving data visualizations. Given the recent advancements in AI-based approaches for natural language interaction, we should investigate how natural language could facilitate working with data visualizations from any distance, in a group, and even in multiple languages. We should explore this topic from the different perspectives of the related research communities, such as those working on human-computer interaction, natural language processing, and computer-supported cooperative work. Such interdisciplinary efforts will help us better understand whether and how speech could become a valuable interaction modality in collaborative visual systems. ## Acknowledgments This work was partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 374666841 - SFB 1342.
2309.07974
A Data Source for Reasoning Embodied Agents
Recent progress in using machine learning models for reasoning tasks has been driven by novel model architectures, large-scale pre-training protocols, and dedicated reasoning datasets for fine-tuning. In this work, to further pursue these advances, we introduce a new data generator for machine reasoning that integrates with an embodied agent. The generated data consists of templated text queries and answers, matched with world-states encoded into a database. The world-states are a result of both world dynamics and the actions of the agent. We show the results of several baseline models on instantiations of train sets. These include pre-trained language models fine-tuned on a text-formatted representation of the database, and graph-structured Transformers operating on a knowledge-graph representation of the database. We find that these models can answer some questions about the world-state, but struggle with others. These results hint at new research directions in designing neural reasoning models and database representations. Code to generate the data will be released at github.com/facebookresearch/neuralmemory
Jack Lanchantin, Sainbayar Sukhbaatar, Gabriel Synnaeve, Yuxuan Sun, Kavya Srinet, Arthur Szlam
2023-09-14T18:17:16Z
http://arxiv.org/abs/2309.07974v1
# A Data Source for Reasoning Embodied Agents ###### Abstract Recent progress in using machine learning models for reasoning tasks has been driven by novel model architectures, large-scale pre-training protocols, and dedicated reasoning datasets for fine-tuning. In this work, to further pursue these advances, we introduce a new data generator for machine reasoning that integrates with an embodied agent. The generated data consists of templated text queries and answers, matched with world-states encoded into a database. The world-states are a result of both world dynamics and the actions of the agent. We show the results of several baseline models on instantiations of train sets. These include pre-trained language models fine-tuned on a text-formatted representation of the database, and graph-structured Transformers operating on a knowledge-graph representation of the database. We find that these models can answer some questions about the world-state, but struggle with others. These results hint at new research directions in designing neural reasoning models and database representations. Code to generate the data will be released at github.com/facebookresearch/neuralmemory. ## Introduction Advances in machine learning (ML) architectures [23], large datasets and model scaling for pre-training [15, 16, 17], modeling approaches [20, 21], and multiple dedicated reasoning datasets [22, 23, 24] have driven progress in both building models that can succeed in aspects of "reasoning" and in automatically evaluating such capabilities. This has been evident particularly in the text setting, but also in computer vision [14, 15, 26]. In parallel, the last decade has seen advances in the ability to train embodied agents to perform tasks and effect change in their environments. These have also been powered in part by data, with many environments made available for exploring modeling approaches and benchmarking. In particular, with respect to "reasoning" in embodied agents, there have been works showing that adding inductive biases to support reasoning can lead to improved performance with end-to-end training [16], and other works have shown how models can be augmented with extra supervision [18]. Recently, several works have shown how large language-model pre-training can be used to affect planners for embodied agents [10, 15]. More generally, symbolic representations can be a hub for connecting perception, memory, and reasoning in embodied agents. Figure 1: (Top): Example of a generated scene in our 3d gridworld. (Middle): Given the 3d scene, we can convert the information in the render to a text or structured representation. Here we show the text Sequence Context representation. "inst_segs" represent the block items such as structures or holes. (Bottom): For a particular scene, we can generate a wide variety of queries. Here we show an example of a distance query asking which object is the closest to the speaker, which is the chicken named "marbles". However, the growing literature in NLP reasoning models is missing data grounded in a dynamic and agent-alterable world. Models trained on traditional text datasets struggle to handle physically grounded queries such as those that involve geometric reasoning. In other words, recent large language models trained on internet data are not well equipped to answer simple questions about a physical environment such as "who is to my left?". Grounding large-language models may allow them more powerful reasoning; and vice versa, may help us use them as agent controllers. 
In this work we describe a data source (i.e., a toolbox to generate data) designed to help train ML models grounded in a physical environment, allowing them to make the connection between perception, memory, and reasoning. It consists of context-question-answer triples, where the context corresponds to a dynamic and agent-affected 3d gridworld, and the questions may involve temporal or spatial reasoning, as well as questions about the agent's own actions. A sample generated from our data source is shown in Fig. 1. While the environment allows rendering the world-context as a sequence of images, one of our goals is to support research toward answering the question "what are good formats for agent memory systems?". In pursuit of this, we abstract the context to a database format that does not require any perceptual modules, and provide code for converting the database into a templated text dump, as demonstrated in Fig. 1 (middle). Here, the facts within each timestep are written in random order, and successive timesteps are written sequentially. Our hope is that the data source can be used for augmenting the training (or allowing the assembly) of reasoning embodied agents by bringing to bear the advances in reasoning in language models, or as a supplement to training language models with grounding from embodied agents. We train baseline neural models to represent the database and process the queries. These include finetuning pre-trained language models on the text version of the database, and Transformers that input the structured database directly. We find that while certain queries are easily solved by these baselines, others, in particular those having to deal with spatial geometry, are more difficult. In short, the contributions of this paper are: **Environment**: We introduce an environment for embodied agents and a data source for generating data to train agents in this environment (detailed in the Environment, Queries, and Data section). We provide the code to generate world contexts, as well as complex queries. We hope this will aid researchers in isolating and tackling difficult problems in reasoning for embodied agents. **Baselines**: We evaluate the abilities of baseline models to answer queries in this environment (Experiments section). We compare different representations of the world context, including a pure text based representation as well as a more structured representation. ## Environment, Queries, and Data We propose a context-question-answer data generator for embodied agents. In this section, we outline the context, or environment, we generate data in, the types of queries we create for the agent to solve, and specifics of the data samples. ### Environment We work in a finite three-dimensional gridworld. There is a primary agent, zero or more other agents, zero or more get/fetchable items, and zero or more placeable/breakable blocks. The other agents come in two types: they might represent human "players" that can give commands to the agent and have the same capabilities as the agent, or animate non-player characters (NPCs) that follow simple random movement patterns. The placeable/breakable blocks have colors and integer grid coordinates; all other objects have float coordinates. Animate objects (the players, NPCs, and the agent) have a yaw and pitch pose representing the direction they are looking, in addition to three location coordinates. 
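To make the abstracted world-state concrete, here is a minimal sketch of how one snapshot of such an environment could be laid out in Python. The class and field names are our own illustrative choices rather than the released schema, but they follow the entity types and coordinate conventions described above (float coordinates and pose for animate objects, string-valued facts attached by unique identifiers).

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ReferenceObject:
    """One entity in a snapshot: the agent, a player, an NPC, or a block item.
    Blocks would use integer grid coordinates; animate objects use floats."""
    name: str
    x: float
    y: float
    z: float
    pitch: float = 0.0          # pose: only meaningful for animate objects
    yaw: float = 0.0
    memid: str = field(default_factory=lambda: uuid.uuid4().hex)

@dataclass
class Triple:
    """A subject-predicate-object fact about a reference object."""
    subj_memid: str
    pred: str                   # e.g. "has_tag", "has_color"
    obj: str                    # e.g. "mottled", "brown"
    memid: str = field(default_factory=lambda: uuid.uuid4().hex)

# A toy snapshot: one NPC and two facts about it.
bob = ReferenceObject(name="bob", x=3.5, y=0.0, z=7.2, yaw=90.0)
facts = [Triple(bob.memid, "has_tag", "horse"),
         Triple(bob.memid, "has_color", "brown")]
```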
To build a scene, we generate some random objects (spheres, cubes, etc.), and randomly place a number of NPCs, a player, and an agent. With some probability, the agent executes a command (to either build an object, destroy an object, move to a location, dig a hole, follow an NPC, etc.). The execution of a command is scripted; the task executor is from the Minecraft agent in [20]. Whether or not the agent executes a task, the world steps a fixed number of times (so, e.g., NPCs may move or act). In the experiments described below, the world is fully observed at a fixed number of temporal snapshots, and all poses, object locations and NPC movements are recorded. However, not every world step is snapshotted, so the total sequence is not fully observed. Following [20], the environment is presented to the agent as an object-centered key-value store. Each object, NPC, and the agent's self have a "memid" keying a data structure that depends on the object type, and may contain string data (for example a name) or float or integer data (e.g. the pose of an NPC). The key-value store also has subject-predicate-object triples (e.g. "memid has tag mottled"); the triples themselves also have a unique memid as key. The generated scene presented as the key-value store defines the agent's context, \(C\). In our experiments below, we represent this context in one of two ways. The first is a text sequence (\(C_{t}\)), where for each snapshot step, all objects and their properties in the key-value store are flattened into a templated language, as shown in Fig. 1 (middle). Multiple time snapshots of the context are represented by appending each successive event in the sequence. While this is in some sense simplistic, it allows ready application of large pre-trained language models for ingesting the context; and this kind of approach has been shown to be effective in [21, 22]. Alternatively, we can leverage the relational properties of the context by representing it as a graph where objects and properties are nodes, and the connections between them are the edges. For example, if we know that "bob" is a horse and is the color brown, the node for bob connects to a "horse" and a "brown" object. Specifically, this "Structured Context" contains reference object nodes (\(C_{R}\)), which are the object instantiations, and triple nodes (\(C_{T}\)), which are the properties of the reference objects. Each reference object node holds the following information: _reference_object_hash_ (R_id) is a unique identifier for each reference object, _reference_objects_words_ holds identifier words of the object such as its name, and _reference_objects_float_ is the floating point properties of the object such as its (x, y, z) coordinates and pitch/yaw. These are combined into a single node (detailed in the Models section). Similarly, each property triple is composed of the following elements: _triples_hash_ (T_id) contains a unique identifier for the triple, as well as the reference object hash that it is linked to, and _triples_words_ is the descriptive text of a reference object's property such as "has_color blue". These are also combined into a single node. We therefore have nodes of both reference objects and triples, and the hashes encompass the edges or relationships between them. We do not consider the text sequence or graph-structured representations to be canonical. On the contrary, our goal is to stimulate research into what the correct representation of world state should be to allow easier model training and transfer between agents. 
We simply provide these two representations as examples and baselines. Given the determined set of snapshots, queries are designed to be answerable. That is, all information is known to answer the questions. We leave making ambiguous queries to future work, but note here that it is not difficult to purposefully build and record queries that could theoretically be answered in some scene but cannot be answered in the particular scene instantiation, for example because they refer to an event that occurred between snapshots. It would otherwise be easy to restrict full observability within each snapshot. ### Queries The embodied agent environment allows for a rich set of possible queries to ask the agent. We structure the types of queries we use into three main categories, as covered in Table 1. _Property_ queries are those which operate on the current state of the memory, such as the current properties or locations of an object, and are given with an explicit relation. Property queries with a single clause can be read directly from the database or text dump without any "reasoning". _Temporal_ queries are those which operate over spans of the memory, such as the movement of an object. _Geometric_ queries are those concerned with the geometric structure of the environment, such as how far away two objects are from each other. Note that many queries are mixes of these types, and the categorization is blurred. Within each query class, there are several different "clause" types. These can be combined as applicable into a multi-clause query, where the clauses are combined by an "and" or "or" conjunction randomly. In addition, each clause can be negated by prepending the word "not". For example, "what are the names of the objects that do not have the property brown and where the x coordinate is less than 4". The query, \(Q\), is composed of one or both of the following representations: _query_text_ (\(Q_{t}\)): a text representation of the query (e.g. "find the reference_objects with color brown"), and _query_tree_logical_form_ (\(Q_{lf}\)): a tree logical form representation of the query (e.g. two clauses in an "and" or "or" query are connected by a parent in the tree). Given the context and query, the agent should return some answer, \(A\). Depending on the query type, we randomly ask for one of the following answer types in the query: _name_ ("what is the name of..."), _properties_ ("what are the properties of..."), _location_ ("what is the location of..."), _distance_ ("how far is..."), and _count_ ("how many..."). In general, we are interested in predicting the text answer, such as "brown" for the query "what is the color of the horse?". However, we may also be interested in pointing to the relevant context objects or properties (in the structured context representation). Thus, for each query, we provide both the text answer as well as the relevant context node IDs (from \(C_{R}\) and \(C_{T}\)). ### Data With this data generation framework, we can create arbitrary amounts of simulated data. Each data sample contains a (\(C\), \(Q\), \(A\)) triple. There are several parameters of the world that affect the difficulty of question answering, including the size of the 3d gridworld (e.g., 15x15x15), the set of possible objects in the world (e.g., bob, alice, ...), and the set of possible properties that can belong to the objects (e.g. cow, horse, blue, green, ...). The number of time-steps and number of snapshots also crucially affect the difficulty of the problem. Similarly, the distributions over the queries (e.g., how many and what kind of clauses) can make the problem more difficult.

Figure 2: Structured context + Transformer model. The bottom left demonstrates the structured representation of the 3d gridworld, where R_id is the unique reference object identifier, and T_id is the unique triple property identifier (which connects to one of the reference objects via R_id). Context nodes (\(C_{\{R,T\}}\)) and query tokens (\(Q_{t}\)) are first featurized with learnable embedding layers. We process the featurized context and query jointly with a Transformer encoder that considers the context structure via relational embeddings (\(r_{ij}\)). Finally, a text decoder predicts tokens, and a memid decoder predicts relevant context memids (not pictured).

## Related Work Real-world QA datasets have long been used to test different aspects of ML model performance, such as reading comprehension Rajpurkar et al. (2016); Hill et al. (2016), commonsense reasoning Talmor et al. (2019), multi-hop reasoning Yang et al. (2018), and visual understanding Agrawal et al. (2015); Hudson and Manning (2019). While real-world datasets can provide reliable performance benchmarks and better approximate the problems faced by practitioners, synthetic datasets can allow for more control and the ability to isolate the exact limitations of current models. Notably, bAbI Weston et al. (2016) is a set of toy QA tasks testing various reasoning abilities over short text stories that showed the limitations of recurrent neural networks. Since proposed, all bAbI tasks have been solved by novel memory architectures Henaff et al. (2017); Dehghani et al. (2019). A gridworld environment for embodied agents with language instructions for tasks is described in Chevalier-Boisvert et al. (2018); our work here is complementary, giving question-answer pairs based on abstracted environment histories. CLEVR Johnson et al. (2017) is a popular synthetic dataset for testing visual reasoning given text queries. Yi et al. (2020) extends CLEVR to reasoning over temporal events in videos. Embodied question answering (EmbodiedQA) Das et al. (2018) proposes a task where an agent must navigate an environment in order to answer a question. VideoNavQA Cangea et al. (2019) was proposed in the EmbodiedQA domain to evaluate short video-question pairs. Our work has elements of each of these. The agent is embodied, and might need to answer questions about its actions or hypotheticals, but does not need to act or change the current state of the environment to answer (as in EmbodiedQA). In comparison to the EmbodiedQA dataset, where the agent has to query the environment to get more information, our setting doesn't require the agent to interact and get more information. As in Yi et al. (2020), the agent needs to be able to reason over spatio-temporal events, in our case including its own actions. As in CLEVR, we use programmatically generated queries to probe various reasoning modalities. One large difference between this work and those is that we do not focus on computer vision. While it is possible to render the scenes from our data generator, our goal is to be agnostic about perceptual modality and abstract away perceptual modeling, and to the extent possible, focus on the reasoning aspects of the data. Within the vision community, other works have approached the VQA problem from this angle Yi et al. (2018). 
Because the agent's abstracted world representation has a database-like structure, our work falls into the literature on ML for question answering on structured data, for example Pasupat and Liang (2015). Our structured Transformer baseline is inspired by the literature on neural database representation, for example Wang et al. (2019); Yin et al. (2020), and references therein. Our LM baseline is inspired by the many works that flatten or otherwise textify databases, and use pretrained language models as bases for neural query executors, e.g. Thorne et al. (2020, 2021); Liu et al. (2021). There are other fundamentally different approaches than these for neural query execution, for example Ren et al. (2020); our hope is that our data source is useful for exploring these. Tuan et al. (2022) introduce a Transformer to generate responses to questions by reasoning over differentiable knowledge graphs in both task-oriented and domain-specific chit-chat dialogues. Our structured neural memory baseline follows works such as Locatello et al. (2020); Santoro et al. (2018). In this work, the relational "objects" do not need to be discovered by the learning algorithm, and their properties are explicitly given to the model to use in featurizing the objects. \begin{table} \begin{tabular}{l l l} \hline \hline Query Class & Clause types & Example \\ \hline Property & name & what are the properties of the objects that have the name alice? \\ & tag & what are the names of the objects that has the property brown? \\ & absolute cardinal & what are the locations of the objects where the x coordinate is less than 4? \\ \hline Temporal & cardinal & what is the name of the object that increased x the most? \\ & relative & what is the name of the object that moved to my left the most? \\ & _farthest moved object_ & what is the name of the object that moved the farthest? \\ & _location at time_ & what was the location of bob at the beginning? \\ & _action_ & what did you do? \\ & _object tracking_ & where would the ball be if i moved to (4,7,2)? \\ \hline Geometric & absolute distance & what is the count of the objects where the distance to (2, 6, 5) is greater than 3? \\ & direction & what are the names of the objects to my right? \\ & _closest object_ & what is the name of the object that is closest to the cow? \\ & _max direction_ & what is the name of the object that is the most to my right? \\ & _distance between_ & how far is the horse from you? \\ & _distance from position_ & what is the location 3 steps to your right? \\ \hline \hline \end{tabular} \end{table} Table 1: Query clause types. We categorize the queries we can ask the agent into three separate classes. Within each class, there are several clause types. _Italicized_ clauses cannot be combined with others. Our work is most related to the PIGLeT environment
The main difference is that our generator is grounded in a 3D gridworld scene, and there is both a user and an interactive agent. We build our data generator on top of the Droidlet agent (Pratik et al., 2021), in part using the grammar in (Srinet et al., 2020) to generate the queries and using the Droidlet agent memory to execute them. This work points the way towards using neural networks to execute the functions of the Droidlet agent memory (and hopefully can be used as a resource for training other agent-memory models).

## Experiments

Since the agent's memory or state can take different forms (sequence and structured), we compare two separate models for answering queries about the context of the world. We consider four different datasets for our experiments, as covered in Table 1: Property queries, Temporal queries, Geometric queries, and All queries, where _All queries_ combines the three previous categories (each query type has roughly the same likelihood of occurring - we provide the configuration files in the code). Each of these except for Property queries (which do not require temporal information) is generated using two time snapshots with 50 world steps, which gives enough steps for actions to occur. Property queries use one snapshot and zero world steps. For all queries, we place five NPCs in the world, one of which is a "player" that might have given a command. For all query types, we choose the world to be 15x15x15.

### Models

We generate data as described in the Environment, Queries, and Data section, and analyze the performance of some baseline models trained on this data.

Text Sequence Context. Since the text sequence form of the context is English text, we use a language model to read the context (\(C_{t}\)) and query (\(Q_{t}\)), and predict the correct answer tokens sequentially (if there are multiple outputs, they are ordered alphabetically). We use the pretrained GPT-2 small model (Radford et al., 2019) from the HuggingFace library (Wolf et al., 2019) (licensed under the Apache License 2.0) to predict all relevant tokens sequentially: \[\mathbf{\hat{W}}=\text{GPT2}([C_{t},Q_{t}]), \tag{1}\] where \(\mathbf{\hat{W}}\in\mathbb{R}^{L\times V}\) is a soft-max normalized matrix of sequence length \(L\) and vocabulary size \(V\), and \([\,]\) is the concatenation operation. This model is fine-tuned using a sum of the cross entropy between each token prediction \(\mathbf{\hat{W}_{i}}\) and the ground truth token \(\mathbf{W_{i}}\): \[\mathcal{L}_{text}=-\sum_{i\in S}\sum_{j\in V}\mathbf{W}_{ij}\log\mathbf{\hat{W}}_{ij}. \tag{2}\]
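As an illustration of this Sequence Context baseline, the sketch below fine-tunes GPT-2 on a flattened context-query-answer triple. It is a minimal sketch assuming the HuggingFace transformers API; the example strings, the space separator, and the choice to restrict the loss to the answer tokens are our own illustrative assumptions, not the authors' exact recipe.

```python
# Minimal sketch of the Sequence Context + GPT-2 baseline (eqs. (1)-(2)); illustrative only.
# Assumes the HuggingFace transformers API; strings and masking choices are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def answer_loss(context_text, query_text, answer_text):
    # Concatenate context and query into one text sequence, as in eq. (1).
    prompt_ids = tokenizer(context_text + " " + query_text + " ",
                           return_tensors="pt").input_ids
    answer_ids = tokenizer(answer_text, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    # Cross-entropy over the answer tokens only (eq. (2)); label -100 masks the prompt.
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100
    return model(input_ids=input_ids, labels=labels).loss

loss = answer_loss("bob has the tag cow. bob is at (3, 1, 2).",
                   "what are the properties of the objects that have the name bob?",
                   "cow")
loss.backward()  # an optimizer step over model.parameters() would follow
```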
Structured Context. While the Text Sequence Context is useful in that it allows easy application of standard pre-trained models, it may be that for certain tasks other representations are more appropriate. We also show results with simple models that are designed around the relational structure of the context. Given a set of \(\rho\) reference objects and \(\tau\) triples, we first featurize the nodes using a learned convolutional layer given the word, float, and hash values as outlined in the Environment, Queries, and Data section. The output of the featurizer layer gives reference object embeddings \(C_{R}\in\mathbb{R}^{\rho\times d}\), and triple embeddings \(C_{T}\in\mathbb{R}^{\tau\times d}\). Similarly, text queries \(Q_{t}\in\mathbb{R}^{r\times d}\) are created using a learned lookup table from the query tokens. We use a Transformer (Vaswani et al., 2017) encoder to process the context and query. The output of the encoder is then used to predict both the text answer and the relevant context memory IDs. We use a Transformer decoder for the text prediction, and a simple linear layer to predict the memory values: \[(C_{T}^{\prime},C_{R}^{\prime},Q_{t}^{\prime})=\text{Encoder}([C_{T},C_{R},Q_{t}]) \tag{3}\] \[\hat{\mathbf{m}}=\text{MemidDecoder}([C_{R}^{\prime},C_{T}^{\prime}]) \tag{4}\] \[\hat{\mathbf{w}}=\text{TextDecoder}([C_{T}^{\prime},C_{R}^{\prime},Q_{t}^{\prime}]), \tag{5}\] where \(\hat{\mathbf{m}}\in\mathbb{R}^{\rho}\) is the vector of predicted relevant memids, and \(\hat{\mathbf{w}}\in\mathbb{R}^{V}\) represents predicted relevant text tokens. This model is trained using a cross-entropy text token loss (eq 2) and a cross-entropy memid loss against the relevant memids \(\mathbf{m}\in\mathbb{R}^{\rho}\): \[\mathcal{L}_{memid}=-\sum_{i=1}^{\rho}m_{i}\log\left(\hat{m}_{i}\right). \tag{6}\] The total loss is a weighted sum of the two losses: \(\mathcal{L}_{text}\) + \(\lambda\cdot\mathcal{L}_{memid}\), where \(\lambda=0.5\). In the Structured Context, the entities are represented as a set, with defined relationships between them. Therefore, encoding temporal information is not straightforward. To do so, we add a special "time" embedding to each Structured Context node. That is, for timestep 0, we add the \(time\)=0 embedding, and for timestep 1, we add the \(time\)=1 embedding. For the Structured Context, the model will return the relevant memory IDs, a text sequence answer, or both. The text output prediction is shown in Fig. 2.

Figure 3: (left): Exact match error for the four different generated datasets. Sequence Context + GPT-2 outperforms the Structured + Transformer method in all datasets. (middle, right): Loss curves for the All queries dataset. We show the mean loss with min/max error bars over all hyperparameters for the first 1,000 epochs. The pre-trained GPT-2 model learns much faster than the from-scratch relational model.

Relational Embedding. One important form of structure in data is relations, where entities are connected to other entities. In our data, for example, triple nodes are connected to object nodes by R_id. Since the vanilla Transformer without positional embeddings treats input tokens as a set, it lacks a mechanism for taking account of this relational information. Thus, we propose a novel way of encoding relational information directly into the self-attention mechanism of the Structured Context Transformer model. Specifically, we add an extra term to the softmax attention: \[a_{ij}=\text{Softmax}(q_{i}^{T}k_{j}+q_{i}^{T}r_{ij}) \tag{7}\] Here \(r_{ij}\) is a _relation embedding_ vector corresponding to the relation type between tokens \(i\) and \(j\). Relation embeddings can be used to encode various types of relations. We note that the commonly used relative position embeddings [21, 20] are a special case of relation embeddings where \(r_{ij}=e_{i-j}\); more sophisticated relation embeddings have appeared in [1].
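To make eq. (7) concrete, the following is a from-scratch, single-head sketch of self-attention with additive relation embeddings. It is an illustrative toy, not the authors' implementation; the relation-type matrix, the dimensions, and the omission of multi-head and masking details are our assumptions.

```python
# Toy single-head self-attention with relation embeddings, following eq. (7); illustrative only.
import torch
import torch.nn as nn

class RelationalSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_relation_types: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One learnable vector r per relation type (e.g., none / R_id link / ...).
        self.rel = nn.Embedding(num_relation_types, d_model)

    def forward(self, x, rel_type):
        # x: (n, d_model) token features; rel_type: (n, n) integer relation type between i and j.
        q, k, v = self.q(x), self.k(x), self.v(x)
        r = self.rel(rel_type)                               # (n, n, d_model): r_ij
        scores = q @ k.T                                     # q_i^T k_j
        scores = scores + torch.einsum("id,ijd->ij", q, r)   # + q_i^T r_ij, as in eq. (7)
        a = torch.softmax(scores, dim=-1)
        return a @ v

attn = RelationalSelfAttention(d_model=64, num_relation_types=4)
tokens = torch.randn(10, 64)
relations = torch.randint(0, 4, (10, 10))
out = attn(tokens, relations)  # (10, 64)
```

Setting \(r_{ij}=e_{i-j}\), an embedding of the index offset, recovers the relative position embeddings mentioned above.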
### Model and Training Details

All our models are trained using Adam [13] for 5,000 epochs, where each epoch is over a chunk of 10,000 training samples. Since we are generating the data, we vary the number of training samples from 1k to 1M, and use a validation set of 10k samples. We use a linear warmup of 10,000 steps and cosine decay [11]. For the GPT-2 model, we consider learning rates {1e-4, 5e-4, 1e-5} using a batch size of 32. For the structured model, we consider learning rates {1e-4, 5e-4, 1e-5}, batch size 32, layers {2, 3}, and embedding dimensions {256, 512}. Hyperparameters were chosen with arbitrary initial values and increased until validation performance decreased or resources were depleted. The best performing structured model has 74.58M parameters, whereas GPT-2 small has 325.89M. All words are encoded with the GPT-2 tokenizer.

## Results

Fig. 3 (left) shows the results for the four different dataset versions we consider. We report the exact match error for all data splits. That is, \[\text{Exact Match Error}=\frac{1}{n}\sum_{i=1}^{n}I\left(\mathbf{y}^{i}\neq\hat{\mathbf{y}}^{i}\right), \tag{8}\] where \(\mathbf{y}^{i}\in\mathbb{R}^{N}\) are the \(N\) ground truth tokens for sample \(i\), and \(\hat{\mathbf{y}}^{i}\) are the top \(N\) predicted tokens. Fig. 3 (middle, right) shows the loss curves for the All queries dataset. For property and geometric queries, the Sequence Context + GPT-2 method performs the best. Since GPT-2 is pre-trained on millions of samples, it can easily learn basic property queries such as "what is the color of bob?". GPT-2 has a near zero test set loss for the property queries. Geometric queries, while difficult for both models, are answered more effectively by GPT-2. Table 2 (top) shows GPT-2 model variation studies for the All queries dataset. Notably, we test the performance of a Sequence Context + randomly initialized GPT-2 model (i.e. one with the same architecture, but not pretrained). We see that its performance is worse than that of the Structured Context + Transformer. This indicates that the primary reason for the GPT-2 model achieving the best results is the pretraining. Table 2 (bottom) shows a Structured Context + Transformer method variation for the All queries dataset. We consider an alternative to the relational embeddings proposed in the Models section, which is adding random hash embeddings to the nodes, or tokens, which are connected to each other in the Structured Context representation. This method performs slightly worse than relational embeddings, hinting that a more explicit representation of the context structure is important. Fig. 5 shows a 2-snapshot scene from our test set. Finally, Fig. 4 shows the result of our baseline models when using varying amounts of training samples. Since our data source can generate arbitrary amounts of data, it's important to understand how much data is needed to solve certain problems. Notably, most of the datasets require at least 100,000 samples to achieve a reasonable validation loss. In addition, the Sequence + GPT-2 model significantly outperforms the Structured + Transformer model on the datasets with small amounts of training samples.

Figure 4: Validation set exact match error using varying amounts of training data. The pre-trained GPT-2 model is particularly useful when the number of training samples is small, and when reasoning over property queries. On the other hand, non-pre-trained GPT-2 style models do not learn, see Table 2. Both models require more than 100K training samples to achieve the best results.

## Conclusion

In this work, we introduced a framework for generating world state contexts with agents, queries, and their answers. This provides researchers with a flexible sandbox for training and testing reasoning in embodied agents. Notably, our sequential data format lets us easily evaluate the ability of large language models such as GPT to understand a physical world.
We show baseline results with two representations of the world state context: a pure text sequence representation that can be used by any off-the-shelf language model, as well as a structured representation of the entities in the world. We demonstrate the ability of the models to answer queries in several query domains, such as temporal or geometric. We emphasize that our goal in this work is not to create a fixed static dataset, but a resource for generating data. Many of our experimental choices (w.r.t. environment and query difficulty) were aimed at giving the baseline models some traction while not allowing them to solve the tasks completely. In particular, we find it likely that large pre-trained language models could do better than the GPT-2 base models we used with the described data generation parameters. On the other hand, the difficulty of the problem can be scaled trivially: by increasing the number of possible reference objects and properties, by increasing the number of reference objects in the world and the number of instantiated properties, by making the world bigger, and by increasing the time-length of episodes and the number of snapshots recorded. If nothing else, these changes would quickly lead to the context being too large to fit in the memory of a standard LM, necessitating a large-memory LM (Lample et al., 2019; Rae et al., 2019; Beltagy, Peters, and Cohan, 2020) or leading to other approaches, e.g. (Ji et al., 2017). Other, more subtle difficulties can be introduced in straightforward ways by reducing full observability, or otherwise asking queries that cannot be answered with the information in the context, requiring agents to use some amount of meta-cognition and/or environment actions to find the answers, as in (Das et al., 2018). In future work, we plan to introduce more query types, including arithmetic and hypotheticals. We hope that researchers will use the data generator as a flexible resource for augmenting their own agent training or LM pre-training, or for exploring new models for database reasoning.

\begin{table} \begin{tabular}{l l c c c} \hline \hline Model & Variation & Train Loss & Test Loss & Test Err \\ \hline \multirow{3}{*}{GPT-2} & sm + rand init & 0.361 & 1.912 & 47.1\% \\ & sm + pretrain & 0.015 & 0.710 & 14.9\% \\ & med + pretrain & 0.012 & 0.635 & 13.8\% \\ \hline \multirow{2}{*}{Transformer} & relational emb & 0.230 & 0.921 & 19.7\% \\ & random hash & 0.630 & 1.393 & 30.0\% \\ \hline \hline \end{tabular} \end{table} Table 2: Variations of models (sm=small, med=medium). Pre-training is a key component of the GPT-2 model success, even when given large numbers of training examples. Relational embeddings result in a slightly lower test loss than random-hash in the graph-structured models.

Figure 5: Sample scene with two snapshots. The chicken named "honey" moves from the bottom right toward the center, and some other objects move slightly. Both models correctly answer the query "which object moved the farthest?".
2309.09959
Optimization of probe separation distance and cooling time in multi-probe cryoablation technique by arranging probes in triangular and square pattern-A computational approach
Cryoablation is a minimally invasive and efficient therapy option for liver cancer. Liquid nitrogen was used to kill the unwanted cells via freezing. One of the challenges of cryosurgery is to destroy the complete tumor without damaging the surrounding healthy cells when the tumor is large. To overcome this challenge, multi-cryoprobes were arranged in a polygonal pattern to create a uniform cooling and optimum ablation zone in the tissue. Single, three, and four cryoprobes were placed in the center, triangle, and square patterns to analyze the temperature profile and ablation zone. The results showed that tissue will freeze quickly when cryoprobes are placed in a square pattern. After the treatment of 600 seconds, $99\%$, $96\%$, and $31\%$ of the tumor were killed using four, three, and single cryoprobes, respectively. One of the difficulties in the multi-probe technique is choosing the probe separation distance and cooling time. The volume of the ablation zone, the thermal damage to healthy cells, and the volume of tumor cells killed during the treatment for different probe separation distances of 10 mm, 15 mm, and 20 mm are analyzed. Compared to other settings, a multi-probe technique destroys the entire tumor with the least harm to healthy cells when probes are arranged in a square pattern with a 15 mm space between them.
Gangadhara Boregowda, Panchatcharam Mariappan
2023-09-18T17:34:26Z
http://arxiv.org/abs/2309.09959v1
Optimization of probe separation distance and cooling time in multi-probe cryoablation technique by arranging probes in triangular and square pattern-A computational approach

###### Abstract

Cryoablation is a minimally invasive and efficient therapy option for liver cancer. Liquid nitrogen was used to kill the unwanted cells via freezing. One of the challenges of cryosurgery is to destroy the complete tumor without damaging the surrounding healthy cells when the tumor is large. To overcome this challenge, multi-cryoprobes were arranged in a polygonal pattern to create a uniform cooling and optimum ablation zone in the tissue. Single, three, and four cryoprobes were placed in the center, triangle, and square patterns to analyze the temperature profile and ablation zone. The results showed that tissue will freeze quickly when cryoprobes are placed in a square pattern. After the treatment of 600 seconds, 99%, 96%, and 31% of the tumor were killed using four, three, and single cryoprobes, respectively. One of the difficulties in the multi-probe technique is choosing the probe separation distance and cooling time. The volume of the ablation zone, the thermal damage to healthy cells, and the volume of tumor cells killed during the treatment for different probe separation distances of 10 mm, 15 mm, and 20 mm are analyzed. Compared to other settings, a multi-probe technique destroys the entire tumor with the least harm to healthy cells when probes are arranged in a square pattern with a 15 mm space between them.

Keywords: Cryoablation; FEM; Ablation zone; Heat conduction

## 1 Introduction

The World Health Organization (WHO) estimates that cancer was one of the major causes of death globally in 2019 [3]. This is because of changes in human lifestyle and the environment. In 2020, there were approximately 19.3 million new instances of cancer and 10 million cancer-related deaths, according to the International Agency for Research on Cancer (IARC) [21]. Patients with hepatocellular carcinoma (HCC) benefit most from surgical and liver transplantation procedures, which also lower their chance of acquiring new HCC [4]. Oncologists advise minimally invasive procedures such as local ablative techniques and liver resection due to patients' habits, the location of the tumor, and a lack of adequate donors. For the treatment of cancer, a variety of ablative methods have been employed, including cryoablation, radiofrequency ablation, and microwave ablation. For some tumor ablations, cryosurgery is a minimally invasive and efficient therapy option. It is predicated on the idea that unhealthy tissue can be destroyed by subjecting tumor cells to extremely cold temperatures created by cryogenic agents [6]. The goal is to totally destroy the tumor cells while causing the least amount of cryoinjury to the healthy tissue around them. This sort of treatment, which can be utilized to eliminate skin malignancies and liver, lung, and prostate tumors, is typically carried out concurrently with other cancer-treating techniques like radiotherapy or chemotherapy. Due to its minimal invasiveness, which results in less pain and bleeding, lower treatment costs, and shorter hospital stays, cryosurgery can be a suitable substitute for open surgery [7]. It requires only a small cut for the cryoprobe to be inserted. The cryoprobe is often cooled by convection of liquid nitrogen or argon, which circulates inside the probe. The coolant liquid is selected based on its capacity to attain the lowest achievable temperature [17].
Liquid nitrogen and argon have boiling points of \(-196^{\circ}\)C and \(-186^{\circ}\)C, respectively. Conduction transfers heat from the cancerous tissue to the cryoprobes during freezing. Numerous researchers have looked into ways to enhance the cryosurgery process in order to reach the goal of completely destroying tumor cells with the least amount of harm to neighboring healthy cells [7, 14, 15, 17]. Insufficient and non-uniform freezing of tumor cells may lead to incomplete destruction of the tumor. Many challenges have arisen in this field, such as the functionality of the cryoprobe, the effects of vessel size and network on the ablation zone, irregular tumor shapes, and creating an ice ball for a large tumor [15, 22]. In [14], the authors numerically investigated the effect of vessel size and network on the ablation zone. The work [15] demonstrated the shape and formation of the ice ball for large tumors, and it showed that a single cryoprobe is inefficient in creating a large ice ball for a large tumor. An optimization method for planning multi-probe cryosurgery was demonstrated in [10]. In this work, multiple cryoprobes are used to create a large ablation zone, and we numerically investigate the shape and size of the ablation zone and the temperature profile in the tissue for different numbers of cryoprobes. Different patterns for cryoprobe locations are proposed for large tumors to optimize the cryoinjury. Tumor nodules have different shapes and sizes across cancer types such as breast, lung, and liver. One common shape, which covers the majority of cases, is the spherical tumor [19, 20, 23]. As a result, we assumed the tumor to be a sphere with a 25 mm radius. The destruction index \(\theta_{d}\) was used to quantify the cryoinjury or ablation zone [14]. To achieve uniform cooling in the tissue and a spherical ablation zone, the cryoprobes were arranged in a polygonal pattern. Selecting the probe separation distance and cooling time, which depend on tumor size, is one of the difficulties of the multi-probe technique. For various probe separation distances and cooling times, we evaluate the volume of the ablation zone, thermal damage to healthy cells, and the volume of tumor cells killed during the treatment. Three different arrangements are used to simulate the results: (1) cryoprobes are arranged in a triangle and square pattern with a d=10 mm distance between any two probes, (2) cryoprobes are arranged in a triangle and square pattern with a d=15 mm distance between any two probes, and (3) cryoprobes are arranged in a triangle and square pattern with a d=20 mm distance between any two probes. The bio-heat equation with temperature-dependent thermal parameters was used to analyze the temperature profile in the tissue. The bio-heat equation was numerically solved using the Finite Element Method (FEM). The model was discretized and then solved using the open-source software Gmsh and FEniCS [1], respectively. The 3D domain and the bio-heat equation were discretized using tetrahedral elements and Lagrange basis functions, respectively.

## 2 Materials and methods

This section explains the model's geometry, governing equations, boundary conditions, and FEM implementation. The degree of cryoinjury is measured using the destruction index, which ranges from 0 to 1. In order to analyze the temperature profile and ablation zone, the cryoprobes are arranged in center, triangle, and square configurations.
The effect of probe separation distances and cooling times on the ablation zone is analyzed. The problem is discretized and solved using Gmsh and FEniCS, respectively.

### Geometry of the model

The computational domain, which was considered to be a sphere of liver tissue with a radius of 50 mm, is depicted schematically in Figure 1. In the middle of the computational domain, a spherical tumor with a radius of 25 mm was placed. The cryoprobe is represented by a vertical cylinder (6 mm in diameter) with the tip inserted 30 mm into the tumor. The cryoprobe was divided into two parts: one is an active part, and the other is a non-active part. The length of the active part is 20 mm, and it is filled with liquid nitrogen. The non-active part was thermally insulated. The dimensions of the probe are based on the literature [6, 14]. The positions of the cryoprobes in the tissue are explained in subsection 2.3.

Figure 1: Cross-sectional view of the computational domain.

### Governing equation

In the literature [14, 15], the domains of the liver, tumor, and blood were taken into consideration to examine the impact of artery size and vascular network complexity on cryosurgery. Since we are investigating the size of the ablation zone for different numbers of cryoprobes, we only take into account the liver and tumor in the computational domain. Moreover, cryoablation has little impact on nearby large blood vessels [16]. The bio-heat equation was used to analyse the temperature distribution in the liver and tumor [18]. \[\rho_{t}c_{t}\frac{\partial T}{\partial t}=\nabla\cdot(k_{t}\nabla T)+\rho_{b}\omega_{b}c_{b}(T_{b}-T)+Q_{m}\ \ \mbox{in}\ \ \Omega \tag{1}\] where \(\Omega\) is the computational domain, \(\rho_{t}\) is the tissue density (kg/m\({}^{3}\)), \(c_{t}\) is the specific heat capacity of the tissue (J/kg\(\cdot{}^{\circ}\)C), \(k_{t}\) is the tissue thermal conductivity (W/m\(\cdot{}^{\circ}\)C), \(T\) is the temperature (\({}^{\circ}\)C), \(\omega_{b}\) is the blood perfusion rate (kg/m\({}^{3}\cdot\)s), \(c_{b}\) is the specific heat capacity of blood (J/kg\(\cdot{}^{\circ}\)C), \(T_{b}\) is the blood temperature (\({}^{\circ}\)C), and \(Q_{m}\) (W/m\({}^{3}\)) is the metabolic heat rate of the tissue. The cells will go through a phase change at the freezing point during freezing, losing the latent heat of freezing in the process. Clinical research has shown that the freezing occurs between \(-1\)\({}^{\circ}\)C and \(-8\)\({}^{\circ}\)C. The analysis will be broken down into three different temperature ranges [6, 17]: \(-1\)\({}^{\circ}C<T<37\)\({}^{\circ}C\) when cells are unfrozen, \(-8\)\({}^{\circ}C<T<-1\)\({}^{\circ}C\) when cells are freezing, and \(-196\)\({}^{\circ}C<T<-8\)\({}^{\circ}C\) when cells are frozen.
The tissue's thermophysical characteristics in the frozen, unfrozen, and freezing stages are as follows [7]: \[k_{t} = \begin{cases}k_{u}&-1\ ^{\circ}C<T\\ \frac{k_{u}+k_{f}}{2}&-8\ ^{\circ}C\leq T\leq-1\ ^{\circ}C\\ k_{f}&-196\ ^{\circ}C<T<-8\ ^{\circ}C\end{cases} \tag{2}\] \[c_{t} = \begin{cases}c_{u}&-1\ ^{\circ}C<T\\ \frac{c_{u}+c_{f}}{2}+\frac{Q_{l}}{\rho_{t}(T_{u}-T_{l})}&-8\ ^{\circ}C\leq T\leq-1\ ^{\circ}C\\ c_{f}&-196\ ^{\circ}C<T<-8\ ^{\circ}C\end{cases} \tag{3}\] \[\rho_{t} = \begin{cases}\rho_{u}&-1\ ^{\circ}C<T\\ \frac{\rho_{u}+\rho_{f}}{2}&-8\ ^{\circ}C\leq T\leq-1\ ^{\circ}C\\ \rho_{f}&-196\ ^{\circ}C<T<-8\ ^{\circ}C\end{cases} \tag{4}\] where the subscripts \(u\) and \(f\) stand for unfrozen and frozen, respectively, and \(Q_{l}\) is the latent heat of fusion. The liver and tumor are modeled as homogeneous media, and the model parameters are listed in Table 1. Different perfusion and metabolic rates were considered for the liver and tumor; the metabolic and perfusion rates are higher in the tumor than in the liver [7]. A short code sketch of this piecewise temperature dependence is given below, just before subsection 2.4.

#### 2.2.1 Boundary conditions

The following boundary conditions were used to simulate the cryosurgery process:

* A thermally insulated boundary condition was applied to the surface of the non-active part of the cryoprobe. \[\vec{n}\cdot\nabla T=0\] (5)
* The temperature of the surface of the cryoprobe's active part was assumed to be the boiling temperature of liquid nitrogen (\(-196^{\circ}C\)).
* The temperature at the external boundary of the liver is assumed to be the body temperature (\(37^{\circ}C\)).
* The initial and blood temperatures are taken as the body temperature.

### Positions of the cryoprobes in the tissue

The efficiency of the cryoablation is measured by the ablation zone created during the treatment. For large tumors, a single probe is inefficient in killing all the unhealthy cells in a short time. Therefore, in this research work, we simulated and analyzed the results using different numbers of probes inside the tissue. Three settings are used for the arrangement of cryoprobes in the tissue: (1) a single cryoprobe at the center of the tumor; (2) three cryoprobes located in a triangular pattern with vertices \((-9,5)\), \((9,5)\), and \((0,-10)\) in the x-y plane; and (3) four cryoprobes located in a square pattern with vertices \((-7,7)\), \((7,7)\), \((-7,-7)\), and \((7,-7)\) in the x-y plane. To provide a uniform temperature profile in the tissue and a large ablation zone, the probes are positioned in almost equilateral triangle and square patterns, as shown in Figure 2. According to the experimental results [9], the tissue's temperature 10 mm away from the probe's center after 600 sec was \(-50^{\circ}C\). Clinical research has shown that cells will die at \(-50^{\circ}C\), regardless of how long they are frozen [8]. Therefore, cryoprobes are placed approximately 10 mm away from the center of the tumor domain to produce an optimum ablation zone.
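To make the piecewise temperature dependence of eqs. (2)-(4) concrete, here is a minimal Python sketch of the property lookup using the values from Table 1; the function and constant names are our own illustrative choices, not the authors' code.

```python
# Piecewise thermophysical properties of eqs. (2)-(4); illustrative sketch only.
# Parameter values follow Table 1; variable names are our own.
K_U, K_F = 0.5, 2.0            # thermal conductivity, unfrozen/frozen (W/m.degC)
C_U, C_F = 3600.0, 1800.0      # specific heat, unfrozen/frozen (J/kg.degC)
RHO_U, RHO_F = 1000.0, 998.0   # density, unfrozen/frozen (kg/m^3)
Q_L = 250e6                    # latent heat of fusion (J/m^3)
T_UPPER, T_LOWER = -1.0, -8.0  # phase-transition limits (degC)

def tissue_properties(T):
    """Return (k_t, c_t, rho_t) at temperature T (degC)."""
    if T > T_UPPER:                      # unfrozen
        return K_U, C_U, RHO_U
    if T < T_LOWER:                      # frozen
        return K_F, C_F, RHO_F
    rho = 0.5 * (RHO_U + RHO_F)          # freezing (mushy) zone
    c = 0.5 * (C_U + C_F) + Q_L / (rho * (T_UPPER - T_LOWER))
    return 0.5 * (K_U + K_F), c, rho

print(tissue_properties(20.0))    # unfrozen
print(tissue_properties(-5.0))    # freezing: the latent heat term inflates c_t
print(tissue_properties(-100.0))  # frozen
```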
### Calculation of ablation zone

The complete death of abnormal cells is necessary for the cryoablation of a tumor. Scientists have demonstrated that necrotic mechanisms begin to function at a temperature of \(-5^{\circ}C\), but some undesirable cell survival remains [12]. Two criteria can be used to guarantee cryo-damage and cell death: (1) the cells will die if the temperature falls below \(-50^{\circ}C\) (the cryogenic necrosis temperature \(T_{\rm nc}\)) [8, 17]; (2) the cells will die if the temperature stays at \(-20^{\circ}C\) (the cryogenic damage temperature \(T_{\rm dc}\)) for 60 sec (\(t_{\rm dc}\)) [5]. The first mechanism of destruction takes place close to the cryoprobe, and the second frequently happens in additional tissue that is exposed to milder freezing temperatures. An index between 0 and 1 was assigned to each position to represent the extent to which the above-mentioned conditions have been met; zero, on the other hand, denotes entirely unharmed tissue. Obviously, an assigned value closer to 1 corresponds to a higher percentage of tissue damage. The destruction index \(\theta_{d}\) was calculated using the following equations [14, 15]. When the temperature dips below the cryogenic damage temperature \(T_{\mathrm{dc}}\) for an extended period of time \(t_{\mathrm{dc}}\): \[\alpha_{1}=\frac{1}{t_{\mathrm{dc}}}\int_{0}^{t}(T<T_{\mathrm{dc}})dt \tag{6}\] When the temperature goes below the cryogenic necrosis temperature \(T_{\mathrm{nc}}\): \[\alpha_{2}=\int_{0}^{t}(T<T_{\mathrm{nc}})dt \tag{7}\] Combining the two conditions, the overall destruction index is equal to: \[\begin{cases}\theta_{d}=1&\text{if}\ \ \alpha_{2}>0\\ \theta_{d}=\min(1,\alpha_{1})&\text{if}\ \ \alpha_{2}=0\end{cases} \tag{8}\] A short code sketch evaluating this index from a nodal temperature history is given at the end of the numerical validation subsection below.

\begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Units & Values \\ \hline Conductivity of the tissue, \(k_{t}\) & W/m\(\cdot{}^{\circ}\)C & \(k_{u}=0.5\), \(k_{f}=2\) \\ Specific heat of the tissue, \(c_{t}\) & J/kg\(\cdot{}^{\circ}\)C & \(c_{u}=3600\), \(c_{f}=1800\) \\ Density of the tissue, \(\rho_{t}\) & kg/m\({}^{3}\) & \(\rho_{u}=1000\), \(\rho_{f}=998\) \\ Conductivity of the blood, \(k_{b}\) & W/m\(\cdot{}^{\circ}\)C & 0.49 \\ Specific heat of the blood, \(c_{b}\) & J/kg\(\cdot{}^{\circ}\)C & 3600 \\ Density of the blood, \(\rho_{b}\) & kg/m\({}^{3}\) & 1050 \\ Blood perfusion rate, \(\omega_{b}\) & ml/(s\(\cdot\)ml) & Liver=0.0005, Tumor=0.002 \\ Metabolic heat generation, \(Q_{m}\) & W/m\({}^{3}\) & Liver=4200, Tumor=42,000 \\ Latent heat of fusion, \(Q_{l}\) & MJ/m\({}^{3}\) & 250 \\ Upper limit of the phase transition, \(T_{u}\) & \({}^{\circ}\)C & -1 \\ Lower limit of the phase transition, \(T_{l}\) & \({}^{\circ}\)C & -8 \\ \hline \hline \end{tabular} \end{table} Table 1: Thermal parameters for the model [6, 7, 13].

### FEM implementation to the model

The bio-heat equation (1) can be expressed in a variational form by multiplying by a test function \(V\in H_{0}^{1}(\Omega)\) and performing integration by parts. The finite element problem is to find \(T(x,t)\in H_{0}^{1}(\Omega)\) for any \(t>0\) such that \[\rho_{t}c_{t}\langle\partial_{t}T,V\rangle+k_{t}a(T,V)+\rho_{b}\omega_{b}c_{b}\langle T,V\rangle=\langle f,V\rangle,\ \ \forall\ V\ \in H_{0}^{1}(\Omega) \tag{9}\] where \(\partial_{t}T=\dfrac{\partial T}{\partial t}\), \(f=\rho_{b}c_{b}\omega_{b}T_{b}+Q_{m}\ \in L^{2}(\Omega)\), \(\langle T,V\rangle=\int_{\Omega}TVd\Omega\), and \[a(T,V)=\int_{\Omega}\nabla T\cdot\nabla Vd\Omega.\] By using the Galerkin approach and basis functions \(\{\phi_{1},\phi_{2},...,\phi_{N}\}\), the unknown variable is expressed as follows: \[T(x,t)=\sum_{i=1}^{N}T_{i}(t)\phi_{i}(x) \tag{10}\] where \(N\) is the number of nodes and the coefficient \(T_{i}(t)\) is the functional value of \(T(x,t)\) at node \(i\).

Figure 2: Cryoprobe locations during the treatment.
The continuous variational form is transformed into the discrete variational form using equation (10): \[\rho_{t}c_{t}\sum_{i=1}^{N}\partial_{t}T_{i}(t)\langle\phi_{i},\phi_{j}\rangle+k_{t}\sum_{i=1}^{N}T_{i}(t)a(\phi_{i},\phi_{j})+\rho_{b}\omega_{b}c_{b}\sum_{i=1}^{N}T_{i}(t)\langle\phi_{i},\phi_{j}\rangle=\langle f(t),\phi_{j}\rangle,\] for \(j=1,2,3,...,N\), or in matrix form \[(\rho_{t}c_{t}B)\partial_{t}T(t)+k_{t}AT(t)+\rho_{b}c_{b}\omega_{b}BT(t)=F(t) \tag{11}\] where \(B=(b_{ij}),\ A=(a_{ij}),\ F=(F_{i}),\ T=(T_{i})\), \[b_{ij}=\langle\phi_{i},\phi_{j}\rangle=\int_{\Omega}\phi_{i}\phi_{j}d\Omega,\] \[a_{ij}=a(\phi_{i},\phi_{j})=\int_{\Omega}\nabla\phi_{i}\cdot\nabla\phi_{j}d\Omega,\] \[F_{i}=\langle f,\phi_{i}\rangle=\int_{\Omega}f\phi_{i}d\Omega.\] Here \(B\) and \(A\) are the mass matrix and stiffness matrix, respectively, which are symmetric and positive definite [11]; FEniCS tools [1] were used to calculate the matrices \(B\) and \(A\). By discretizing the time interval \((0,T)\) into a uniform grid with step size \(\Delta t\) and applying the implicit (backward) Euler scheme to the time derivative term in equation (11), one can find the temperature profile in the tissue at the \(n^{\text{th}}\) step as follows: \[\rho_{t}c_{t}B\left(\frac{T^{n}-T^{n-1}}{\Delta t}\right)+k_{t}AT^{n}+\rho_{b}\omega_{b}c_{b}BT^{n}=F^{n},\] \[\left[(\rho_{t}c_{t}+\Delta t\rho_{b}\omega_{b}c_{b})B+\Delta tk_{t}A\right]T^{n}=\rho_{t}c_{t}BT^{n-1}+\Delta tF^{n}.\] The MUltifrontal Massively Parallel Sparse direct Solver (MUMPS) [2] was used to solve the above system of equations.

### Numerical simulation and convergence analysis

The bio-heat equation is solved using FEM via FEniCS. The temporal term in the bio-heat equation was discretized using the unconditionally stable implicit Euler scheme. The time step was changed from 0.1 to 0.4 with a relative tolerance of 0.0001. The number of elements was varied from 20000 to 120000, and the temperature was obtained at the points \((0,0.01,0)\) and \((0,0.02,0)\). Figure 3 shows that beyond 40000 tetrahedral elements, the solution is independent of the number of elements. The number of elements for the different models is mentioned in Table 2. The domain is discretized using tetrahedral elements via the open-source software Gmsh, as shown in Figure 4. An Intel Core i5 10th-generation CPU with 8 GB RAM was used for simulation.

## 3 Results and Discussion

In this research work, the model of cryosurgery for liver tumors was solved numerically using the FEM with a finite difference scheme for the time derivatives. The effect of the arrangement of the cryoprobes in the tissue on the size of the ablation zone and the temperature profile in the tissue was studied. To verify the simulation method and settings, the numerical results are compared with the experimental results. In this study, we suggested a few patterns for cryoprobe placement in the tissue to create a spherical-shaped ablation zone with homogeneous cooling.

### Numerical validation

Figure 5 illustrates the validation of the numerical simulation results against the experimental results obtained in [9]. To compare the numerical results with the experimental results for a 10-minute treatment, the temperature was measured 10 mm away from the cryoprobe's center. The numerical results were in good agreement with the experimental results, and the maximum difference was observed at the end of the treatment. The root mean square error (RMSE) between the numerical and experimental results is 0.83 \({}^{\circ}\)C. We validated the numerical methods, model parameters, and implementation using the RMSE as a reference.
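For reference, the destruction index of eqs. (6)-(8) can be computed after the simulation from each node's temperature history. The following is a minimal NumPy sketch assuming a uniform time step; the function and variable names are our own illustrative choices, not the authors' code.

```python
# Pointwise destruction index from a temperature history, following eqs. (6)-(8); sketch only.
import numpy as np

T_DC = -20.0    # cryogenic damage temperature T_dc (degC)
T_NC = -50.0    # cryogenic necrosis temperature T_nc (degC)
T_DWELL = 60.0  # required dwell time t_dc below T_dc (s)

def destruction_index(temps, dt):
    """temps: temperature history at one node (degC); dt: uniform time step (s)."""
    temps = np.asarray(temps, dtype=float)
    alpha1 = np.sum(temps < T_DC) * dt / T_DWELL  # eq. (6): fraction of the required dwell time
    alpha2 = np.sum(temps < T_NC) * dt            # eq. (7): time spent below T_nc
    # eq. (8): immediate necrosis anywhere below T_nc; otherwise partial damage capped at 1.
    return 1.0 if alpha2 > 0 else min(1.0, alpha1)

history = np.full(40, -30.0)                # 40 s at -30 degC, never below -50 degC
print(destruction_index(history, dt=1.0))   # min(1, 40/60) ~ 0.67
```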
### Effect of cryoprobes arrangement on temperature profile and ablation zone

The cryoprobe is inserted into the tissue and filled with liquid nitrogen at \(-196^{\circ}C\) to kill the unwanted cells through freezing. Due to conduction, heat transfers from the liver and tumor towards the cryoprobe during the treatment. As a result, cooling will begin close to the cryoprobe and spread throughout the tissue over time. The temperature of the tumor decreases from \(37^{\circ}C\) towards \(-196^{\circ}C\) as time increases, and the necrosis mechanism activates at \(-5^{\circ}C\) [12]. The goal of this treatment is to totally destroy the unwanted cells with minimum damage to the healthy tissue around the tumor. With less pain, bleeding, and hospitalization time, cryosurgery is performed by introducing the tiny cryoprobe into the patient's body through a small incision. The treatment will cause structural and phase changes in both healthy and unhealthy cells. As a result, density, thermal conductivity, and specific heat capacity are considered piecewise temperature-dependent functions. In this work, we assumed the computational domain to consist of only the liver and tumor, since cryoablation has less impact on nearby blood vessels [16]. The computational domain is split into two subdomains because the liver and tumor have different material properties [6].

Figure 5: Comparison of numerical results with the experimental result presented in [9] at 20 mm away from the center of the probe.

One of the challenges in cryoablation is forming a large ice ball for a large tumor without damaging healthy cells. A single cryoprobe is inefficient for creating large ice balls to destroy a large tumor. At the same time, introducing more cryoprobes into the tissue without proper arrangement may kill the healthy cells surrounding the tumor. Since the spherical shape covers most tumor shapes [23], the cryoprobes are placed in the tissue in a polygonal pattern, keeping approximately the same distance between any two cryoprobes, to create an ablation zone of spherical shape. The results are simulated for three different configurations: (1) a single cryoprobe positioned at the tumor's center; (2) three cryoprobes arranged in a triangular pattern, each about 10 mm from the tumor's center; and (3) four cryoprobes arranged in a square pattern, each about 10 mm from the tumor's center.

Figure 6: The temperature profile (\({}^{\circ}\)C) in the tissue using cryoprobes in different patterns.

The positions of the cryoprobes in the triangular pattern are \((-9,5)\), \((9,5)\) and \((0,-10)\) in the x-y plane. Similarly, in the square pattern, the positions of the cryoprobes are \((-7,7)\), \((7,7)\), \((-7,-7)\), and \((7,-7)\). The length of the square and triangle sides can be adjusted according to the size of the tumor. Figure 6 illustrates the temperature distribution in the tissue at 200 sec, 400 sec, and 600 sec using single, three, and four cryoprobes in the center, triangle pattern, and square pattern, respectively. As time passes, due to conduction, the temperature in the tumor begins to drop from \(37^{\circ}C\) to a lower temperature. The tissue temperature near the cryoprobe is \(-196^{\circ}C\), and it increases to the body temperature of \(37^{\circ}C\) moving away from the cryoprobe. The formation of ice balls in the tissue increases as time increases.
The size of the ice ball is a little larger when cryoprobes are placed in a square pattern rather than in a triangle pattern. The size of the ice ball is very small when a single cryoprobe is used in cryosurgery. Therefore, using a single cryoprobe for a large tumor may not kill all the unhealthy cells in the tissue. The tissue temperature decreases fast in the first 100 sec, then it remains nearly constant for the rest of the time, as shown in Figure 7 and Figure 8. The temperature was monitored at two positions, \((0,0.01,0)\) and \((0,0.015,0)\), in the tissue during the treatment of 600 sec. When one, three, or four cryoprobes are employed in the procedure, the tissue temperature at \((0,0.01,0)\) is recorded as \(-57^{\circ}C\), \(-122.2^{\circ}C\), and \(-153.1^{\circ}C\), respectively. Similarly, the temperature is recorded as \(-23.7^{\circ}C\), \(-80.72^{\circ}C\), and \(-100^{\circ}C\) at \((0,0.015,0)\). The tissue will freeze more quickly during the treatment when cryoprobes are placed in a square pattern compared to a triangular pattern. Using a single cryoprobe in the treatment will take more time to freeze the tissue compared to the other two cases. To guarantee cell death in the tissue, two strategies were used [5, 17]: one involves cooling the cell to \(-50^{\circ}C\), while the other involves keeping it at \(-20^{\circ}C\) for 60 seconds. Based on the above two techniques, the destruction index was defined between 0 and 1 to find the ablation zone [14, 15]. If the destruction index is close to 1, then the percentage of cell injury is higher. For dead and undamaged cells, the destruction index is defined as 1 and 0, respectively.

Figure 7: The temperature distribution at the position \((0,0.01,0)\) during the treatment.

Figure 8: The temperature distribution at the position \((0,0.015,0)\) during the treatment.

Figure 9 illustrates the destruction index for the different arrangements of the cryoprobes. The destruction index is 1 near the cryoprobes, and it decreases to 0 moving away from the cryoprobes, which means cells are dead only around the cryoprobes. One of the challenges in ablation treatments is to kill all the tumor cells in a short time. When the diameter of the tumor volume is more than 4 cm, using a single cryoprobe in the treatment fails to kill all the tumor cells in a short time. The shape of the ablation zone is approximately spherical when the cryoprobes are located in the square and triangular patterns, and the ablation zone is prolate in shape when a single cryoprobe is placed at the center, as shown in Fig. 10. After the treatment of 600 sec, 99%, 96%, and 31% of the tumor were killed using four, three, and single cryoprobes, respectively. The volume of the ablation zone increases as time increases, as shown in Figure 11. The maximum number of tumor cells is killed when four cryoprobes are placed in a square pattern.

Figure 9: Destruction index for the treatment of 600 sec.

Figure 10: The ablation zone (red color) using a single cryoprobe at the center of the tumor, three cryoprobes located in a triangle pattern, and four cryoprobes located in a square pattern, for the treatment of 600 sec.

### Effect of separation distance and cooling time on the ablation zone for multi-probe cryoablation

One of the difficulties in the multi-probe technique is choosing the probe separation distance and cooling time, which depend on tumor size.
In this work, we analyze the volume of the ablation zone, the thermal damage to healthy cells, and the volume of tumor cells killed during the treatment for different probe separation distances and cooling times. Three different arrangements are used to simulate the results: (1) cryoprobes are arranged in a triangle and square pattern with a d=10 mm distance between any two probes, (2) cryoprobes are arranged in a triangle and square pattern with a d=15 mm distance between any two probes, and (3) cryoprobes are arranged in a triangle and square pattern with a d=20 mm distance between any two probes. The ablation volume and thermal damage increase with the separation distance and cooling time. The multi-probe technique kills 79%, 95%, and 98% of the tumor for a 10-minute cooling period when cryoprobes are spaced 10 mm, 15 mm, and 20 mm from one another in a triangle arrangement. Similarly, when cryoprobes are separated by 10 mm, 15 mm, and 20 mm in a square pattern, the multi-probe technique kills 94%, 100%, and 100% of the tumor for a 10-minute cooling time, respectively. The multi-probe technique fails to kill the complete tumor when cryoprobes are separated by 10 mm or 15 mm in a triangular pattern; it kills almost the entire tumor, with thermal damage of 20 cm\({}^{3}\), when cryoprobes are separated by 20 mm, as shown in Figure 12. The multi-probe cryoablation technique kills almost the entire tumor when probes are separated by 15 mm and 20 mm from each other in a square pattern, with thermal damage of 19 cm\({}^{3}\) and 40 cm\({}^{3}\), respectively, for 360 seconds, as shown in Figure 13. When probes are arranged in a square pattern with a 15 mm spacing between them, the multi-probe approach optimizes thermal damage and cooling time. Probe separation distance and cooling time depend on the tumor shape and size. The relation between the separation distance, cooling time, tumor size, and shape for the multi-probe ablation technique is left for future investigation.

Figure 11: Comparison of ablation volumes obtained in all three settings with the tumor volume.

Figure 12: Cell responses when three probes are located in a triangle pattern for different probe separation distances (d) during the treatment.

Figure 13: Cell responses when four probes are located in a square pattern for different probe separation distances (d) during the treatment.

## 4 Conclusion

The liver and tumor were modeled in three dimensions for computational analysis. The thermophysical properties were applied to liver and tumor tissues in three different phases: unfrozen, freezing, and frozen. Single, three, and four cryoprobes were arranged in the center, triangle, and square patterns, respectively, in order to study the impact of cryoprobe arrangement on the temperature profile and ablation zone. This work shows that placing cryoprobes in a polygonal pattern will freeze the tissue uniformly and create a large ablation zone of spherical shape. Using four, three, and one cryoprobe, respectively, 99%, 96%, and 31% of the tumor were destroyed after the 600-second treatment. This study also analyzes the volume of the ablation zone, the thermal damage to healthy cells, and the volume of tumor cells killed during the treatment for different probe separation distances and cooling times. When probes are arranged in a square pattern with a 15 mm spacing between them, the multi-probe technique kills the complete tumor with minimum damage to healthy cells in a short cooling time compared to other settings.
Therefore, this study helps physicians to arrange cryoprobes during the treatment of a large tumor.

## Acknowledgement

None.

## Disclosure statement

No potential conflict of interest was reported by the author(s).

## Funding

None.
2305.19647
Some properties of I*-sequential topological space
In this paper, we will define the $\mathcal{I}^{*}$-sequential topology on a topological space $(X,\tau)$, where $\mathcal{I}$ is an ideal of subsets of the natural numbers $\mathbb{N}$. Besides the basic properties of the $\mathcal{I}^{*}$-sequential topology, we prove that the $\mathcal{I}^{*}$-sequential topology is finer than the $\mathcal{I}$-sequential topology. Further, we will discuss the main properties of $\mathcal{I}^{*}$-sequential continuity and $\mathcal{I}^{*}$-sequential compactness.
H. Sabor Behmanush, M. Kucukaslan
2023-05-31T08:23:44Z
http://arxiv.org/abs/2305.19647v1
# Some properties of \(\mathcal{I}^{*}\)-sequential topological space ###### Abstract. In this paper, we will define the \(\mathcal{I}^{*}\)-sequential topology on a topological space \((X,\tau)\), where \(\mathcal{I}\) is an ideal of subsets of the natural numbers \(\mathbb{N}\). Besides the basic properties of the \(\mathcal{I}^{*}\)-sequential topology, we prove that the \(\mathcal{I}^{*}\)-sequential topology is finer than the \(\mathcal{I}\)-sequential topology. Further, we will discuss the main properties of \(\mathcal{I}^{*}\)-sequential continuity and \(\mathcal{I}^{*}\)-sequential compactness. 2010 Mathematics Subject Classification: 54A20; 54B15; 54C08; 54D55; 26A03; 40A05 Keywords: \(\mathcal{I}\)-convergence, \(\mathcal{I}^{*}\)-convergence, Statistical convergence, Sequentially \(\mathcal{I}\)-topology, Sequentially \(\mathcal{I}^{*}\)-topology

## 1. Introduction

The main topic of this paper is to introduce the \(\mathcal{I}^{*}\)-sequential topological space, in analogy with the \(\mathcal{I}\)-sequential topological space introduced by X. Zhou, L. Liu and S. Lin in [22]. Let us start with a few words about the history of this concept. The concept of statistical convergence was defined by Steinhaus [20] and Fast [5], independently, in 1951. Many studies on statistical convergence have been conducted over the years. It has many applications in different fields of mathematics like summability theory, number theory, trigonometric series, probability theory, measure theory, optimization, approximation, etc. After that, in 2000, the authors P. Kostyrko, T. Salat and W. Wilczynski introduced the notion of \(\mathcal{I}\)-convergence in [10], which, in particular cases, coincides with statistical and classical convergence. Because of the flexibility of the ideal concept, several results relating to \(\mathcal{I}\)-convergence have been given in different spaces [17, 18, 9, 6, 12, 21, 8, 16, 7]. Between the years 2012-2019, the authors B.K. Lahiri and P. Das, S.K. Pal, A.K. Banerjee and A. Banerjee, and X. Zhou, L. Liu and S. Lin, in the papers [11, 14, 2, 22], extended the idea of \(\mathcal{I}\)-convergence of a sequence to arbitrary topological spaces and introduced several properties of this concept in a topological space; recently, research on \(\mathcal{I}\)-convergence has been generalized to \(\mathcal{I}^{*}\)-convergence. In this paper, we continue the study of the topological space defined by \(\mathcal{I}^{*}\)-convergence and we derive some basic properties. Apart from that, we are going to compare the \(\mathcal{I}\)-sequential topology with the \(\mathcal{I}^{*}\)-sequential topology. Recall the notion of statistical convergence in a topological space. For any subset \(A\) of \(\mathbb{N}\), the asymptotic density of \(A\) is given by \[\delta(A):=\lim_{n\to\infty}\frac{1}{n}\left|\{k\in A:k\leq n\}\right|\] if the limit exists. Let \((X,\tau)\) be a topological space. A sequence \(\tilde{x}=(x_{n})\subseteq X\) is said to converge statistically to a point \(x\in X\) if \[\delta(\{n\in\mathbb{N}:x_{n}\notin U\})=0\] holds for any neighborhood \(U\) of \(x\); equivalently, if \[\delta(\{n\in\mathbb{N}:x_{n}\in U\})=1\] holds. It is denoted by \[st-\lim_{n\to\infty}x_{n}=x\ \text{or}\ x_{n}\stackrel{{ st}}{{\to}}x.\] For a sequence \(\tilde{x}=(x_{n})\) and a subset \(U\) of \(X\), denote by \[A_{U}(\tilde{x}):=\{n\in\mathbb{N}:x_{n}\notin U\},\] which is also denoted by \(A_{U}\).
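For intuition, here is a short worked example of the asymptotic density (a standard computation, not taken from the original paper). For the even numbers \(E=\{2,4,6,\ldots\}\) we have \(|\{k\in E:k\leq n\}|=\lfloor n/2\rfloor\), and for the perfect squares \(S=\{1,4,9,\ldots\}\) we have \(|\{k\in S:k\leq n\}|=\lfloor\sqrt{n}\rfloor\), so \[\delta(E)=\lim_{n\to\infty}\frac{1}{n}\left\lfloor\frac{n}{2}\right\rfloor=\frac{1}{2},\qquad\delta(S)=\lim_{n\to\infty}\frac{\lfloor\sqrt{n}\rfloor}{n}=0.\] Thus the squares are negligible in the sense of density, while the even numbers are not.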
It is easy to see in [22] that a sequence in a topological space \(X\) converges statistically to a point \(x\in X\) if and only if for any neighborhood \(U\) of \(x\) we have \[\delta(A_{U})=0\] (equivalently, \(\delta(A_{U}^{c})=1\)). Now, we recall the concept of a sequential topological space. Let \((X,\tau)\) be a topological space. A subset \(F\subseteq X\) is called sequentially closed if for each sequence \(\tilde{x}=(x_{n})\subset F\) with \(x_{n}\to x\in X\) we have \(x\in F\). The space \((X,\tau)\) is called a sequential topological space if each sequentially closed subset of \(X\) is closed. A subset \(U\subseteq X\) is said to be sequentially open if \(X-U\) is sequentially closed. A sequence \(\tilde{x}=(x_{n})\subset X\) is said to be eventually in an open subset \(U\) of \(X\) if there exists \(n_{0}\in\mathbb{N}\) such that \(x_{n}\in U\) holds for all \(n>n_{0}\). Obviously, a subset \(U\subseteq X\) is sequentially open if and only if each sequence \(\tilde{x}=(x_{n})\) converging to a point \(x\in U\) is eventually in \(U\).

**Definition 1**.: _Let \(S\) be a set. A family \(\mathcal{I}\subseteq P(S)\) is called an ideal on \(S\) if (i) \(A,B\in\mathcal{I}\) implies \(A\cup B\in\mathcal{I}\); (ii) \(A\in\mathcal{I}\) and \(B\subseteq A\) imply \(B\in\mathcal{I}\)._

An ideal on \(S\) is called admissible if \(\{s\}\in\mathcal{I}\) holds for all \(s\in S\); an ideal on \(S\) is called a proper ideal if \(S\notin\mathcal{I}\). A proper ideal is called a maximal ideal if it is a maximal element of the set of all proper ideals on \(S\) ordered by inclusion. An ideal \(\mathcal{I}\) is called non-trivial if \(\mathcal{I}\neq\phi\) and \(S\notin\mathcal{I}\).

**Example 1**.: _The families \(\mathcal{I}_{Fin}=\left\{A\subset\mathbb{N}:A\text{ is a finite set}\right\}\) and \(\mathcal{I}_{\delta}=\left\{A\subset\mathbb{N}:\delta(A)=0\right\}\) are ideals on the set of natural numbers._

**Remark 1**.: _If we consider (i) \(\mathcal{I}_{\delta}:=\left\{A\subset\mathbb{N}:\delta(A)=0\right\}\), then statistical and ideal convergence coincide; (ii) \(\mathcal{I}_{Fin}:=\left\{A\subset\mathbb{N}:A\text{ is a finite set}\right\}\), then classical convergence and ideal convergence coincide._

**Example 2**.: _[9] Let \(\mathbb{N}=\bigcup_{i=1}^{\infty}\Delta_{i}\) be a decomposition of \(\mathbb{N}\) such that for all \(i\in\mathbb{N}\) the sets \(\Delta_{i}\) are infinite subsets of \(\mathbb{N}\) and \(\Delta_{i}\cap\Delta_{j}=\phi\) for all \(i\neq j\). Let \[\mathcal{I}=\left\{B\subset\mathbb{N}:B\text{ intersects at most a finite number of }\Delta_{j}\text{'s}\right\}.\] Then, \(\mathcal{I}\) is an admissible and non-trivial ideal._

The dual notion to the notion of an ideal is called a filter and is defined as follows:

**Definition 2**.: _[17] A family \(\mathscr{F}\subseteq P(S)\) is said to be a filter if (i) \(A\cap B\in\mathscr{F}\) for all \(A,B\in\mathscr{F}\); (ii) \(A\in\mathscr{F}\) and \(A\subseteq B\) imply \(B\in\mathscr{F}\)._

A filter \(\mathscr{F}\) is called proper if \(\phi\notin\mathscr{F}\). Every non-trivial ideal \(\mathcal{I}\) defines a dual filter \[\mathscr{F}(\mathcal{I}):=\{A\subseteq S:S-A\in\mathcal{I}\}\] on the set \(S\), and we say that \(\mathscr{F}(\mathcal{I})\) is the filter associated with \(\mathcal{I}\). In this paper, we consider \(S=\mathbb{N}\), the set of natural numbers; \((X,\tau)\) denotes a topological space, and \(\mathcal{I}\) is an admissible ideal of subsets of \(\mathbb{N}\). Unless otherwise stated, the triple \(X\), \(\tau\) and \(\mathcal{I}\) will be written as \((X,\tau,\mathcal{I})\).
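For concreteness, here are the dual filters of the two ideals from Example 1 (a standard observation, not taken from the original paper): \[\mathscr{F}(\mathcal{I}_{Fin})=\{A\subseteq\mathbb{N}:\mathbb{N}-A\text{ is finite}\},\qquad\mathscr{F}(\mathcal{I}_{\delta})=\{A\subseteq\mathbb{N}:\delta(A)=1\},\] that is, the cofinite (Frechet) filter and the filter of density-one sets, respectively.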
**Definition 3**.: _[22] A sequence \(\tilde{x}=(x_{n})\) in a topological space \((X,\tau,\mathcal{I})\) is said to be \(\mathcal{I}\)-convergent to a point \(x\in X\) if \[\{n\in\mathbb{N}:x_{n}\notin U\}\in\mathcal{I}\] holds for any neighborhood \(U\) of \(x\), and it is denoted by \(\mathcal{I}-\lim x_{n}=x\) (or \((x_{n})\stackrel{{\mathcal{I}}}{{\rightarrow}}x\))._

**Remark 2**.: _If \(\mathcal{I}\) is an admissible ideal, then classical convergence implies \(\mathcal{I}\)-convergence._

Unfortunately, the converse of Remark 2 is not true. To see this, let \(x\) and \(y\) be two different elements of \(X\) (with some neighborhood of \(y\) not containing \(x\)), let \(A\in\mathcal{I}\) be an infinite set, and consider the sequence defined by \(x_{n}=x\) when \(n\in A\) and \(x_{n}=y\) when \(n\notin A\). It is clear that the sequence \((x_{n})\) is \(\mathcal{I}\)-convergent to \(y\) but not convergent to \(y\) in the usual sense.

**Definition 4**.: _[1] Let \(\mathcal{I}\) be an ideal of \(\mathbb{N}\) and \(X\) be a topological space. (i) For each subset \(A\subseteq X\), we define \(\bar{A}^{\mathcal{I}}\), the \(\mathcal{I}\)-closure of \(A\), by \[\bar{A}^{\mathcal{I}}=\{x\in X:\exists\ (x_{n})\subset A:x_{n}\stackrel{{\mathcal{I}}}{{\rightarrow}}x\}.\] (ii) A subset \(F\subseteq X\) is said to be \(\mathcal{I}\)-closed if \(\bar{F}^{\mathcal{I}}=F\). (iii) A subset \(A\subseteq X\) is said to be \(\mathcal{I}\)-open if \(X-A\) is \(\mathcal{I}\)-closed._

It is clear that \(\bar{\phi}^{\mathcal{I}}=\phi\) and \(A\subseteq\bar{A}^{\mathcal{I}}\) hold. Also, it can be easily seen that any open subset of a topological space \((X,\tau,\mathcal{I})\) is also \(\mathcal{I}\)-open.

## 2. \(\mathcal{I}^{*}\)-sequential topological space

Now, we are going to study the notion of the \(\mathcal{I}^{*}\)-sequential topological space, which is mostly similar to the \(\mathcal{I}\)-sequential topological space, but the corresponding topology is finer than the \(\mathcal{I}\)-sequential topology.

**Definition 5**.: _Let \((X,\tau,\mathcal{I})\) be a topological space. A sequence \(\tilde{x}=(x_{n})\) in \(X\) is said to be \(\mathcal{I}^{*}\)-convergent to a point \(x\in X\) if there exists \(M\in\mathscr{F}(\mathcal{I})\), \[M=\{m_{1}<m_{2}<\cdots<m_{k}<\cdots\},\] such that for any neighborhood \(U\) of \(x\) there exists \(N\in\mathbb{N}\) such that \(x_{m_{k}}\in U\) holds for all \(m_{k}>N\)._

If in the topological space \((X,\tau,\mathcal{I})\) the set \(X\) is also a field, then the definition of \(\mathcal{I}^{*}\)-convergence can be reformulated in the form of a decomposition theorem as follows:

**Theorem 1**.: _Let \((X,\tau,\mathcal{I})\) be a topological space. A sequence \(\tilde{x}=(x_{n})\subset X\) is \(\mathcal{I}^{*}\)-convergent to a point \(x\in X\) if and only if it can be written as \(x_{n}=t_{n}+s_{n}\) for all \(n\in\mathbb{N}\), where \(\tilde{t}=(t_{n})\subset X\) is \(\mathcal{I}_{Fin}\)-convergent to \(x\) and \(\tilde{s}=(s_{n})\subset X\) is nonzero only on a set from the ideal \(\mathcal{I}\)._

Proof.: Let \(x_{n}=t_{n}+s_{n}\) for all \(n\in\mathbb{N}\), where \(t_{n}{\rightarrow}x(\mathcal{I}_{Fin})\) and \((s_{n})\) is nonzero only on a set \(K\in\mathcal{I}\). Put \[M=\mathbb{N}-K\in\mathscr{F}(\mathcal{I}).\] Then, \(s_{n}=0\) for all \(n\in M\).
So, \(x_{n}=t_{n}\) for all \(n\in M\); since the set \(\{n\in\mathbb{N}:t_{n}\notin U\}\) is finite for any neighborhood \(U\) of \(x\), there exists \(N\in\mathbb{N}\) such that \(x_{n}\in U\) holds for all \(n\in M\) with \(n>N\). Hence, \(x_{n}\overset{\mathcal{I}^{*}}{\rightarrow}x\). Conversely, let \(x_{n}\overset{\mathcal{I}^{*}}{\rightarrow}x\). Then, there exist \(M\in\mathscr{F}(\mathcal{I})\) and \(N\in\mathbb{N}\) such that \(x_{n}\in U\) holds for all \(n>N\) with \(n\in M\), for any neighborhood \(U\) of \(x\). Define the sequence \(\tilde{t}=(t_{n})\) as \[t_{n}=\begin{cases}x_{n},&n\in M,\\ x,&n\notin M,\end{cases}\] and the sequence \(\tilde{s}=(s_{n})\) as \[s_{n}=\begin{cases}0,&n\in M,\\ x_{n}-x,&n\notin M.\end{cases}\] Then, \(t_{n}{\rightarrow}x(\mathcal{I}_{Fin})\) and \((s_{n})\) is nonzero only on a set from the ideal \(\mathcal{I}\), and \(x_{n}=t_{n}+s_{n}\) holds for all \(n\in\mathbb{N}\). The fact that every \(\mathcal{I}^{*}\)-convergent sequence is \(\mathcal{I}\)-convergent was stated by B. K. Lahiri and P. Das in [11]. Because this result plays an important role in relating the \(\mathcal{I}\)-sequential topology and the \(\mathcal{I}^{*}\)-sequential topology, we restate it here: **Theorem 2**.: _[11] Let \((X,\tau,\mathcal{I})\) be a topological space. If \(x_{n}\overset{\mathcal{I}^{*}}{\rightarrow}x\), then \(x_{n}\overset{\mathcal{I}}{\rightarrow}x\)._ Proof.: Let \(x_{n}\overset{\mathcal{I}^{*}}{\rightarrow}x\). Hence, there exists \(M\in\mathscr{F}(\mathcal{I})\) such that \(M=\mathbb{N}-K\) where \(K\in\mathcal{I}\), \[M=\{m_{1}<m_{2}<\ldots<m_{k}<\ldots\},\] such that for any neighborhood \(U\) of \(x\), there exists \(N\in\mathbb{N}\) such that \(x_{m_{k}}\in U\) holds for all \(m_{k}>N\). This implies that \[\{n:x_{n}\notin U\}\subset K\cup\{m_{1},m_{2},\ldots,m_{N}\}\in\mathcal{I}.\] Hence, \(x_{n}\overset{\mathcal{I}}{\rightarrow}x\). The following example shows that the converse of Theorem 2 is not true: **Example 3**.: _Let \((\mathbb{R},\tau_{e})\) be the real line with the Euclidean topology, and consider the point zero. Let \(n\in\mathbb{N}\) and consider_ \[B_{n}(0):=(-\frac{1}{2n},\frac{1}{2n}),\] _a monotonically decreasing open base at zero._ _Define a real-valued sequence \(\tilde{x}=(x_{n})\) such that_ \[x_{n}\in B_{n}(0)-B_{n+1}(0)\] _as follows:_ \[(x_{n})=(\frac{2n+1}{4n^{2}+4n}).\] _It is clear that \(x_{n}\to 0\) as \(n\to\infty\). Consider the ideal given in Example 2 and note that each \(\Delta_{i}\) is a member of \(\mathcal{I}\)._ _Let \(\tilde{y}=(y_{n})\) be the sequence defined by \(y_{n}=x_{j}\) if \(n\in\Delta_{j}\). Let \(U\) be any open set containing zero. Choose a positive integer \(m\) such that_ \[B_{n}(0)\subset U,\] _for all \(n>m\); then_ \[\{n:y_{n}\notin U\}\subset\Delta_{1}\cup\Delta_{2}\cup\Delta_{3}\cdots\cup\Delta_{m}\in\mathcal{I},\] _which implies that \(y_{n}\stackrel{\mathcal{I}}{\to}0\)._ _Now suppose that \(y_{n}\stackrel{\mathcal{I}^{*}}{\to}0\). Then there exists \(H\in\mathcal{I}\) such that for_ \[M=\mathbb{N}-H=\{m_{1}<m_{2}<...<m_{k}...\}\in\mathscr{F}(\mathcal{I})\] _we have: there exists \(N\in\mathbb{N}\) such that \(y_{m_{k}}\in U\) for all \(m_{k}>N\), for any neighborhood \(U\) of zero. Since \(H\in\mathcal{I}\), there exists \(l\in\mathbb{N}\) such that_ \[H\subset\Delta_{1}\cup\Delta_{2}\cup\Delta_{3}\cdots\cup\Delta_{l};\] _then \(\Delta_{i}\subset\mathbb{N}-H\) holds for all \(i\geq l+1\)._ _Therefore, for each \(i\geq l+1\), there are infinitely many \(k\)'s such that \(y_{m_{k}}=x_{i}\).
But then \((y_{m_{k}})\) cannot be eventually in every neighborhood of zero: taking \(U=B_{l+2}(0)\), we have \(y_{m_{k}}=x_{l+1}\notin U\) for the infinitely many \(m_{k}\in\Delta_{l+1}\subset M\), since \(x_{l+1}\in B_{l+1}(0)-B_{l+2}(0)\); this contradicts \(y_{n}\stackrel{\mathcal{I}^{*}}{\to}0\)._ **Theorem 3**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be two ideals of \(\mathbb{N}\) such that \(\mathcal{I}\subseteq\mathcal{J}\), and let \(\tilde{x}=(x_{n})\) be a sequence in a topological space \((X,\tau)\). If \(x_{n}\stackrel{\mathcal{I}^{*}}{\to}x\), then \(x_{n}\stackrel{\mathcal{J}^{*}}{\to}x\)._ Proof.: Let \((x_{n})\stackrel{\mathcal{I}^{*}}{\to}x\). Then, there exists \(M\in\mathscr{F}(\mathcal{I})\), \[M=\{m_{1}<m_{2}<\dots<m_{k}<\dots\},\] such that for any neighborhood \(U\) of \(x\), there exists \(N\in\mathbb{N}\) such that \(x_{m_{k}}\in U\) holds for all \(m_{k}>N\). Now, \(M\in\mathscr{F}(\mathcal{I})\) implies that \(\mathbb{N}-M\in\mathcal{I}\). By assumption, \(\mathbb{N}-M\in\mathcal{J}\), so \(M\in\mathscr{F}(\mathcal{J})\); hence \((x_{n})\stackrel{\mathcal{J}^{*}}{\to}x\). **Remark 3**.: _The converse of Theorem 3 is not true, in general._ **Definition 6**.: _Let \((X,\tau,\mathcal{I})\) be a topological space._ _(i) For each subset \(A\) of \(X\) we define \(\overline{A}^{\mathcal{I}^{*}}\) (the \(\mathcal{I}^{*}\)-closure of \(A\)) by_ \[\overline{A}^{\mathcal{I}^{*}}=\{x\in X:\exists(x_{n})\subset A\text{ such that }x_{n}\stackrel{\mathcal{I}^{*}}{\to}x\}.\] _(ii) A subset \(F\subseteq X\) is said to be \(\mathcal{I}^{*}\)-closed if \(\overline{F}^{\mathcal{I}^{*}}=F\)._ _(iii) A subset \(U\subseteq X\) is said to be \(\mathcal{I}^{*}\)-open if \(X-U\) is \(\mathcal{I}^{*}\)-closed._ **Remark 4**.: _It is clear that \(\overline{\phi}^{\mathcal{I}^{*}}=\phi\) and \(A\subset\overline{A}^{\mathcal{I}^{*}}\) are true for all \(A\subseteq X\)._ **Theorem 4**.: _Let \((X,\tau)\) be a topological space, and let \(\mathcal{I}\) be an ideal consisting of finite sets (i.e., \(\mathcal{I}\subseteq\mathcal{I}_{Fin}\)). Then, \(\mathcal{I}\)-convergence and \(\mathcal{I}^{*}\)-convergence of a sequence coincide._ Proof.: We have already proved that if \(x_{n}\stackrel{\mathcal{I}^{*}}{\to}x\), then \(x_{n}\stackrel{\mathcal{I}}{\to}x\), for any ideal of \(\mathbb{N}\). Let a sequence \(x_{n}\stackrel{\mathcal{I}}{\to}x\); then for any neighborhood \(U\) of \(x\), \[A:=\{n\in\mathbb{N}:x_{n}\notin U\}\in\mathcal{I}.\] Consider \(M=\mathbb{N}-A\in\mathscr{F}(\mathcal{I})\) and arrange \(M\) as \[M=\{m_{1}<m_{2}<\cdots<m_{k}<\cdots\}.\] Since the set \(A\) is finite, there exists \(N\in\mathbb{N}\) such that \(x_{m_{k}}\in U\) holds for all \(m_{k}>N\). Therefore, \(x_{n}\stackrel{\mathcal{I}^{*}}{\to}x\). **Theorem 5**.: _Let \((X,\tau,\mathcal{I})\) be a topological space. If \(\mathcal{I}\) is an admissible ideal, then every \(\mathcal{I}\)-open subset of \(X\) is an \(\mathcal{I}^{*}\)-open subset of \(X\)._ Proof.: Let \(U\) be an \(\mathcal{I}\)-open subset of \(X\). Then, \(X-U\) is \(\mathcal{I}\)-closed, that is, \(X-U=\overline{X-U}^{\mathcal{I}}\) holds. To prove \[X-U=\overline{X-U}^{\mathcal{I}^{*}},\] it is sufficient to show that \[\overline{X-U}^{\mathcal{I}^{*}}\subset X-U\] holds. Let \(x\in\overline{X-U}^{\mathcal{I}^{*}}\) be an arbitrary point. Then, there exists a sequence \(\tilde{x}=(x_{n})\subset X-U\) such that \(x_{n}\stackrel{\mathcal{I}^{*}}{\to}x\) holds; then by Theorem 2 we have \(x_{n}\stackrel{\mathcal{I}}{\to}x\). This implies that \[x\in\overline{X-U}^{\mathcal{I}}=X-U.\] Hence, the proof of the Theorem is complete. **Corollary 1**.: _Let \((X,\tau,\mathcal{I})\) be a topological space.
If \(\mathcal{I}\) is an ideal consisting of finite sets, then \(A\subset X\) is \(\mathcal{I}\)-open if and only if \(A\) is an \(\mathcal{I}^{*}\)-open subset of \(X\)._ **Definition 7**.: _Let \(A\) be a subset of a topological space \((X,\tau,\mathcal{I})\). We define \(A^{o^{\mathcal{I}^{*}}}\) (called the \(\mathcal{I}^{*}\)-interior of \(A\)) as_ \[A^{o^{\mathcal{I}^{*}}}:=A-\overline{(X-A)}^{\mathcal{I}^{*}}.\] **Proposition 1**.: _Let \(A\) be a subset of a topological space \((X,\tau,\mathcal{I})\). Then, the set \(A\) is \(\mathcal{I}^{*}\)-open if and only if \(A^{o^{\mathcal{I}^{*}}}=A\)._ Proof.: Let \(A\) be an \(\mathcal{I}^{*}\)-open subset of the topological space \((X,\tau,\mathcal{I})\). Then, the set \(X-A\) is \(\mathcal{I}^{*}\)-closed, that is, \[X-A=\overline{X-A}^{\mathcal{I}^{*}}.\] Therefore, \[A^{o^{\mathcal{I}^{*}}}=A-\overline{(X-A)}^{\mathcal{I}^{*}}=A-(X-A)=A\] holds. Conversely, assume that \(A=A^{o^{\mathcal{I}^{*}}}\) holds. As we defined, \[A^{o^{\mathcal{I}^{*}}}:=A-\overline{(X-A)}^{\mathcal{I}^{*}}.\] Then, \[A=A-\overline{(X-A)}^{\mathcal{I}^{*}}\] implies that \(A\cap\overline{(X-A)}^{\mathcal{I}^{*}}=\phi\). Therefore, \(\overline{(X-A)}^{\mathcal{I}^{*}}\subset X-A\), and this shows that \(\overline{(X-A)}^{\mathcal{I}^{*}}=X-A\). Hence, \(X-A\) is \(\mathcal{I}^{*}\)-closed and \(A\) is \(\mathcal{I}^{*}\)-open. **Theorem 6**.: _Let \(A\) be a subset of a topological space \((X,\tau,\mathcal{I})\). Then, the following assertions are equivalent:_ _(i) \(A\) is \(\mathcal{I}^{*}\)-closed._ _(ii) \(A=\bigcap\{F:F\text{ is }\mathcal{I}^{*}\text{-closed and }A\subset F\}\)._ Proof.: \((i)\Rightarrow(ii)\) is obvious. Let \[A=\bigcap\{F:F\text{ is }\mathcal{I}^{*}\text{-closed and }A\subset F\}.\] To show that \(\bar{A}^{\mathcal{I}^{*}}=A\) holds, it is sufficient to prove that \[\bar{A}^{\mathcal{I}^{*}}\subseteq A\] holds. Let \(x_{0}\in\bar{A}^{\mathcal{I}^{*}}\) be an arbitrary point; then there exists a sequence \((x_{n})\subset A\) such that \((x_{n})\) is \(\mathcal{I}^{*}\)-convergent to \(x_{0}\). Assume that \(x_{0}\notin A\). So, \[x_{0}\notin\bigcap\{F:F\text{ is }\mathcal{I}^{*}\text{-closed and }A\subset F\}.\] Then, there exists an \(\mathcal{I}^{*}\)-closed subset \(F\) of \(X\) with \(A\subset F\) and \(x_{0}\notin F\). Since \((x_{n})\subset A\subset F\) and \(F\) is \(\mathcal{I}^{*}\)-closed, \(x_{0}\) must be in \(F\), which contradicts our assumption. **Theorem 7**.: _Let \(A\) be a subset of a topological space \((X,\tau,\mathcal{I})\). Then, the following assertions are equivalent:_ _(i) \(A\) is \(\mathcal{I}^{*}\)-open._ _(ii) \(A=\bigcup\{U:U\text{ is }\mathcal{I}^{*}\text{-open and }U\subset A\}\)._ Proof.: \((i)\Rightarrow(ii)\) is obvious. Conversely, let \[A=\bigcup\{U:U\text{ is }\mathcal{I}^{*}\text{-open and }U\subset A\}.\] To prove that \(A\) is an \(\mathcal{I}^{*}\)-open subset of \(X\), we must show that \(A=A^{o^{\mathcal{I}^{*}}}\) holds. It is known that \(A^{o^{\mathcal{I}^{*}}}\) is always a subset of \(A\). So, it is sufficient to show that \(A\subset A^{o^{\mathcal{I}^{*}}}\). Let \(x_{0}\in A\) be an arbitrary point; then there is an \(\mathcal{I}^{*}\)-open subset \(U\) of \(A\) such that \(x_{0}\in U\). Since \(U\subset A\), it follows that \(x_{0}\in A^{o^{\mathcal{I}^{*}}}\), and this implies that \[A\subset A^{o^{\mathcal{I}^{*}}}\] holds. This completes the proof. **Definition 8**.: _Let \(\mathcal{I}\) be an ideal and \(U\) be a subset of a topological space \(X\).
A sequence \(\tilde{x}=(x_{n})\subset X\) is \(\mathcal{I}^{*}\)-eventually in \(U\) if there exists \(M\in\mathscr{F}(\mathcal{I})\) such that \(x_{m}\in U\) holds for all \(m\in M\)._ **Proposition 2**.: _Let \(\mathcal{I}\) be a maximal ideal of \(\mathbb{N}\) and \((X,\tau)\) be a topological space. Then, a subset \(U\subseteq X\) is \(\mathcal{I}^{*}\)-open if and only if each sequence in \(X\) that is \(\mathcal{I}^{*}\)-convergent to a point \(x\in U\) is \(\mathcal{I}^{*}\)-eventually in \(U\)._ Proof.: Let us assume that each sequence which is \(\mathcal{I}^{*}\)-convergent to a point \(x_{0}\in U\) is \(\mathcal{I}^{*}\)-eventually in \(U\). That is, if \((x_{n})\subset X\) is a sequence which is \(\mathcal{I}^{*}\)-convergent to \(x_{0}\in U\), then there exists \(M\in\mathscr{F}(\mathcal{I})\) such that \(x_{m}\in U\) holds for all \(m\in M\). Now, we are going to show that \(U\) is \(\mathcal{I}^{*}\)-open. For this purpose, we will show that \[\overline{X-U}^{\mathcal{I}^{*}}=(X-U).\] It is clear that \((X-U)\subset\overline{X-U}^{\mathcal{I}^{*}}\). So, to finish the proof it is sufficient to show that \[\overline{X-U}^{\mathcal{I}^{*}}\subseteq(X-U)\] holds. Let \(x\in\overline{X-U}^{\mathcal{I}^{*}}\) be an arbitrary point. Then, there exists a sequence \((x_{n})\subset(X-U)\) such that \((x_{n})\) is \(\mathcal{I}^{*}\)-convergent to \(x\). We must show that \(x\in X-U\). Assume that \(x\notin X-U\); then \(x\in U\). Since every sequence \(\mathcal{I}^{*}\)-convergent to a point of \(U\) is \(\mathcal{I}^{*}\)-eventually in \(U\), there exists \(M\in\mathscr{F}(\mathcal{I})\) such that \(x_{m}\in U\) for all \(m\in M\); but we have \(x_{n}\in X-U\) for all \(n\), which is a contradiction. Hence \(x\in X-U\), and \(U\) is \(\mathcal{I}^{*}\)-open. Conversely, let \(U\) be an \(\mathcal{I}^{*}\)-open subset of \((X,\tau)\) and let \((x_{n})\subset X\) be a sequence which \(\mathcal{I}^{*}\)-converges to a point \(x\in U\). Since \(U\) is an \(\mathcal{I}^{*}\)-open subset of the space \(X\), it is a neighborhood of the point \(x\); moreover, as \(x_{n}\stackrel{\mathcal{I}^{*}}{\rightarrow}x\), Theorem 2 gives \(x_{n}\stackrel{\mathcal{I}}{\rightarrow}x\), so \[E:=\{n:x_{n}\notin U\}\in\mathcal{I}.\] Therefore, \[\mathbb{N}-E=\{n:x_{n}\in U\}=M\in\mathscr{F}(\mathcal{I}).\] Hence, \(x_{m}\in U\) holds for all \(m\in M\). This implies that \((x_{n})\) is \(\mathcal{I}^{*}\)-eventually in \(U\), and this completes the proof. **Theorem 8**.: _Let \(\mathcal{I}\) be an admissible ideal of \(\mathbb{N}\) and \((X,\tau)\) be a topological space. If \(U\) and \(V\) are \(\mathcal{I}^{*}\)-open subsets of \(X\), then \(U\cap V\) is an \(\mathcal{I}^{*}\)-open subset of \(X\)._ Proof.: Let \((x_{n})\) be an \(\mathcal{I}^{*}\)-convergent sequence in \(X\) which converges to a point \(x\in U\cap V\); then \(x\in U\) and \(x\in V\). Since \(U\) and \(V\) are \(\mathcal{I}^{*}\)-open sets and the sequence \((x_{n})\) is \(\mathcal{I}^{*}\)-converging to a point \(x\) in \(U\) and also in \(V\), by Proposition 2 the sequence \((x_{n})\) is \(\mathcal{I}^{*}\)-eventually in \(U\) and also in \(V\). Then there exist \(M_{1}\in\mathscr{F}(\mathcal{I})\) and \(M_{2}\in\mathscr{F}(\mathcal{I})\) such that \(x_{m}\in U\) for all \(m\in M_{1}\) and \(x_{m}\in V\) for all \(m\in M_{2}\). Let \(M=M_{1}\cap M_{2}\); then \(M\in\mathscr{F}(\mathcal{I})\) and \(x_{m}\in U\cap V\) holds for all \(m\in M\). This shows that \(U\cap V\) is an \(\mathcal{I}^{*}\)-open subset of \(X\).
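To see Definition 8 in action, take the density ideal \(\mathcal{I}_{\delta}\) and the sequence with \(x_{n}=1\) on the perfect squares (a density-zero set) and \(x_{n}=1/n\) otherwise; along \(M=\{\text{non-squares}\}\in\mathscr{F}(\mathcal{I}_{\delta})\), the sequence should be eventually in \(U=(-\varepsilon,\varepsilon)\). The small Python check below is our own illustration (the sequence is ours, not from the text):

```python
import math

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

eps, N = 0.01, 10_000
# Indices n in M (the non-squares) whose terms x_n = 1/n fall outside U = (-eps, eps):
exceptions = [n for n in range(1, N + 1) if not is_square(n) and 1.0 / n >= eps]
print(len(exceptions), max(exceptions))   # 90 exceptions, the largest being n = 99
```

Only finitely many indices of \(M\) (those with \(n\leq 100\)) fail to land in \(U\), so the sequence is \(\mathcal{I}^{*}\)-eventually in \(U\) in the sense of Definition 8.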
**Theorem 9**.: _Let \(\mathcal{I}\) be a maximal ideal of \(\mathbb{N}\) and \((X,\tau)\) be a topological space. A sequence \((x_{n})\subset X\) is \(\mathcal{I}^{*}\)-convergent to an element \(x\in X\) if and only if for any \(\mathcal{I}^{*}\)-open subset \(U\) of \(X\) with \(x\in U\), there is \(M\in\mathscr{F}(\mathcal{I})\) such that \(x_{m}\in U\) for all \(m\in M\)._ Proof.: Let \(\mathcal{I}\) be a maximal ideal and let \((x_{n})\) be a sequence which \(\mathcal{I}^{*}\)-converges to \(x\in X\). Let \(U\) be an \(\mathcal{I}^{*}\)-open subset of \(X\) with \(x\in U\). Then \((x_{n})\) is \(\mathcal{I}^{*}\)-eventually in \(U\) by Proposition 2. Hence there exists a set \(M\in\mathscr{F}(\mathcal{I})\) such that \(x_{m}\in U\) for all \(m\in M\). The converse of the Theorem is clear from the definition of \(\mathcal{I}^{*}\)-convergence, so it is omitted here. **Theorem 10**.: _Let \((X,\tau,\mathcal{I})\) be a topological space. Then, the family_ \[\tau_{\mathcal{I}^{*}}:=\{U\in P(X):U\ \text{is an}\ \mathcal{I}^{*}\text{-open set}\}\] _is a topology on \(X\). (It is called the \(\mathcal{I}^{*}\)-sequential topology on \(X\).)_ Proof.: It is obvious that \(X\) and \(\phi\) are \(\mathcal{I}^{*}\)-open sets, because the set \(X\) and the empty set \(\phi\) are \(\mathcal{I}^{*}\)-closed, so their complements are \(\mathcal{I}^{*}\)-open subsets of the topological space \(X\). Let \(U,V\in\tau_{\mathcal{I}^{*}}\). By Theorem 8, we can say that \(U\cap V\in\tau_{\mathcal{I}^{*}}\). Hence, a finite intersection of \(\mathcal{I}^{*}\)-open sets is \(\mathcal{I}^{*}\)-open. Let \((U_{\alpha})_{\alpha\in\Lambda}\) be an arbitrary family of elements of \(\tau_{\mathcal{I}^{*}}\). We are going to show that \[\bigcup_{\alpha\in\Lambda}U_{\alpha}\in\tau_{\mathcal{I}^{*}}.\] Since \[X-\bigcup_{\alpha\in\Lambda}U_{\alpha}=\bigcap_{\alpha\in\Lambda}(X-U_{\alpha}),\] it is sufficient to show that \[\bigcap_{\alpha\in\Lambda}(X-U_{\alpha})\] is \(\mathcal{I}^{*}\)-closed. That is, \[\overline{\bigcap_{\alpha\in\Lambda}(X-U_{\alpha})}^{\mathcal{I}^{*}}=\bigcap_{\alpha\in\Lambda}(X-U_{\alpha}).\] Let \(x\in\overline{\bigcap_{\alpha\in\Lambda}(X-U_{\alpha})}^{\mathcal{I}^{*}}.\) Then, there exists a sequence \[(x_{n})\subset\bigcap_{\alpha\in\Lambda}(X-U_{\alpha})\] such that \(x_{n}\stackrel{\mathcal{I}^{*}}{\rightarrow}x\). Therefore, for all \(\alpha\in\Lambda\) the sequence \((x_{n})\subseteq(X-U_{\alpha})\) with \(x_{n}\stackrel{\mathcal{I}^{*}}{\rightarrow}x\). Since \(X-U_{\alpha}\) is \(\mathcal{I}^{*}\)-closed for all \(\alpha\in\Lambda\), we have \(x\in X-U_{\alpha}\). Hence, \(x\in\bigcap_{\alpha\in\Lambda}(X-U_{\alpha})\); thus \(\bigcap_{\alpha\in\Lambda}(X-U_{\alpha})\) is \(\mathcal{I}^{*}\)-closed. **Theorem 11**.: _If \(\mathcal{I}\) is an admissible ideal and the topological space \((X,\tau)\) has no limit point, then every \(\mathcal{I}^{*}\)-open set is an \(\mathcal{I}\)-open set._ Proof.: Let \(U\) be an \(\mathcal{I}^{*}\)-open set. To prove that \(U\) is an \(\mathcal{I}\)-open set, it is enough to show that \(X-U\) is \(\mathcal{I}\)-closed. That is, we are going to show that \(X-U=\overline{X-U}^{\mathcal{I}}\). It is clear that \(X-U\subseteq\overline{X-U}^{\mathcal{I}}\). The proof will be finished if we show that \(\overline{X-U}^{\mathcal{I}}\subseteq X-U\). Let \(x\in\overline{X-U}^{\mathcal{I}}\). Then, there exists a sequence \((x_{n})\subset X-U\) such that \((x_{n})\) is \(\mathcal{I}\)-convergent to \(x\).
Since \(\mathcal{I}\) is admissible and \(X\) has no limit point, by [11] the sequence \((x_{n})\) \(\mathcal{I}^{*}\)-converges to \(x\). Therefore, \(x\in\overline{X-U}^{\mathcal{I}^{*}}\). As \(U\) is \(\mathcal{I}^{*}\)-open, \(\overline{X-U}^{\mathcal{I}^{*}}=X-U\) holds. This implies that \(x\in X-U\), which completes the proof of the Theorem. **Corollary 2**.: _If \(X\) has no limit point and \(\mathcal{I}\) is an admissible ideal, then the topological spaces \((X,\tau_{\mathcal{I}})\) and \((X,\tau_{\mathcal{I}^{*}})\) coincide._ **Definition 9**.: _Let \(\mathcal{I}\) be an ideal of \(\mathbb{N}\)_ **Definition 10**.: _Let \((X,\tau_{1},\mathcal{I})\) and \((Y,\tau_{2},\mathcal{I})\) be two topological spaces and \(f:X\to Y\) be a mapping. The function \(f\) is called:_ _(i) \(\mathcal{I}^{*}\)-continuous if for every \(\mathcal{I}^{*}\)-open subset \(U\) of \(Y\), the preimage \(f^{-1}(U)\) is an \(\mathcal{I}^{*}\)-open subset of \(X\);_ _(ii) sequentially \(\mathcal{I}^{*}\)-continuous if for each sequence \((x_{n})\) in \(X\) which is \(\mathcal{I}^{*}\)-convergent to \(x\), the sequence \(f(x_{n})\) is \(\mathcal{I}^{*}\)-convergent to \(f(x)\)._ **Theorem 12**.: _Let \((X,\tau_{1},\mathcal{I})\) and \((Y,\tau_{2},\mathcal{I})\) be two topological spaces and \(f:X\to Y\) be a function. Then, \(f\) is sequentially \(\mathcal{I}^{*}\)-continuous if and only if \(f\) is an \(\mathcal{I}^{*}\)-continuous function._ Proof.: Let \(f\) be a sequentially \(\mathcal{I}^{*}\)-continuous function, so that for any sequence \(\tilde{x}=(x_{n})\subset X\), if \((x_{n})\) \(\mathcal{I}^{*}\)-converges to \(x\), then \(f(x_{n})\) \(\mathcal{I}^{*}\)-converges to \(f(x)\). Let \(U\) be any \(\mathcal{I}^{*}\)-open set in \(Y\) and assume that \(f^{-1}(U)\) is not an \(\mathcal{I}^{*}\)-open set in \(X\), so that \(X-f^{-1}(U)\) is not \(\mathcal{I}^{*}\)-closed; i.e., \[X-f^{-1}(U)\neq\overline{X-f^{-1}(U)}^{\mathcal{I}^{*}}.\] This implies that \(\overline{X-f^{-1}(U)}^{\mathcal{I}^{*}}\) is not a subset of \(X-f^{-1}(U)\), so there exists a point \(x\in\overline{X-f^{-1}(U)}^{\mathcal{I}^{*}}\) such that \(x\notin X-f^{-1}(U)\). This means that there exists a sequence \((x_{n})\subset X-f^{-1}(U)\) which is \(\mathcal{I}^{*}\)-convergent to \(x\), and \(x\in f^{-1}(U)\). Since \(f\) is sequentially \(\mathcal{I}^{*}\)-continuous, \(f(x_{n})\) is \(\mathcal{I}^{*}\)-convergent to \(f(x)\in U\); but \((f(x_{n}))\subset Y-U\) and \(Y-U\) is \(\mathcal{I}^{*}\)-closed, so \(f(x)\in Y-U\), a contradiction. Hence \(f^{-1}(U)\) is an \(\mathcal{I}^{*}\)-open subset of \(X\). Conversely, let \(f:X\to Y\) be an \(\mathcal{I}^{*}\)-continuous mapping and let \(x_{n}\xrightarrow{\mathcal{I}^{*}}x\). Then for any neighborhood \(U\) of \(x\), there exist \(N\in\mathbb{N}\) and \(M\in\mathscr{F}(\mathcal{I})\) such that \(x_{m_{k}}\in U\) for all \(m_{k}\in M\) with \(m_{k}>N\). Let \(V\) be any \(\mathcal{I}^{*}\)-open neighborhood of \(f(x)\); then \(f^{-1}(V)\) is \(\mathcal{I}^{*}\)-open in \(X\) and contains \(x\), so there exist \(N\in\mathbb{N}\) and \(M\in\mathscr{F}(\mathcal{I})\) such that \(x_{m_{k}}\in f^{-1}(V)\) for all \(m_{k}\in M\) with \(m_{k}>N\). This implies that \(f(x_{m_{k}})\in V\); hence \(f(x_{n})\xrightarrow{\mathcal{I}^{*}}f(x)\). ## 3. Sequentially \(\mathcal{I}^{*}\)-compact subsets The notion of compactness is one of the most significant topological properties. Compactness was formally introduced by M. Fréchet in 1906. After that, many types of compactness were introduced by mathematicians. The concept of \(\mathcal{I}\)-compactness was defined by Newcomb [13] and has been studied by Rancin [15].
In this section, we are going to study the notion of sequential \(\mathcal{I}^{*}\)-compactness, which is a generalization of sequential \(\mathcal{I}\)-compactness. **Definition 11**.: _Let \((X,\tau,\mathcal{I})\) be a topological space._ _(i) \(X\) is called \(\mathcal{I}^{*}\)-separated if for any two distinct points \(x,y\in X\) there are \(\mathcal{I}^{*}\)-open subsets \(U\) and \(V\) containing \(x\) and \(y\), respectively, with \(U\cap V=\phi\)._ _(ii) A subset \(F\) of \(X\) is called \(\mathcal{I}^{*}\)-compact if every \(\mathcal{I}^{*}\)-open cover of \(F\) has a finite subcover of \(F\)._ _(iii) A subset \(F\subset X\) is said to be sequentially \(\mathcal{I}^{*}\)-compact if any sequence \((x_{n})\subset F\) has an \(\mathcal{I}^{*}\)-convergent subsequence \((x_{n_{k}})\subset F\) such that \(x_{n_{k}}\xrightarrow{\mathcal{I}^{*}}x\in F\)._ _(iv) The space \(X\) is said to be countably \(\mathcal{I}^{*}\)-compact if every countable \(\mathcal{I}^{*}\)-open cover of \(X\) has a finite subcover._ **Theorem 13**.: _Let \((X,\tau,\mathcal{I})\) be a topological space and \((Y,\tau_{Y},\mathcal{I})\) be a subspace of \(X\). If the set \(Y\) is \(\mathcal{I}\)-compact, then it is \(\mathcal{I}^{*}\)-compact._ Proof.: Let \(Y\) be an \(\mathcal{I}\)-compact subset of \(X\), so that every \(\mathcal{I}\)-open cover of \(Y\) has a finite subcover. Since every \(\mathcal{I}\)-open set is an \(\mathcal{I}^{*}\)-open set, every \(\mathcal{I}^{*}\)-open cover of \(Y\) has a finite subcover; this shows that \(Y\) is \(\mathcal{I}^{*}\)-compact. **Definition 12**.: _[18] Let \(X\) be a normed space and \(\mathcal{I}\) be an ideal of \(\mathbb{N}\). A sequence \(\tilde{x}=(x_{n})\) in \(X\) is \(\mathcal{I}\)-bounded if there exists \(N>0\) such that_ \[\{n\in\mathbb{N}:\|x_{n}\|>N\}\in\mathcal{I}\] _holds._ **Definition 13**.: _Let \(X\) be a normed space and \(\mathcal{I}\) be an ideal of \(\mathbb{N}\). A sequence \(\tilde{x}=(x_{n})\) in \(X\) is said to be \(\mathcal{I}^{*}\)-bounded if there exists \(M\in\mathscr{F}(\mathcal{I})\) such that \((x_{n})_{n\in M}\) is bounded._ **Theorem 14**.: _Let \(X\) be a normed space and \(\mathcal{I}\) be an ideal of \(\mathbb{N}\). Then, every \(\mathcal{I}\)-bounded sequence is \(\mathcal{I}^{*}\)-bounded._ Proof.: Assume that \((x_{n})\subset X\) is an \(\mathcal{I}\)-bounded sequence in \(X\). Then, there exists \(K>0\) such that \[\{n:\|x_{n}\|>K\}\in\mathcal{I}\] holds. If we denote \[M:=\{n:\|x_{n}\|\leq K\},\] then \(M\in\mathscr{F}(\mathcal{I})\) and \(\|x_{n}\|\leq K\) for all \(n\in M\). Hence, \((x_{n})\) is an \(\mathcal{I}^{*}\)-bounded sequence. **Corollary 3**.: _Let \(X\) be a normed space and \(\mathcal{I}\) be an ideal of \(\mathbb{N}\). Then, every bounded sequence is \(\mathcal{I}^{*}\)-bounded._ Proof.: Let \(X\) be a normed space and \((x_{n})\subset X\) be a bounded sequence in \(X\). Then, the sequence \((x_{n})\) is \(\mathcal{I}\)-bounded [1], and by Theorem 14 it is \(\mathcal{I}^{*}\)-bounded. **Theorem 15**.: _Let \((X,\tau,\mathcal{I})\) be a topological space, let \(\mathbb{R}\) carry its usual topology, and let \(f:X\to\mathbb{R}\) be a sequentially \(\mathcal{I}^{*}\)-continuous mapping.
If \(A\) is a sequentially \(\mathcal{I}^{*}\)-compact subset of \(X\), then \(f(A)\) is \(\mathcal{I}^{*}\)-bounded._ Proof.: Suppose \(f(A)\) is not \(\mathcal{I}^{*}\)-bounded. Then there exists a sequence \((y_{n})\) in \(f(A)\) which is not \(\mathcal{I}^{*}\)-bounded, so that for every \(M>0\), \[\{n\in\mathbb{N}:|y_{n}|<M\}\notin\mathscr{F}(\mathcal{I}).\] Also, there exists a sequence \((x_{n})\subset A\) such that \(f(x_{n})=y_{n}\) holds for all \(n\in\mathbb{N}\). Since \(A\) is sequentially \(\mathcal{I}^{*}\)-compact, there exists a subsequence \((x_{n_{k}})\) of \((x_{n})\) which is \(\mathcal{I}^{*}\)-convergent to a point \(x_{0}\) of \(A\). Because \(f\) is a sequentially \(\mathcal{I}^{*}\)-continuous function, \(f(x_{n_{k}})\) is \(\mathcal{I}^{*}\)-convergent to \(f(x_{0})\). So, there exists \(E\in\mathscr{F}(\mathcal{I})\) (\(\mathbb{N}-E\in\mathcal{I}\)), \[E=\{m_{1}<m_{2}<\cdots<m_{k}<\cdots\},\] such that for any neighborhood \(U\) of \(f(x_{0})\), there exists \(N\in\mathbb{N}\) such that \(f(x_{n_{m_{k}}})\in U\) holds for all \(m_{k}>N\). This implies that \((y_{n_{k}})=(f(x_{n_{k}}))\) is \(\mathcal{I}\)-convergent to \(f(x_{0})\). Then, choosing a bounded neighborhood \(U\) of \(f(x_{0})\) and \(M>0\) large enough, \[\{k\in\mathbb{N}:|y_{n_{k}}|>M\}\in\mathcal{I},\] so \[\{k\in\mathbb{N}:|y_{n_{k}}|<M\}\in\mathscr{F}(\mathcal{I}),\] which contradicts the choice of \((y_{n})\). Hence \(f(A)\) is \(\mathcal{I}^{*}\)-bounded. **Lemma 1**.: _Let \(X\) and \(Y\) be topological spaces. \(X\) and \(Y\) are sequentially \(\mathcal{I}^{*}\)-compact if and only if \(X\times Y\) is._ Proof.: Let \(X\) and \(Y\) be sequentially \(\mathcal{I}^{*}\)-compact, and let \((x_{n},y_{n})\subset X\times Y\) be a sequence. First choose a subsequence \((x_{n_{k}})\) of \((x_{n})\) which \(\mathcal{I}^{*}\)-converges to a point \(x\in X\), and then a further subsequence \((y_{n_{k_{j}}})\) of \((y_{n_{k}})\) which \(\mathcal{I}^{*}\)-converges to a point \(y\in Y\). Then the subsequence \((x_{n_{k_{j}}},y_{n_{k_{j}}})\) \(\mathcal{I}^{*}\)-converges to \((x,y)\in X\times Y\). Hence \(X\times Y\) is sequentially \(\mathcal{I}^{*}\)-compact. Conversely, let \(P_{x}:X\times Y\to X\) and \(P_{y}:X\times Y\to Y\) be the two projections and let \(X\times Y\) be sequentially \(\mathcal{I}^{*}\)-compact. Then \(P_{x}(X\times Y)=X\) and \(P_{y}(X\times Y)=Y\) are sequentially \(\mathcal{I}^{*}\)-compact, which shows that \(X\) and \(Y\) are sequentially \(\mathcal{I}^{*}\)-compact. **Theorem 16**.: _Let \(X\) and \(Y\) be topological spaces. If \(X\) is sequentially \(\mathcal{I}^{*}\)-compact and \(f:X\to Y\) is a sequentially \(\mathcal{I}^{*}\)-continuous mapping, then \(f(X)\) is sequentially \(\mathcal{I}^{*}\)-compact._ **Theorem 17**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be two ideals of \(\mathbb{N}\) such that \(\mathcal{I}\subset\mathcal{J}\), and let \(X\) be a topological space. If \(U\subset X\) is \(\mathcal{J}^{*}\)-open, then it is \(\mathcal{I}^{*}\)-open._ Proof.: Let \(U\) be \(\mathcal{J}^{*}\)-open; then \(X-U\) is \(\mathcal{J}^{*}\)-closed and \(X-U=\overline{X-U}^{\mathcal{J}^{*}}\) holds. We must prove that \(\overline{X-U}^{\mathcal{I}^{*}}\subset X-U\). Let \(x\in\overline{X-U}^{\mathcal{I}^{*}}\) be an arbitrary point; then there exists a sequence \((x_{n})\subset X-U\) such that \((x_{n})\) is \(\mathcal{I}^{*}\)-convergent to \(x\). By Theorem 3, the sequence \((x_{n})\) \(\mathcal{J}^{*}\)-converges to \(x\). Hence, \(x\in\overline{X-U}^{\mathcal{J}^{*}}=X-U\). This shows that \(X-U\) is \(\mathcal{I}^{*}\)-closed, and therefore \(U\) is \(\mathcal{I}^{*}\)-open.
**Corollary 4**.: _Let \(\mathcal{I}\) and \(\mathcal{J}\) be two ideals of \(\mathbb{N}\) such that \(\mathcal{I}\subset\mathcal{J}\). Then, the topology \(\tau_{\mathcal{I}^{*}}\) is finer than the topology \(\tau_{\mathcal{J}^{*}}\)._ **Theorem 18**.: _Let \(\mathcal{I}\) be an ideal and \(X\) be a topological space. If every subsequence \((x_{n_{k}})\) of \((x_{n})\subseteq X\) is \(\mathcal{I}^{*}\)-convergent to a point \(x_{0}\in X\), then \((x_{n})\) is \(\mathcal{I}^{*}\)-convergent to \(x_{0}\)._ Proof.: Suppose \((x_{n})\) is not \(\mathcal{I}^{*}\)-convergent to the point \(x_{0}\). Then for every \(M\in\mathscr{F}(\mathcal{I})\) and every \(N\in\mathbb{N}\) there exists \(n_{k}>N\) such that \(x_{n_{k}}\notin U\), where \(U\) is some neighborhood of \(x_{0}\). Taking \(N=1\), we obtain a subsequence \((x_{n_{k}})\) with \(x_{n_{k}}\notin U\) for all \(n_{k}>1\); this means that there exists a subsequence of \((x_{n})\) which is not \(\mathcal{I}^{*}\)-convergent to the point \(x_{0}\), which is a contradiction. The converse of Theorem 18 is not true, as the following example shows: **Example 4**.: _Consider \((\mathbb{R},\tau_{e})\), the set of real numbers with its usual topology. Let \(\mathcal{I}\) be any ideal and let \(K\in\mathscr{F}(\mathcal{I})\) have infinite complement; define a sequence by_ \[y_{n}=\begin{cases}2^{n}&n\notin K\\ \frac{1}{n}&n\in K.\end{cases}\] _The sequence \((y_{n})\) is \(\mathcal{I}^{*}\)-convergent to zero, but the subsequence \(y_{n_{k}}=2^{n_{k}}\), \(n_{k}\notin K\), is not \(\mathcal{I}^{*}\)-convergent._ **Conclusion 1**.: _In this paper, we defined the \(\mathcal{I}^{*}\)-sequential topology on a topological space \((X,\tau)\) and we proved that the \(\mathcal{I}^{*}\)-sequential topology is finer than the \(\mathcal{I}\)-sequential topology; that is, \(\tau_{\mathcal{I}}\prec\tau_{\mathcal{I}^{*}}\)._ _We also observed that, under the condition that the space \(X\) has no limit point and \(\mathcal{I}\) is an admissible ideal, the \(\mathcal{I}\)-sequential topology and the \(\mathcal{I}^{*}\)-sequential topology coincide, i.e., \(\tau_{\mathcal{I}}=\tau_{\mathcal{I}^{*}}\)._ _Also, Lemma 2 in the paper [1] stated that "Every subsequence of an \(\mathcal{I}\)-convergent sequence in a topological space is also \(\mathcal{I}\)-convergent", but in Example 4 of this paper we saw that the analogous statement fails for \(\mathcal{I}^{*}\)-convergence._ _As a continuation of this study, some questions can be asked:_ \(Q1:\) _Is there a finer topology than the \(\mathcal{I}^{*}\)-sequential topology?_ \(Q2:\) _Is there any topology between the \(\mathcal{I}\)-sequential topology and the \(\mathcal{I}^{*}\)-sequential topology?_
2309.07784
Klein-bottle quadrupole insulators and Dirac semimetals
The Benalcazar-Bernevig-Hughes (BBH) quadrupole insulator model is a cornerstone model for higher-order topological phases. It requires \pi-flux threading through each plaquette of the two-dimensional Su-Schrieffer-Heeger model. Recent studies showed that particular \pi-flux patterns can modify the fundamental domain of momentum space from the shape of a torus to a Klein bottle with emerging topological phases. By designing different \pi-flux patterns, we propose two types of Klein-bottle BBH models. These models show rich topological phases, including Klein-bottle quadrupole insulators and Dirac semimetals. The phase with nontrivial Klein-bottle topology shows twined edge modes at open boundaries. These edge modes can further support second-order topology, yielding a quadrupole insulator. Remarkably, both models are robust against flux perturbations. Moreover, we show that different \pi-flux patterns dramatically affect the phase diagram of the Klein-bottle BBH models. Going beyond the original BBH model, Dirac semimetal phases emerge in Klein-bottle BBH models featured by the coexistence of twined edge modes and bulk Dirac points.
Chang-An Li, Junsong Sun, Song-Bo Zhang, Huaiming Guo, Björn Trauzettel
2023-09-14T15:18:17Z
http://arxiv.org/abs/2309.07784v4
# Klein-bottle quadrupole insulators and Dirac semimetals ###### Abstract The Benalcazar-Bernevig-Hughes (BBH) quadrupole insulator model is a cornerstone model for higher-order topological phases. It requires \(\pi\) flux threading through each plaquette of the two-dimensional Su-Schrieffer-Heeger model. Recent studies show that particular \(\pi\)-flux patterns can modify the fundamental domain of momentum space from the shape of a torus to that of a Klein bottle, with emerging topological phases. By designing different \(\pi\)-flux patterns, we propose two types of Klein-bottle BBH models. These models show rich topological phases, including Klein-bottle quadrupole insulators and Dirac semimetals. The phase with nontrivial Klein-bottle topology shows twined edge modes at open boundaries. These edge modes can further support second-order topology, yielding a quadrupole insulator. Remarkably, both models are robust against flux perturbations. Moreover, we show that different \(\pi\)-flux patterns dramatically affect the phase diagram of the Klein-bottle BBH models. Going beyond the original BBH model, Dirac semimetal phases emerge in Klein-bottle BBH models, featuring the coexistence of twined edge modes and bulk Dirac points. ## I Introduction The quadrupole insulator model proposed by Benalcazar, Bernevig, and Hughes [1; 2] (BBH) is an important model for the study of higher-order topological insulators [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. It exhibits a quadrupole insulator phase with quantized bulk multipole moments. This phase is characterized by corner states carrying fractional corner charges \(\pm e/2\), generalizing the bulk-boundary correspondence to higher order. The basic construction unit of quadrupole insulators is the Su-Schrieffer-Heeger (SSH) model [42]. The SSH model possesses quantized dipole moments in the bulk. Since a quadrupole consists of two separated dipoles, one may couple the SSH models in a particular way to obtain quantized quadrupole moments in the two-dimensional (2D) bulk. However, direct coupling of one-dimensional (1D) SSH chains from two directions does not work: it results in a gapless 2D SSH model with topological properties [43; 44]. To get an insulating phase with quantized quadrupole moments, the indispensable ingredients are \(\pi\) fluxes threading each plaquette of the entire 2D lattice. The \(\pi\)-flux pattern generates an insulating phase at half filling. It projectively modifies the mirror symmetries \(M_{x}\) and \(M_{y}\) from commuting, \([M_{x},M_{y}]=0\), to anticommuting, \(\{M_{x},M_{y}\}=0\). This change results in the BBH model with quantized bulk quadrupole moments [1; 2]. In our work, we vary the \(\pi\)-flux pattern of the BBH model. This modification gives rise to a new class of models, which we call Klein-bottle BBH models. The name stems from the fact that particular \(\pi\)-flux patterns modify the shape of the fundamental domain of momentum space from a torus to a Klein bottle [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57]. In the first type of Klein-bottle BBH model, the \(\pi\) fluxes are applied only at the even number of columns of plaquettes in the 2D SSH lattice [see Fig. 1(a)]. This model supports nontrivial Klein-bottle quadrupole insulator phases with corresponding boundary signatures such as quantized edge polarizations and fractional corner charges.
We show that the nontrivial Klein-bottle quadrupole insulator is robust against flux perturbations. In the second Klein-bottle BBH model, we instead apply \(\pi\) fluxes at the odd number of columns of plaquettes [see Fig. 7(a)]. This subtle difference of \(\pi\)-flux patterns dramatically changes the phase diagram of the system. The second model does not support nontrivial Klein-bottle quadrupole insulators anymore. It shares some features with the first model, such as the twined edge modes and corner-localized charges, but its insulator phase is trivial with vanishing bulk quadrupole moments. Interestingly, we identify emergent Klein-bottle Dirac semimetal phases in both models, characterized by the coexistence of twined edge modes and bulk Dirac points. In particular, four Dirac points are located at high-symmetry points of the BZ. They are related by glide-mirror symmetry in momentum space. There are no such Dirac semimetal phases in the original BBH model. The article is organized as follows. In Sec. II, we present the first Klein-bottle BBH model with emphasis on the nontrivial Klein-bottle quadrupole insulator phase. In Sec. III, we study Klein-bottle Dirac semimetal phases. In Sec. IV, we show the robustness of Klein-bottle quadrupole insulators against flux variations. In Sec. V, we consider the properties of the second Klein-bottle BBH model with a different \(\pi\)-flux pattern. Finally, we conclude our results with a discussion in Sec. VI. ## II Klein-bottle quadrupole insulators induced by \(\mathbb{Z}_{2}\) gauge fields ### The first Klein-bottle BBH model We consider the first model as sketched in Fig. 1(a). Compared with the original BBH model, there are no uniform \(\pi\) fluxes in the whole 2D lattice. The \(\pi\) fluxes only apply at the even number of columns of plaquettes. The tight-binding Hamiltonian reads \[H_{1}= \sum_{\mathbf{R}}\Big{[}t_{x}(C^{\dagger}_{\mathbf{R},1}C_{ \mathbf{R},3}+C^{\dagger}_{\mathbf{R},2}C_{\mathbf{R},4})\] \[+t_{y}(C^{\dagger}_{\mathbf{R},1}C_{\mathbf{R},4}+C^{\dagger}_{ \mathbf{R},2}C_{\mathbf{R},3})\] \[+(-tC^{\dagger}_{\mathbf{R},1}C_{\mathbf{R}+\hat{x},3}+tC^{ \dagger}_{\mathbf{R},4}C_{\mathbf{R}+\hat{x},2})\] \[+t(C^{\dagger}_{\mathbf{R},1}C_{\mathbf{R}+\hat{y},4}+C^{\dagger }_{\mathbf{R},3}C_{\mathbf{R}+\hat{y},2})\Big{]}+\text{H.c.}, \tag{1}\] where \(t_{x/y}\) and \(t\) are the corresponding hopping amplitudes along the \(x/y\) directions, as indicated in Fig. 1(a). The operators \(C^{\dagger}_{\mathbf{R},\zeta}\) (\(C_{\mathbf{R},\zeta}\)) are creation (annihilation) operators at unit cell \(\mathbf{R}\), with \(\zeta\in\{1,2,3,4\}\) labeling the orbital degrees of freedom. Note that the minus sign from the \(\pi\) fluxes is encoded in the term \(-tC^{\dagger}_{\mathbf{R},1}C_{\mathbf{R}+\hat{x},3}+\text{H.c.}\). We have set the lattice constant \(a=1\). In momentum space, the corresponding Bloch Hamiltonian reads \[H_{1}(\mathbf{k})= t_{x}\tau_{1}\sigma_{0}+[-t\cos k_{x}\gamma_{3}+t\sin k_{x} \gamma_{4}\] \[+(t_{y}+t\cos k_{y})\gamma_{1}-t\sin k_{y}\gamma_{2}], \tag{2}\] where \(\gamma_{j}=\tau_{1}\sigma_{j}\) and \(\gamma_{4}=\tau_{2}\sigma_{0}\) are the gamma matrices. The Pauli matrices \(\tau\) and \(\sigma\) correspond to different orbital degrees of freedom in the unit cell, and \(\mathbf{k}=(k_{x},k_{y})\) is the momentum in 2D. Different from the original BBH model, besides the four anticommuting Dirac matrices in Eq. (2), there is an extra term \(t_{x}\tau_{1}\sigma_{0}\).
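As a quick numerical companion to Eq. (2), our own illustrative sketch (not part of the original presentation) assembles the Bloch Hamiltonian from Kronecker products and compares its eigenvalues with the closed-form spectrum quoted in Eq. (3) below:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H1(kx, ky, tx, ty, t=1.0):
    """Bloch Hamiltonian of Eq. (2); tau acts on the first factor, sigma on the second."""
    g1, g2, g3, g4 = np.kron(sx, sx), np.kron(sx, sy), np.kron(sx, sz), np.kron(sy, s0)
    return (tx * np.kron(sx, s0)
            - t * np.cos(kx) * g3 + t * np.sin(kx) * g4
            + (ty + t * np.cos(ky)) * g1 - t * np.sin(ky) * g2)

t, tx, ty = 1.0, 0.6, 0.3                 # hoppings in units of t
kx, ky = 0.37, -1.12                      # an arbitrary test momentum
ey2 = ty**2 + 2 * ty * t * np.cos(ky) + t**2
E = sorted(s * np.sqrt(ey2 + t**2 + tx**2
                       + 2 * eta * tx * np.sqrt(ey2 + t**2 * np.cos(kx)**2))
           for s in (1, -1) for eta in (1, -1))
assert np.allclose(np.linalg.eigvalsh(H1(kx, ky, tx, ty)), E)

# Chiral symmetry (discussed below): gamma_5 = tau_3 sigma_0 anticommutes with H_1(k).
g5 = np.kron(sz, s0)
assert np.allclose(g5 @ H1(kx, ky, tx, ty) @ g5, -H1(kx, ky, tx, ty))
```

Scanning the same helper over \(\mathbf{k}\) reproduces band structures such as Fig. 1(d).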
The Hamiltonian respects chiral symmetry \(\gamma_{5}H_{1}(\mathbf{k})\gamma_{5}^{-1}=-H_{1}(\mathbf{k})\) with the chiral symmetry operator defined as \(\gamma_{5}\equiv-\gamma_{1}\gamma_{2}\gamma_{3}\gamma_{4}=\tau_{3}\sigma_{0}\). It has time-reversal symmetry \(\mathcal{T}H_{1}(\mathbf{k})\mathcal{T}^{-1}=H_{1}(-\mathbf{k})\) as well, where \(\mathcal{T}=K\) is just the complex conjugation operation. Therefore, particle-hole symmetry is also preserved. With the help of chiral symmetry, the energy spectra can be obtained as \[E^{\pm}_{\eta}(\mathbf{k})=\pm\sqrt{\epsilon^{2}_{y}(k_{y})+t^{2}+t^{2}_{x}+2 \eta t_{x}\sqrt{\epsilon^{2}_{y}(k_{y})+t^{2}\cos^{2}k_{x}}}, \tag{3}\] where \(\epsilon^{2}_{y}(k_{y})\equiv t^{2}_{y}+2t_{y}t\cos k_{y}+t^{2}\), and \(\eta=\pm 1\). The lower (upper) two bands are no longer degenerate unless \(t_{x}=0\), cf. Fig. 1(d). We find that there are insulating phases as well as semimetal phases, as shown in the phase diagram of Fig. 1(b), different from that of the BBH model. We focus on the insulating phases in this section and delegate the discussion of the semimetal phase to Sec. III. ### Klein-bottle nontrivial phases, glide edge spectra, and Wannier bands Due to the gauge degrees of freedom from the \(\pi\) fluxes, the hopping amplitudes are allowed to take phases \(\pm 1\). Thus, the \(\pi\) fluxes endow the system with a \(\mathbb{Z}_{2}\) gauge field. This gauge field can projectively modify the algebra of certain symmetry operators [58]. The Klein-bottle BBH model has mirror symmetry along \(x\) direction, \(\mathcal{M}_{x}H_{1}(\mathbf{k})\mathcal{M}_{x}=H_{1}(-k_{x},k_{y})\) with \(\mathcal{M}_{x}=\tau_{1}\sigma_{0}\). Along \(y\) direction, by contrast, the system does not have an exact mirror symmetry; it only has a mirror symmetry after a gauge transformation acting on the \(\mathbb{Z}_{2}\) gauge fields [2]. That is, \(\mathcal{M}_{y}=G(M_{y})M_{y}\) with a gauge transformation \(G(M_{y})\). However, the gauge transformation \(G(M_{y})\) is not compatible with the translation operation \(\mathcal{L}_{x}\). The relation between \(\mathcal{M}_{y}\) and \(\mathcal{L}_{x}\) becomes projectively modified to \(\{\mathcal{M}_{y},\mathcal{L}_{x}\}=0\) due to the \(\mathbb{Z}_{2}\) gauge fields [45], instead of \([\mathcal{M}_{y},\mathcal{L}_{x}]=0\) without gauge field. This fundamental change of the commutation relation introduces a nonsymmorphic symmetry in momentum space and makes the fundamental domain of momentum space a Klein bottle [45]. Figure 1: (a) Sketch of the lattice for the first Klein-bottle BBH model with a specific \(\pi\)-flux pattern. The dashed lines indicate a negative sign to account for the \(\pi\) fluxes. (b) Phase diagram in the parameter space \((t_{x},t_{y})\). The light blue region indicates the nontrivial Klein-bottle quadrupole insulators (KBQI), the light green region indicates the Dirac semimetal (DSM) phase. The region between the two dashed lines represents phases with nontrivial Klein-bottle topology. Other regions are normal insulators (NI). (c) Fundamental domain of momentum space in the BZ (blue color). The boundaries marked with same-colored arrows are to be identified accordingly, yielding a Klein bottle. (d) Band structure for the first Klein-bottle BBH model in the insulating phase. There are four non-degenerate bands. The parameters are taken as \(t_{x}=0.6,t_{y}=0.3\) in units of \(t\).
Specifically, we find \[\mathcal{M}_{y}H_{1}(\mathbf{k})\mathcal{M}_{y}^{-1}=H_{1}(k_{x}+\pi,-k_{y}), \tag{4}\] where \(\mathcal{M}_{y}=\tau_{1}\sigma_{1}\) in the chosen basis. This corresponds to a glide-mirror symmetry in momentum space. Hence, the momentum \((k_{x},k_{y})\) is equivalent to \((k_{x}+\pi,-k_{y})\). Consequently, the original BZ (torus) is reduced to two equivalent fundamental domains [Klein bottles in Fig. 1(c)]. In the following, we use the term _Klein-bottle BZ_ to indicate the fundamental domain of momentum space. Consider a ribbon geometry along \(x\) direction with open boundary conditions along \(y\) direction. We find that there are edge modes residing within the bulk bands. If we resolve their spatial distributions, we find that the two pairs of edge modes emerge at different boundaries. Those edge modes are gapped, as shown in Figs. 2(a) and 2(b), similar to those in the BBH model. However, there are essential differences. The two pairs of edge spectra have a relative momentum shift \(\delta k_{x}=\pi\), which is due to the glide-mirror symmetry as stated above. Due to the relative momentum shift \(\delta k_{x}=\pi\), two branches of edge modes from different pairs twine around each other from \(k_{x}=-\pi\) to \(k_{x}=\pi\). We call them _twined edge modes_. Moreover, the edge spectra cross the bulk continuum without hybridization. The energy spectra of the twined edge modes can be obtained as [29; 59] \[E_{\mathrm{b}}(k_{x})=\pm\sqrt{t_{x}^{2}+t^{2}+2t_{x}t\cos(k_{x}+\theta)}, \tag{5}\] where \(\theta=0/\pi\) parametrizes the two different pairs of edge modes. The existence of the twined edge modes can be attributed to a topological invariant. In Ref. [45], the corresponding topological invariant is defined at the boundary of the Klein-bottle BZ as \(w=\frac{1}{2\pi}[\gamma_{y}(k_{x}=0)+\gamma_{y}(k_{x}=\pi)]\) mod 2, where \(\gamma_{y}(k_{x})\) is the Berry phase of the reduced 1D Hamiltonian \(h(k_{y})\) at a specific \(k_{x}\). This topological invariant is closely related to 1D charge polarization [45]. Note that this invariant becomes ill-defined once the Klein-bottle BZ is broken. This happens when the value of the magnetic flux deviates from \(\pi\), because in that case the relation \(\{\mathcal{M}_{y},\mathcal{L}_{x}\}=0\) does not hold anymore; the twined edge modes, which serve as a practical indicator of Klein-bottle insulators, may nevertheless survive such flux deviations. We alternatively employ the method of Wilson loops to characterize the topology of Klein-bottle phases. At half filling, we find that the bulk polarization of the system vanishes. Since the lowest two bands are not degenerate in this model and the twined edge modes reach the gap between these two bands, we consider the polarization of the _lowest_ energy band (similar results can be obtained for the second band). We consider the ribbon along \(x\) direction. Thus, the bulk polarization \(p_{y}^{\kappa}\) along \(y\) direction determines the existence of twined edge modes. In the nontrivial phase with \(p_{y}^{\kappa}=\frac{1}{2}\), there are twined edge modes. In the trivial phase with \(p_{y}^{\kappa}=0\), no edge modes exist. The polarization \(p_{y}^{\kappa}\) is closely related to the Wannier center. We obtain the Wannier center from the Wilson loop method. To this end, we define the Wilson loop along \(y\) direction at a specific \(k_{x}\) in the Klein-bottle BZ, i.e., \(W_{y}(k_{x})\). Then the eigenvalues of \(W_{y}(k_{x})\) yield the Wannier center \(\nu_{y}(k_{x})\).
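The Wilson-loop construction just described is straightforward to implement for a single band. The sketch below is our own illustration (it reuses the `H1` helper from the earlier snippet, and the overall sign convention of \(\nu_{y}\) may differ from that of the figures):

```python
import numpy as np  # assumes H1(kx, ky, tx, ty) from the previous sketch

def wannier_center_y(kx, tx, ty, n_ky=400, band=0):
    """Abelian Wilson loop W_y(kx) for a single band; returns nu_y(kx) in [0, 1)."""
    kys = np.linspace(-np.pi, np.pi, n_ky, endpoint=False)
    # Eigenvector of the chosen band at each k_y (eigh orders bands by energy).
    u = [np.linalg.eigh(H1(kx, ky, tx, ty))[1][:, band] for ky in kys]
    W = 1.0 + 0.0j
    for i in range(n_ky):
        W *= np.vdot(u[(i + 1) % n_ky], u[i])   # link overlaps around the closed loop
    return (-np.angle(W) / (2 * np.pi)) % 1.0

# Sample the Wannier band of the lowest energy band, as in Fig. 2(d):
for kx in (-np.pi / 2, 0.0, np.pi / 2):
    print(kx, wannier_center_y(kx, tx=0.6, ty=0.3))
```

Summing \(\nu_{y}(k_{x})\) over the discrete momenta \(k_{x}\in[-\pi,0]\) then yields the invariant \(p_{y}^{\kappa}\) of Eq. (6) below.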
The Wannier center indicates the average position of electrons relative to the center of the unit cell. The set of Wannier centers along \(y\) direction as a function of \(k_{x}\) forms the Wannier bands \(\nu_{y}(k_{x})\). The topological invariant for the Klein-bottle insulator can be defined as \[p_{y}^{\kappa}=\frac{2}{L_{x}}\sum_{k_{x}=-\pi}^{0}\nu_{y}(k_{x}), \tag{6}\] which is the bulk polarization of the lowest energy band. Note that we can take the sum with respect to \(k_{x}\) from \(-\pi\) to 0; the other range, from 0 to \(\pi\), can be obtained by symmetry. Here, \(L_{x}\) is the number of unit cells in \(x\) direction. The topological invariant \(p_{y}^{\kappa}\) should not be changed by flux perturbations as long as chiral symmetry is preserved. The Wannier band \(\nu_{y}(k_{x})\) for the lowest energy band is plotted in Fig. 2(d). For the nontrivial Klein-bottle insulator phase, the Wannier band \(\nu_{y}(k_{x})\) has to cross \(\nu_{y}=\frac{1}{2}\). Due to the periodicity of the BZ, \(\nu_{y}(k_{x})\) crosses \(\nu_{y}=\frac{1}{2}\) an even number of times. Therefore, we obtain the bulk polarization \(p_{y}^{\kappa}=\frac{1}{2}\) from the lowest energy band as a topological invariant for Klein-bottle insulators. Note that the Wannier band crosses \(\nu_{y}=\frac{1}{2}\) once within the domain \(k_{x}\in[0,\pi]\), consistent with the winding number defined in Ref. [45]. For the trivial insulator case, it does not cross the value \(\nu_{y}=\frac{1}{2}\) at all, thus \(p_{y}^{\kappa}=0\). We emphasize that at \(k_{x}=\pm\frac{\pi}{2}\), the Wannier center is fixed at \(0\) or \(\frac{1}{2}\). This is because at these special points the Hamiltonian \(H_{1}(\pm\frac{\pi}{2},k_{y})\) has space-time inversion symmetry, which can quantize the Wannier center [60]. From the topological invariant \(p_{y}^{\kappa}\), we find that the nontrivial Klein-bottle phase exists for \[|t_{y}|<1 \Rightarrow p_{y}^{\kappa}=\frac{1}{2}, \tag{7}\] as indicated in Fig. 1(b). Note that this phase regime contains Klein-bottle insulators (insulating phases characterized by twined edge modes) as well as Klein-bottle Dirac semimetals (semimetals with twined edge modes). Figure 2: (a) Spectra of a ribbon along \(x\) direction. The blue and red lines indicate the twined edge modes. (b) Wave function distribution of the twined edge modes along \(y\) direction of the ribbon. (c) Similar to (a) but in a topologically trivial case without edge modes. (d) The Wannier spectrum \(\nu_{y}(k_{x})\) of the lowest energy band. The parameters are taken as \(t_{x}=0.6,t_{y}=0.3\) for (a), (b) and (d), and \(t_{x}=1.2,t_{y}=2\) for (c), in units of \(t\). ### Nontrivial Klein-bottle quadrupole insulators Twined edge modes exist in the nontrivial Klein-bottle phases as a consequence of first-order topology. In our model, the twined edge modes are gapped as well. However, there is a relative momentum shift of the spectra at different edges. These spectra can touch, cross or be hidden in the bulk continuum. An intriguing question is whether such twined edge modes can support second-order topology, characterized by corner states and fractional charges. To characterize the system, we first calculate the quadrupole moment \(q_{xy}\). Afterwards, we check the corresponding edge and corner signatures.
The quadrupole moment can be obtained in real space as [18; 29; 34; 61; 62] \[q_{xy}=\frac{1}{2\pi}\text{Im}\,\text{log}\left[\text{det}(U^{\dagger}\hat{Q}U) \sqrt{\text{det}(Q^{\dagger})}\right], \tag{8}\] where \(\hat{Q}\equiv\exp[i2\pi\hat{q}_{xy}]\) with \(\hat{q}_{xy}=\hat{x}\hat{y}/(L_{x}L_{y})\) being the quadrupole moment density operator per unit cell at position \(\mathbf{R}=(x,y)\). Here, \(\hat{x}\) (\(\hat{y}\)) is the position operator along \(x\) (\(y\)) direction and \(L_{x(y)}\) is the corresponding system size. The matrix \(U\) is constructed by packing all the occupied eigenstates in a column-wise way. The quantization of \(q_{xy}\) is protected by chiral symmetry [29; 34]. For an insulating phase, it is a nontrivial quadrupole insulator when \(q_{xy}=\frac{1}{2}\). We find the nontrivial Klein-bottle quadrupole insulator phase in the regime \(|t_{x}|<1\) and \(|t_{y}|<1\) [see Fig. 1(b)], similar to the BBH model. The edge polarizations \(p_{x}^{\text{edge}}\) and \(p_{y}^{\text{edge}}\) can also help to detect the topologically nontrivial phase. Take \(p_{x}^{\text{edge}}\) as an example. Consider a ribbon along \(x\) direction with width \(L_{y}\) along \(y\) direction. Employing the Wilson loop method, the edge polarization is calculated as [1; 2; 32] \[p_{x}^{\text{edge}}=\sum_{y=1}^{L_{y}/2}p_{x}(y), \tag{9}\] where \(p_{x}(y)\) is the distribution of polarization along \(y\) direction. We calculate this spatially resolved polarization as \[p_{x}(y)=\sum_{j=1}^{2L_{y}}\rho^{j}(y)\nu_{x}^{j}, \tag{10}\] where \(\rho^{j}(y)=\frac{1}{L_{x}}\sum_{k_{x},\zeta}|\sum_{n}[u_{k_{x}}^{n}]^{y,\zeta }[\nu_{k_{x}}^{j}]^{n}|^{2}\). Figure 3: (a) Edge polarization \(p_{x}\) along \(y\); (b) Wannier center \(\nu_{x}\) for different eigenstates; (c) edge polarization \(p_{y}\) along \(x\); (d) Wannier center \(\nu_{y}\) for different eigenstates. The parameters are taken as \(t_{x}=0.6,t_{y}=0.3\) in units of \(t\). Figure 4: (a) Four zero-energy corner modes in the spectrum. (b) Electron charge density distribution on the lattice. It gives fractional corner charges \(\pm\frac{e}{2}\). The parameters are taken as \(t_{x}=0.6,t_{y}=0.3\) in units of \(t\). Here, \([\nu_{k_{x}}^{j}]^{n}\) is the \(n\)-th component of the \(j\)-th Wilson-loop
It is also found that the corner charges are fractionalized at \(\pm e/2\), cf. Fig. 4(b). ## III Klein-bottle Dirac semimetals on square lattices Beside the Klein-bottle insulating phase, there exists also a Dirac semimetal phase, as shown in Fig. 5. There are four Dirac points residing in the BZ. From the energy band solutions in Eq. (3), to obtain band touching at zero energy (due to chiral symmetry), we require \(\cos^{2}k_{x}=1\), i.e., \(k_{x}\in\{0,\pi\}\). In this case, the energy spectra are simplified as \[E^{\pm}_{\eta}({\bf k})=\pm\left(\sqrt{\epsilon^{2}_{y}(k_{y})+t^{2}}+2\eta t_ {x}\right), \tag{11}\] Therefore, four Dirac points are located at \[(K_{x},K_{y})=\left(0/\pi,\ \ \pm\arccos\left[\frac{t_{x}^{2}-t_{y}^{2}-2t^{2} }{2t_{y}t}\right]\right). \tag{12}\] The valid solutions for \(K_{y}\) give rise to the Klein-bottle Dirac semimetal phase, determined by the overlap of two hyperbolas in the parameter space \((t_{x},t_{y})\) as \[(t_{y}\pm t)^{2}-t_{x}^{2}=t^{2}, \tag{13}\] as shown in the Fig. 1(b). The Dirac semimetal phase that appears in this model has no counterpart in the original BBH model. The Dirac points are located at the boundary of the Klein-bottle BZ. They come in two dual pairs related by glide-mirror symmetry \(\mathcal{M}_{y}\) as \(\{(0,\pm K_{y})\Leftrightarrow(\pi,\mp K_{y})\}\). We know that the topological protection of Dirac points is typically related to a winding number defined on a path enclosing the Dirac points. Due to chiral symmetry, rewriting the Hamiltonian Eq. (2) in an off-diagonal form, this leads to \[H_{1}({\bf k})=\left(\begin{array}{cc}0&q({\bf k})\\ q^{\dagger}({\bf k})&0\end{array}\right), \tag{14}\] where \[q({\bf k})=\left(\begin{array}{cc}-t_{x}+te^{ik_{x}}&t_{y}+te^{ik_{y}}\\ t_{y}+te^{-ik_{y}}&t_{x}+te^{-ik_{x}}\end{array}\right). \tag{15}\] The winding number for the Dirac points is defined as \(\omega=\frac{1}{2\pi i}\oint_{\ell}d{\bf k}\cdot{\rm Tr}[q^{-1}({\bf k}) \nabla_{\bf k}q({\bf k})]\)[63; 64], where the loop \(\ell\) is chosen such that it encloses a single Dirac point. The twined edge modes from nontrivial Klein-bottle topology also appear in the Dirac semimetal phase, as shown in the Fig. 5(b). The bulk Dirac points coexist with edge Dirac points, located at different energies. For certain parameters, they are hidden in the bulk bands but not directly merge with the continuum. From the wave function distribution, we find that the edge modes are well-localized at boundaries even if they coexist with bulk bands. In the Dirac semimetal phase with a trivial Klein-bottle topology for \(|t_{y}|>1\), the twined edge modes disappear. ## IV Robustness of nontrivial quadrupole insulators against flux perturbations In the previous section, we show that the realization of Klein-bottle quadrupole insulators relies on exact threading of \(\pi\) fluxes on even numbers of plaquettes in the 2D lattice. We now address the question of how robust nontrivial Klein-bottle quadrupole insulators are against flux Figure 5: (a) Four Dirac points in the band structure. (b) Coexistence of edge Dirac points and bulk edge Dirac points in the spectra of a ribbon along \(x\) direction. The parameters are taken as \(t_{x}=1.2,t_{y}=0.6\) in units of \(t\). deviations from the value of \(\pi\). We check the stability of the nontrivial Klein-bottle quadrupole insulators in two cases: In the first one, the flux \(\phi\) deviates from \(\pi\) but is still uniform in the lattice. This helps us to determine how general the nontrivial phases are. 
In the second one, the flux value is chosen randomly, fluctuating around \(\pi\). ### Uniform flux deviations Let us first check the evolution of the twined edge modes for a model with flux \(\phi\) deviating from \(\pi\) uniformly. In the original case, the well-localized twined edge modes cross the bulk bands without hybridization [Fig. 2(a)]. When \(\phi=0.95\pi\), the twined edge modes almost keep their form [Fig. 6(a)], but the overlapping parts start to hybridize. In this case, the Klein-bottle BZ manifold is broken because the glide-mirror symmetry does not hold anymore. Define \(\Delta\phi=\pi-\phi\). As \(\Delta\phi\) increases further, the twined edge modes hybridize more strongly with the other bands. This may be explained as the hybridization of Landau levels (flat bands) and twined edge modes. Now, we check the robustness of Klein-bottle quadrupole insulators against flux perturbations. To this end, we employ the quadrupole moment \(q_{xy}\) and the corresponding corner states. To be specific, we take \(t_{x}=0.6\) and \(t_{y}=0.3\) in the calculations. It is demonstrated in Fig. 6(b) that \(q_{xy}\) stays at \(q_{xy}=\frac{1}{2}\) even if \(\Delta\phi\) grows to a relatively large value. This result indicates that the Klein-bottle quadrupole insulator is robust against flux deviations. It also suggests that nontrivial quadrupole insulators exist in an extended range of flux values \(\phi\), not just at \(\phi=\pi\). Correspondingly, there are four zero-energy corner states in the gap of the spectrum [see Fig. 6(c)] with a quantized fractional charge at each corner. The robustness of nontrivial Klein-bottle quadrupole insulators can be attributed to the persistence of the twined edge modes under flux perturbations. They remain almost intact and gapped [Fig. 6(a)]. ### Random flux The magnetic flux can also be chosen randomly at each plaquette [65]. We assume that the magnetic flux \(\phi\) fluctuates around \(\pi\). The flux deviation at each plaquette takes a random value from the uniformly distributed range \([-U_{d},U_{d}]\), with \(U_{d}\) a disorder strength. We also check the quadrupole moment \(q_{xy}\) under random flux. From Fig. 6(d), \(q_{xy}\) remains at \(\frac{1}{2}\) as \(U_{d}\) increases from 0 up to \(0.2\pi\). When calculating \(q_{xy}\), we can also investigate a single disorder configuration. We then find that the zero-energy midgap states and corner charges remain robust. Together with \(q_{xy}\), these findings suggest the strong robustness of nontrivial quadrupole insulators against random flux. ## V Trivial Klein-bottle quadrupole insulators with corner states Now, we consider the second Klein-bottle BBH model as sketched in Fig. 7(a). The \(\pi\) fluxes apply only to the odd number of columns of plaquettes, instead of the even number of columns. This subtle change of the \(\pi\)-flux pattern makes a strong difference. It gives rise to a totally different Hamiltonian. The nontrivial quadrupole insulators do not appear anymore. There are also insulators and Dirac semimetals in the phase diagram [Fig. 7(b)], but the insulator is a trivial insulator with \(q_{xy}=0\), although it supports corner charges.
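The quadrupole diagnostics quoted in this and the previous section rest on the real-space formula Eq. (8), which admits a compact implementation. The sketch below is our own illustration: it assumes a gapped real-space Hamiltonian \(H\) at half filling, supplied together with arrays `xs`, `ys` of unit-cell coordinates for each orbital (the construction of \(H\), e.g. with random fluxes, is not shown), and the branch of the complex square root is a known subtlety of this formula:

```python
import numpy as np

def quadrupole_qxy(H, xs, ys, Lx, Ly):
    """Evaluate Eq. (8) for a real-space Hamiltonian H (Hermitian matrix).
    xs, ys: unit-cell coordinates of each orbital; half filling is assumed."""
    energies, states = np.linalg.eigh(H)
    U = states[:, energies < 0]                      # occupied eigenstates, column-wise
    Q = np.diag(np.exp(2j * np.pi * xs * ys / (Lx * Ly)))
    val = np.linalg.det(U.conj().T @ Q @ U) * np.sqrt(np.linalg.det(Q.conj().T))
    return (np.angle(val) / (2 * np.pi)) % 1.0       # Im log(...) / (2*pi), taken mod 1
```

In the nontrivial regime one expects the output \(\frac{1}{2}\), and \(0\) in the trivial one, matching Figs. 6(b) and 6(d).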
The tight-binding Hamiltonian of the second Klein-bottle BBH model reads \[H_{2}= \sum_{\mathbf{R}}\Big{[}(-t_{x}C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R},3}+t_{x}C_{\mathbf{R},2}^{\dagger}C_{\mathbf{R},4})\] \[+t_{y}(C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R},4}+C_{\mathbf{R},2}^{\dagger}C_{\mathbf{R},3})\] \[+t(C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R}+\hat{x},3}+C_{\mathbf{R},4}^{\dagger}C_{\mathbf{R}+\hat{x},2})\] \[+t(C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R}+\hat{y},4}+C_{\mathbf{R},3}^{\dagger}C_{\mathbf{R}+\hat{y},2})\Big{]}+\text{H.c.}, \tag{16}\] where the minus sign from the \(\pi\) fluxes is taken into account in the term \(-t_{x}C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R},3}+\text{H.c.}\). In momentum space, the Bloch Hamiltonian corresponding to Eq. (16) becomes \[H_{2}(\mathbf{k})= -t_{x}\tau_{1}\sigma_{3}+t\cos k_{x}\tau_{1}\sigma_{0}-t\sin k_{x}\tau_{2}\sigma_{3}\] \[+(t_{y}+t\cos k_{y})\tau_{1}\sigma_{1}-t\sin k_{y}\tau_{1}\sigma_{2}. \tag{17}\] The bulk spectrum of Eq. (17) reads \[E_{\eta}^{\pm}(\mathbf{k})=\pm\sqrt{\epsilon_{y}^{2}(k_{y})+t^{2}+t_{x}^{2}+2\eta t\sqrt{\epsilon_{y}^{2}(k_{y})+t_{x}^{2}\cos^{2}k_{x}}}, \tag{18}\] where \(\epsilon_{y}^{2}(k_{y})\equiv t_{y}^{2}+2t_{y}t\cos k_{y}+t^{2}\) is defined in the same way as before. This model also respects chiral symmetry, \(\gamma_{5}H_{2}(\mathbf{k})\gamma_{5}^{-1}=-H_{2}(\mathbf{k})\). The \(\pi\)-flux gauge field gives rise to nonsymmorphic symmetry in momentum space as \[\mathcal{M}_{y}^{{}^{\prime}}H_{2}(\mathbf{k})\mathcal{M}_{y}^{{}^{\prime}-1}=H_{2}(k_{x}+\pi,-k_{y}), \tag{19}\] where \(\mathcal{M}_{y}^{{}^{\prime}}=\tau_{2}\sigma_{2}\) in the chosen basis.
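As a quick numerical cross-check of Eqs. (17) and (18), the sketch below assembles \(H_{2}(\mathbf{k})\) from Kronecker products of Pauli matrices, assuming \(\tau\) acts on the first factor and \(\sigma\) on the second (an ordering convention, not fixed by the text), and verifies that its eigenvalues match the closed-form bands; the parameter values are illustrative:

```python
import numpy as np

# Pauli matrices; tau acts on the first Kronecker factor, sigma on the second
s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]
t, tx, ty = 1.0, 0.8, 0.2   # illustrative parameter values

def H2(kx, ky):
    """Bloch Hamiltonian of Eq. (17)."""
    return (-tx*np.kron(s[1], s[3]) + t*np.cos(kx)*np.kron(s[1], s[0])
            - t*np.sin(kx)*np.kron(s[2], s[3])
            + (ty + t*np.cos(ky))*np.kron(s[1], s[1])
            - t*np.sin(ky)*np.kron(s[1], s[2]))

def bands(kx, ky):
    """Closed-form spectrum of Eq. (18)."""
    ey2 = ty**2 + 2*ty*t*np.cos(ky) + t**2
    E = [sign*np.sqrt(ey2 + t**2 + tx**2
                      + 2*eta*t*np.sqrt(ey2 + tx**2*np.cos(kx)**2))
         for eta in (1, -1) for sign in (1, -1)]
    return np.sort(E)

kx, ky = 0.7, -1.3   # arbitrary test momentum
assert np.allclose(np.linalg.eigvalsh(H2(kx, ky)), bands(kx, ky))
```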
The phase diagram of the second Klein-bottle BBH model is plotted in Fig. 7(b). The Dirac semimetal phase is located inside the two circles in parameter space \((t_{x},t_{y})\), \[t_{x}^{2}+(t_{y}\pm t)^{2}=t^{2}. \tag{20}\] The Dirac points are at \[(K_{x},K_{y})=\left(0/\pi,\ \ \pm\arccos\left[-\frac{t_{x}^{2}+t_{y}^{2}}{2t_{y}t}\right]\right). \tag{21}\] In the Klein-bottle Dirac semimetal phases, there are twined edge modes on a ribbon geometry with open boundary. The other regions are insulating phases. The nontrivial Klein-bottle phase is bounded by \(|t_{y}|<1\). Comparing with the first model in Eq. (2), the different \(\pi\)-flux pattern in the second Klein-bottle BBH model leads to totally different matrix structures in Eq. (17). Thus, the corresponding energy bands are quite different, giving rise to significantly different phase diagrams. Moreover, we notice the exchange of variables \(t\longleftrightarrow t_{x}\) in the energy bands compared to Eq. (3). This can be effectively viewed as the exchange of the dimerized hopping strengths of the SSH model along \(x\) direction, which makes the topological properties different from the first model. In an insulator phase, we find the edge polarizations take the values \((p_{x}^{\rm edge},p_{y}^{\rm edge})=(\frac{1}{2},0)\). This anisotropic property of edge polarizations bears similarity to weak topological insulators. In Fig. 8(a), we plot the edge polarizations \(p_{y}^{\rm edge}\), together with the Wannier values of the eigenstates. The edge polarization \(p_{y}^{\rm edge}\) is zero [the nontrivial \(p_{x}^{\rm edge}=\frac{1}{2}\) is not shown here]. Consider open boundary conditions along both \(x\) and \(y\) directions. Then, the edge polarizations are terminated at corners. This leads to charges \(Q^{\rm corner}=\pm\frac{e}{2}\) localized at the corners [see Fig. 8(b)]. There are four zero-energy midgap states in the energy spectra when \((t_{x},t_{y})\) is located in the light green region of the phase diagram shown in Fig. 7(b). However, if we calculate the topological invariant \(q_{xy}\), we find that the second Klein-bottle BBH model is a trivial Klein-bottle insulator. Remarkably, it exhibits twined edge modes (first-order) and corner-localized charges, but it has a vanishing quadrupole moment \(q_{xy}=0\) (second-order topological invariant). The defining property of a quadrupole insulator, \(|q_{xy}|=|p_{x}^{\rm edge}|=|p_{y}^{\rm edge}|=|Q^{\rm corner}|\), is not satisfied [2]. The corner charges instead follow \(Q^{\rm corner}=p_{x}^{\rm edge}+p_{y}^{\rm edge}\). Thus the corner charges and edge polarizations are pure surface effects, unrelated to bulk quadrupole moments [2].

Figure 7: (a) Sketch of the lattice for the second Klein-bottle BBH model with a different \(\pi\)-flux pattern. The dashed lines indicate a negative hopping sign to account for the \(\pi\) fluxes. (b) Phase diagram in the parameter space \((t_{x},t_{y})\). The light blue region indicates the Dirac semimetal (DSM) phase, the light green region indicates the trivial quadrupole insulators (TQI). The region between the two dashed lines represents phases with nontrivial Klein-bottle topology. Other regions are the normal insulators (NI).

Figure 8: (a) Edge polarization \(p_{y}\) along \(x\) (left panel) and Wannier center \(\nu_{y}\) for different eigenstates (right panel) in the second Klein-bottle BBH model. (b) Electron density distribution in the lattice. The parameters are taken as \(t_{x}=0.8,t_{y}=0.2\) in units of \(t\) for all plots.

We further analyze the robustness of twined edge modes in the Klein-bottle phases when the magnetic flux deviates from \(\pi\). Consider a ribbon along \(x\) direction. In the Klein-bottle Dirac semimetal phases, the twined edge modes reside between the bulk bands and can be detached from them. When we gradually change \(\phi\) from \(\pi\), the twined edge modes persist and are detached from the bulk modes even up to a relatively large \(\Delta\phi\), as shown in Fig. 9(b).

## VI Discussion and Conclusions

We show that the variation of \(\pi\)-flux patterns changes the topology of the considered system dramatically. Hence, particular \(\pi\)-flux patterns may help to search for novel topological phases. The Klein-bottle quadrupole insulator requires only half of the total \(\pi\) fluxes as compared to the original BBH model, simplifying experimental realizations of nontrivial quadrupole insulators. The manipulation of magnetic flux is possible in different synthetic systems. Therefore, our predictions are experimentally relevant. In summary, we have proposed the existence of nontrivial Klein-bottle quadrupole insulators and Dirac semimetals in 2D. The twined edge modes, which support the second-order topology, appear as a characteristic signature of Klein-bottle systems. We have verified the robustness of the nontrivial quadrupole insulators against flux perturbations.
In the Klein-bottle Dirac semimetal phases, we discover the coexistence of edge Dirac points with bulk Dirac points.

Figure 9: (a) Energy spectrum of the second Klein-bottle BBH model on a ribbon along \(x\) direction. The edge and bulk Dirac points coexist. (b) Same as panel (a) but with the flux \(\phi=0.9\pi\). The parameters are taken as \(t_{x}=t_{y}=0.2\) in units of \(t\).

## VII Acknowledgments

C.A.L. thanks Jan Budich and Bo Fu for helpful discussions. This work has been financially supported by the Würzburg-Dresden Cluster of Excellence ct.qmat, Project-id 390858490, the DFG (SFB 1170), and the Bavarian Ministry of Economic Affairs, Regional Development and Energy within the High Tech Agenda Project "Bausteine für das Quanten Computing auf Basis Topologischer Materialen". H. G. acknowledges support from the NSFC grant No. 12074022.

## Appendix A Revisit of the Klein-bottle insulator model

For a better understanding of our results, we revisit the Klein-bottle insulator proposed in Ref. [45]. The Hamiltonian in 2D reads \[H_{0}(\mathbf{k})=\left(\begin{array}{cccc}\epsilon&[q_{1}^{x}(k_{x})]^{*}&[q_{+}^{y}(k_{y})]^{*}&0\\ q_{1}^{x}(k_{x})&\epsilon&0&[q_{-}^{y}(k_{y})]^{*}\\ q_{+}^{y}(k_{y})&0&-\epsilon&[q_{2}^{x}(k_{x})]^{*}\\ 0&q_{-}^{y}(k_{y})&q_{2}^{x}(k_{x})&-\epsilon\end{array}\right), \tag{10}\] where the parameters are defined as \(q_{\ell}^{x}(k_{x})=t_{\ell 1}^{x}+t_{\ell 2}^{x}e^{ik_{x}}\) with \(\ell=1,2\) and \(q_{\pm}^{y}(k_{y})=t_{1}^{y}\pm t_{2}^{y}e^{ik_{y}}\). The staggered on-site potential \(\pm\epsilon\) opens a band gap at the finite-energy Dirac points. The inclusion of the term \(H^{\prime}(\mathbf{k})=\lambda\cos k_{y}\sigma_{1}\tau_{2}+\lambda\sin k_{y}\sigma_{2}\tau_{2}\) breaks time-reversal symmetry.

Figure 10: (a–e) and (g) Energy spectra of the model specified in Eq. (10) on a ribbon along \(y\) direction. The red lines indicate the twined edge modes. (f) Wannier bands of the lowest energy band (blue) and the second energy band (red), corresponding to panel (c). (h) and (i) correspond to (g) but with the flux deviating from \(\pi\). The other parameters are \(t_{11}^{x}=t_{22}^{x}=1\), \(t_{12}^{x}=t_{21}^{x}=3.5\), \(t_{1}^{y}=2\), and \(t_{2}^{y}=1.5\), the same as in Ref. [45]. The values of \(\epsilon\) and \(\lambda\) are labeled on each plot.

In Fig. 10, we plot the spectrum of the model. In the limit \(\epsilon=0\) and \(\lambda=0\), there is an energy gap close to zero energy, while Dirac points formed by the first and second (third and fourth) bands emerge at finite energy. In this case, the nontrivial polarization gives zero-energy edge modes on a ribbon along \(y\) direction [see Fig. 10(a)], similar to the zero-energy modes in zigzag graphene ribbons [66; 67]. A ribbon along the \(x\) direction instead shows a trivial gap without edge modes. This anisotropic property is the same as in the inclined 2D SSH model [44]. If we turn on the \(\lambda\) term, the flat edge modes become dispersive. If we turn on the on-site potential \(\epsilon\), we find the Dirac points at finite energy are gapped out. Then, there are two pairs of twined edge modes: one pair close to zero energy and the other pair at finite energy. The appearance of twined edge modes can be understood from the Wannier spectra in Fig. 10(f). One branch of Wannier bands exhibits nontrivial winding around \(\nu_{x}=\frac{1}{2}\) and the other one exhibits trivial winding around \(\nu_{x}=0\) instead. The total polarization is \(p_{x}=\frac{1}{2}\).
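For concreteness, Eq. (10) can be transcribed directly into a numerical Bloch Hamiltonian as sketched below, using the hopping parameters quoted in the caption of Fig. 10. The \(\sigma\otimes\tau\) ordering in \(H^{\prime}(\mathbf{k})\) is an assumption here, since the excerpt does not fix the basis convention:

```python
import numpy as np

# hopping parameters quoted in the caption of Fig. 10 (Ref. [45])
t11x, t22x, t12x, t21x = 1.0, 1.0, 3.5, 3.5
t1y, t2y = 2.0, 1.5
s = [np.eye(2), np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]])]

def H0k(kx, ky, eps=0.0, lam=0.0):
    """Direct transcription of Eq. (10) plus the time-reversal-breaking H'(k).
    The sigma (x) tau Kronecker ordering in H'(k) is an assumed convention."""
    q1x = t11x + t12x*np.exp(1j*kx)
    q2x = t21x + t22x*np.exp(1j*kx)
    qpy = t1y + t2y*np.exp(1j*ky)
    qmy = t1y - t2y*np.exp(1j*ky)
    H = np.array([[eps, np.conj(q1x), np.conj(qpy), 0.0],
                  [q1x, eps,          0.0,          np.conj(qmy)],
                  [qpy, 0.0,         -eps,          np.conj(q2x)],
                  [0.0, qmy,          q2x,         -eps]])
    Hp = lam*np.cos(ky)*np.kron(s[1], s[2]) + lam*np.sin(ky)*np.kron(s[2], s[2])
    return H + Hp

# spectrum at a sample momentum, with both eps and lam switched on
print(np.linalg.eigvalsh(H0k(0.3, -0.9, eps=0.5, lam=0.4)))
```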
In Fig. 10(d), we turn on both the \(\lambda\) and \(\epsilon\) terms. It yields the same result as shown in Ref. [45], but now we observe two pairs of twined edge modes within a larger energy window. Tuning the parameter \(\epsilon\) changes the position of the twined edge modes [see Fig. 10(e)]. The twined edge modes also show robustness against flux perturbations. We consider a case in Fig. 10(g), where one pair of twined edge modes is detached from the bulk bands close to zero energy and the other pair is attached to the bulk continuum at finite energy. When the flux gradually deviates from \(\pi\), we find the detached twined edge modes persist in the spectra, as shown in Figs. 10(h) and 10(i). The other pair of edge modes at finite energy starts to hybridize with the bulk bands.

## Appendix B Overview of Benalcazar-Bernevig-Hughes model

For the convenience of comparison with the Klein-bottle BBH models presented in the main text, let us first briefly review the BBH model in 2D [1; 2]. The tight-binding Hamiltonian in real space is \[H_{0}= \sum_{\mathbf{R}}\Big{[}t_{x}(C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R},3}+C_{\mathbf{R},2}^{\dagger}C_{\mathbf{R},4})\] \[+t_{y}(C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R},4}-C_{\mathbf{R},2}^{\dagger}C_{\mathbf{R},3})\] \[+t(C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R}+\hat{x},3}+C_{\mathbf{R},4}^{\dagger}C_{\mathbf{R}+\hat{x},2})\] \[+t(C_{\mathbf{R},1}^{\dagger}C_{\mathbf{R}+\hat{y},4}-C_{\mathbf{R},3}^{\dagger}C_{\mathbf{R}+\hat{y},2})\Big{]}+\text{H.c.}. \tag{10}\] The corresponding Bloch Hamiltonian in momentum space is \[H_{0}(\mathbf{k}) =[t_{x}+t\cos k_{x}]\Gamma_{4}+t\sin k_{x}\Gamma_{3}\] \[+[t_{y}+t\cos k_{y}]\Gamma_{2}+t\sin k_{y}\Gamma_{1}. \tag{11}\] The Gamma matrices are defined as \(\Gamma_{j}\equiv-\tau_{2}\sigma_{j}\), and \(\Gamma_{4}\equiv\tau_{1}\sigma_{0}\). The bulk bands of Eq. (11) are gapped unless \(t_{s}/t=\pm 1\) (\(s=x,y\)). Hence, it is an insulator at half-filling. The nonspatial symmetries of the BBH model are chiral symmetry, time-reversal symmetry, and particle-hole symmetry. The nontrivial phase of the quadrupole insulator is characterized by a quantized quadrupole moment \(q_{xy}=\frac{1}{2}\), which induces a quantized corner charge \(Q^{\text{corner}}\) and edge polarization \(p^{\text{edge}}\) of equal magnitude, \(|q_{xy}|=|p_{x}^{\text{edge}}|=|p_{y}^{\text{edge}}|=|Q^{\text{corner}}|\). The quantization of \(q_{xy}\) relies on chiral symmetry [29; 34]. The quadrupole insulators in 2D have boundaries that are stand-alone 1D topological insulators. The nontrivial topological quadrupole phase is located in the parameter region \(|t_{s}/t|<1\)[1; 2].
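As a minimal numerical illustration of this gap structure, the sketch below builds \(H_{0}(\mathbf{k})\) of Eq. (11) from the Gamma matrices (all four mutually anticommute, so the bands follow from the sum of squared coefficients) and scans the BZ for the minimal band gap; parameter values are illustrative:

```python
import numpy as np

s0, s1 = np.eye(2), np.array([[0, 1], [1, 0]])
s2, s3 = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])
# Gamma_j = -tau_2 sigma_j (j = 1, 2, 3) and Gamma_4 = tau_1 sigma_0
G = {1: -np.kron(s2, s1), 2: -np.kron(s2, s2),
     3: -np.kron(s2, s3), 4: np.kron(s1, s0)}

def H0(kx, ky, t, tx, ty):
    """BBH Bloch Hamiltonian of Eq. (11)."""
    return ((tx + t*np.cos(kx))*G[4] + t*np.sin(kx)*G[3]
            + (ty + t*np.cos(ky))*G[2] + t*np.sin(ky)*G[1])

def bulk_gap(t, tx, ty, n=101):
    """Minimal |E| over a uniform BZ grid (half of the full band gap)."""
    ks = np.linspace(-np.pi, np.pi, n)
    return min(np.abs(np.linalg.eigvalsh(H0(kx, ky, t, tx, ty))).min()
               for kx in ks for ky in ks)

print(bulk_gap(1.0, 0.5, 0.5))   # finite gap inside the quadrupole phase |t_s/t| < 1
print(bulk_gap(1.0, 1.0, 1.0))   # gap closes once the critical hoppings are reached
```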
## Appendix C Bound states by \(\pi\)-flux defects in the original Benalcazar-Bernevig-Hughes model

In this section, we demonstrate that a single \(\pi\)-flux defect in the original BBH model may trap two bound states. The original BBH model needs \(\pi\) fluxes on all plaquettes of the 2D lattice. A \(\pi\)-flux defect means that at a specific plaquette the \(\pi\) flux is removed. Consider a single \(\pi\)-flux defect in the 2D BBH lattice. In the nontrivial quadrupole insulator phase, the \(\pi\)-flux defect can induce bound states in the energy gap. As shown in Fig. 11(a), there are in total six zero-energy modes in the bulk gap: four of them are corner modes and the extra two are bound states at the \(\pi\)-flux defect. These two bound states are degenerate at zero energy. Their wave function is shown in Fig. 11(b). Another possibility is that the bound states have finite energy, as shown in Fig. 11(c). Their wave function localizes at the position of the \(\pi\)-flux defect [see Fig. 11(d)].
2309.13929
Opposition flow control for reducing skin-friction drag of a turbulent boundary layer
This work explores the dynamic response of a turbulent boundary layer to large-scale reactive opposition control, at a friction Reynolds number of $Re_\tau \approx 2\,240$. A hot-film is employed as the input sensor, capturing large-scale fluctuations in the wall-shear stress, and actuation is performed with a single on/off wall-normal blowing jet positioned $2.4\delta$ downstream of the input sensor, operating with an exit velocity of $v_{\rm j} = 0.4U_\infty$. Our control efforts follow the work by Abbassi et al. [Int. J. Heat Fluid Fl. 67, 2017], but include a control-calibration experiment and a performance assessment using PIV- and PTV-based flow field analyses. The controller targets large-scale high-speed zones when operating in ``opposing" mode and low-speed zones in the ``reinforcing" mode. An energy-attenuation of about 40% is observed for the opposing control mode in the frequency band corresponding to the passage of large-scale motions. This proves the effectiveness of the control in targeting large-scale motions, since an energy-intensification of roughly 45% occurs for the reinforcing control mode instead. Skin friction coefficients are inferred from PTV data to yield a direct measurement of the wall-shear stress. Results indicate that the opposing control logic can lower the wall-shear stress by about 3% with respect to a desynchronised control strategy, and by roughly 10% with respect to the uncontrolled flow. A FIK-decomposition of the skin-friction coefficient was performed, revealing that the off-the-wall turbulence follows a consistent trend with the PTV-based wall-shear stress measurements, although biased by an increased shear in the wake of the boundary layer given the formation of a plume due to the jet-in-crossflow actuation.
Giulio Dacome, Robin Mörsch, Marios Kotsonis, Woutijn J. Baars
2023-09-25T07:58:37Z
http://arxiv.org/abs/2309.13929v1
# Opposition flow control for reducing skin-friction drag of a turbulent boundary layer

###### Abstract

This work explores the dynamic response of a turbulent boundary layer to large-scale reactive opposition control, at a friction Reynolds number of \(Re_{\tau}\approx 2\,240\). A surface-mounted hot-film is employed as the input sensor, capturing large-scale fluctuations in the wall-shear stress, and actuation is performed with a single on/off wall-normal blowing jet positioned \(2.4\delta\) downstream of the input sensor, operating with an exit velocity of \(v_{\rm j}=0.4U_{\infty}\). Our control efforts follow the work by Abbassi _et al._ [Int. J. Heat Fluid Fl. 67, 30-41, 2017; 1], but include a control-calibration experiment and a performance assessment using PIV- and PTV-based flow field analyses. With the control-off calibration experiment conducted _a priori_, a transfer kernel is identified so that the velocity fluctuations targeted by control are estimated based on the upstream hot-film's signal. The controller targets large-scale high-speed zones when operating in "opposing" mode and low-speed zones in the "reinforcing" mode. A desynchronised mode was tested for reference and consisted of a statistically similar control mode, but without synchronization to the incoming velocity fluctuations. An energy-attenuation of about \(40\,\%\) is observed for the opposing control mode in the frequency band corresponding to the passage of large-scale motions. This proves the effectiveness of the control in targeting large-scale motions, since an energy-intensification of roughly \(45\,\%\) occurs for the reinforcing control mode instead. Essentially no change in energy, within the targeted wall-normal range, appears with the desynchronized control mode. Skin friction coefficients are inferred from PTV data to yield a direct measurement of the wall-shear stress. Results indicate that the opposing control logic is able to lower the wall-shear stress by about \(3\,\%\) with respect to desynchronised control, and by roughly \(10\,\%\) with respect to the uncontrolled flow. A FIK-decomposition of the skin-friction coefficient was performed, revealing that the off-the-wall turbulence follows a consistent trend with the PTV-based wall-shear stress measurements, although biased by an increased shear in the wake of the boundary layer given the formation of a plume due to the jet-in-crossflow actuation.

Turbulent boundary layer, flow control, skin-friction drag

## I Introduction

Reducing turbulent skin-friction drag has always been an intense topic of research, since wall-bounded flows are found in many energy-intensive engineering applications, such as transportation and the transport of fluids through pipelines [2]. Strategies to control wall-bounded turbulence rely on the fundamental understanding of boundary layer flows and their friction-generating mechanisms. Research has revealed how different coherent structures exist, and how the structures' characteristics vary as a function of wall-normal distance, particularly when considering their sizing and spatio-temporal dynamics [3; 4; 5; 6; 7]. In the inner region, structures scale with the viscous length scale \(\nu/U_{\tau}\) and time scale \(\nu/U_{\tau}^{2}\), with \(\nu\) being the kinematic viscosity and \(U_{\tau}\equiv\sqrt{\tau_{w}/\rho}\) being the friction velocity (\(\tau_{w}\) is the wall-shear stress and \(\rho\) is the fluid density).
In the outer region, instead, structures scale with the boundary layer thickness \(\delta\) as the characteristic length scale and \(\delta/U_{\infty}\) as time scale, with \(U_{\infty}\) now being the free-stream velocity. The ratio of outer to inner length scales is provided by the friction Reynolds number, which is defined as \(Re_{\tau}\equiv\delta U_{\tau}/\nu\). When focusing on control, a large number of studies aim at manipulating the near-wall cycle (NWC) dynamics [8; 9; 10; 11; 12] and this generally leads to a disruption of the turbulence production cycle in the inner region [13; 14; 15]. For engineering systems of practical relevance, friction Reynolds numbers are on the order of \(\mathcal{O}\left(10^{3}\right)\) to \(\mathcal{O}\left(10^{6}\right)\). Corresponding physical time and length scales would result in an unfeasible number of streamwise control stations for achieving streamwise-persistent control when targeting the NWC scales. For this reason, our work focuses on control of large-scale structures with the aid of a discrete sensor-actuator layout. Schoppa and Hussain [16] were the first to introduce such large-scale control. They showed that large-scale spanwise velocity forcing could lead to a \(50\,\%\) reduction in friction drag. However, the relatively low Reynolds number of \(Re_{\tau}\approx 180\) implied that: (1) control was effectively targeting a weak instability as turbulence was only marginally sustained [17; 18], and (2) at such low values of \(Re_{\tau}\), large-scale control matched the NWC dynamics 'equally well'. That is, a large-scale control strategy requires a high-enough Reynolds number for sufficient scale separation to appear. With increasing \(Re_{\tau}\), Large-Scale Motions (LSMs) contribute more and more to the total Turbulence Kinetic Energy (TKE), while the contribution of small scales remains constant [19]. LSMs refer to regions of lower velocity induced between the legs of hairpin packets, and regions of higher velocity outside of said packets. Due to the streamwise momentum difference between high- and low-speed zones, large-scale rollers are formed with a downwash and upwash in the zones with a momentum surplus and deficit, respectively [20]. Recently, predetermined control with large-scale forcing was proven effective at high \(Re_{\tau}\)[21; 19] and the inherent lower-frequency nature of outer scales also renders LSMs a more approachable target than inner scales. Even when exclusively actuating upon larger scales, the intensity of the NWC can still be affected through a modulation phenomenon: large-scale outer layer structures condition the dynamics in the near-wall region of high-Reynolds-number flows [22; 23]. Consequently, the mean velocity gradient at the wall is altered and so is the mean wall-shear stress, \(\tau_{w}=\mu\partial\overline{u}/\partial y|_{y=0}\) (here \(\mu\) is the dynamic viscosity of the flow and \(\overline{u}\) the mean streamwise velocity). Opposition control is a type of _stabilizing control_[24] that was recently pioneered for turbulent flows [25; 26]. Abbassi _et al._[1] demonstrated a selective opposition control system to target LSMs carrying higher streamwise momentum than average, in an attempt to reduce the skin-friction drag induced by large-scale events. A spanwise array of jet actuators, together with hot-film input sensors located upstream, counteracted the naturally-occurring drag-producing LSMs.
With the real-time control actuation timed in such a way that the actuator flow penetrated into high-speed zones of the TBL at \(Re_{\tau}\approx 14\,000\) (while leaving low-speed zones intact), velocity fluctuations in the logarithmic region were lowered in intensity and the wall-shear stress was reduced by \(\sim 3\,\%\). Our current work builds upon the approach taken by Abbassi _et al._[1]: the momentum surplus that characterizes high-speed zones is to be counteracted by lower-streamwise-momentum fluid, generated by a wall-normal blowing jet actuator. To minimize the parasitic drag associated with the control system, wall-embedded flush-mounted hardware is required. This constraint leads to an estimation problem of the flow state at a point away from the input sensor when aiming at the manipulation of large-scale structures at their wall-normal location of maximum intensity (_e.g._, in the geometric center of the logarithmic region). Fortunately, LSMs present a large degree of wall-coherence [27], _i.e._, a measurable imprint on the wall in the form of a low-frequency component. Still, the question remains as to how accurate estimations of the flow state at a location above the actuator can be performed. Abbassi _et al._[1] took Gaussian kernels as transfer functions and convolved those with the input signals. In the present work, we employ a data-driven approach to obtain the input-output relation, namely to relate changes in wall-shear fluctuations to velocity fluctuations in the logarithmic region. Using spectral Linear Stochastic Estimation (LSE) [28; 29], we are able to generate such a physics-informed kernel. In further contrast to the work of Abbassi _et al._[1], we aim at relating changes in the mean skin-friction drag to changes in the turbulence statistical integral measures of the TBL flow, in an attempt to unravel the physical mechanisms underlying the control-induced changes in skin friction. For Zero-Pressure-Gradient (ZPG) uncontrolled TBL flows this has been detailed by Renard and Deck [30] and Deck _et al._[31]. Also, in what follows we present direct measurements of the wall-shear stress for the controlled TBL, as well as the integral measures of the TKE production and FIK-decomposition [32]. The article is outlined as follows. The experimental arrangement is presented in § II, after which the control system is described in § III. Details of the response of the TBL flow to several control modes are analysed in § IV, and § V follows with an assessment of the skin-friction drag, as well as its relation to integral properties of the TBL flow.

## II Experimental Methodology

### Turbulent boundary layer setup

Experiments were carried out in an open-return wind tunnel facility (W-Tunnel) at the Delft University of Technology. This facility has a contraction ratio of 4:1, with a square cross-sectional area of \(0.6\times 0.6\,\mathrm{m}^{2}\) at the inlet of the test section. Driven by a centrifugal fan, the flow at the test section's inlet can reach a velocity of up to \(\sim 16.5\,\mathrm{m/s}\). For generating a TBL at a Reynolds number of practical significance, a test section with a relatively long flat plate, \(3.75\,\mathrm{m}\) in length and \(0.60\,\mathrm{m}\) in width, was used. The boundary layer is tripped on all four walls of the test section's inlet, with a \(0.12\,\mathrm{m}\) long strip of P40-grain sandpaper.
The downstream end of the test section features three panels to allow for modular setups (annotated \(\mathcal{A}\) to \(\mathcal{C}\) in the schematic impression of the floor layout shown in Fig. 1a). A global right-handed Cartesian coordinate system \((x^{\prime},y^{\prime},z^{\prime})\) is defined with its origin at the wall, in the spanwise center of the test section and coinciding with the downstream edge of the trip. A second coordinate system \((x,y,z)\) is used for presenting results in later sections, and has its origin at the jet actuator's center. Control hardware, comprising a surface-mounted hot-film and a wall-normal blowing jet actuator, was integrated in floor panel \(\mathcal{C}\). The hot-film was placed at \(x^{\prime}=2.73\,\mathrm{m}\) (\(x=-0.17\,\mathrm{m}\)), while the actuator was situated downstream of that at \(x^{\prime}=2.90\,\mathrm{m}\) (\(x=0\)). Specifications of the sensor and actuator, and reasons for their placement, are provided in § III. A Pitot-static probe is integrated on a side wall of the test section to provide a velocity reading at \(x^{\prime}=2.90\,\mathrm{m}\) and \(y^{\prime}=0.40\,\mathrm{m}\). The tunnel's ceiling was made adjustable in height over the full length of the test section in order to modify the streamwise pressure gradient, \(\partial p/\partial x\). The ceiling consists of a \(4\,\mathrm{mm}\) thick polycarbonate plate with a smooth curvature. Through an iterative process, the ceiling was configured for a ZPG that was characterized using two streamwise rows of static pressure taps in the floor (at \(z^{\prime}=\pm 0.20\,\mathrm{m}\)). For the nominal free-stream velocity of the current study (\(U_{\infty}\approx 15\,\mathrm{m/s}\)), the acceleration parameter \(K\equiv(\nu/U_{e}^{2})(\mathrm{d}U_{e}/\mathrm{d}x)\)[33] remained in an acceptable range for a ZPG condition, since \(K<1.6\cdot 10^{-7}\) for the entire length of the test section. In the definition of \(K\), the velocity at the edge of the boundary layer, \(U_{e}\), equals \(U_{\infty}\); its value was inferred from the measured static pressure at the floor by assuming \(\partial p/\partial y\approx 0\). Finally, the free-stream turbulence intensity was found to be \(\sqrt{\overline{u^{2}}}/U_{\infty}\approx 0.35\,\%\) at the primary measurement region around \(x=0\) (this was inferred using hot-wire anemometry, described later).

### Measurement instrumentation

Time-series of the streamwise velocity component were acquired using Hot-Wire Anemometry (HWA). A TSI IFA-300 Constant Temperature Anemometer (CTA) was used, with a standard Dantec 55P15 boundary layer probe. Data were sampled at a rate of \(f_{\mathrm{HW}}^{+}=3.16\) (\(f_{\mathrm{HW}}=51.2\,\mathrm{kHz}\)) with a 24-bit A/D conversion for an uninterrupted duration of \(T_{a}=150\,\mathrm{s}\) at each measurement point. This acquisition duration equates to \(T_{a}U_{\infty}/\delta\approx 32\,000\) boundary layer turnover times; this was checked to be sufficient for converged spectral statistics at the lowest frequencies of interest. The hot-wire was calibrated in-situ by fitting King's Law to 17 points of increasing velocity. Measurement time-series were corrected for temperature drift following the procedure outlined by Hultmark and Smits [34]. By mounting the probe to a dual-axis traversing system, with a step accuracy of \(10\,\upmu\mathrm{m}\) (smaller than \(0.3\) viscous units), a wall-normal profile consisting of 40 logarithmically-spaced points was acquired at \(x=2\delta\).
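As an illustration of this calibration step, the sketch below fits King's law, \(E^{2}=A+BU^{n}\), to synthetic voltage-velocity pairs (placeholders, not the actual calibration data) and inverts the fit for converting CTA voltages to velocities:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical calibration pairs: CTA bridge voltage E at 17 known velocities U
U_cal = np.linspace(1.0, 16.0, 17)                  # m/s
E_cal = np.sqrt(1.9 + 1.1*U_cal**0.45)              # synthetic voltages (V)

def kings_law(U, A, B, n):
    """King's law, E^2 = A + B U^n, expressed as E(U)."""
    return np.sqrt(A + B*U**n)

(A, B, n), _ = curve_fit(kings_law, U_cal, E_cal, p0=(1.0, 1.0, 0.5))

def voltage_to_velocity(E):
    """Invert the fitted calibration to convert a CTA voltage into velocity."""
    return ((E**2 - A)/B)**(1.0/n)
```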
A streamwise profile was also measured within the geometric center of the logarithmic region, at a location of \(y_{L}^{+}=3.9\sqrt{Re_{\tau}}\approx 190\) (\(y_{L}=6.3\,\mathrm{mm}\)), see Fig. 1b. The uncertainty in the hot-wire measurements was computed following the procedure of Smith _et al._[35], and resulted in uncertainties in the estimation of the average velocity and standard deviation (at \(y^{+}\approx 15\)) of \(0.26\,\%\) and \(1.02\,\%\), respectively. Planar Particle Tracking Velocimetry (PTV) data were acquired with a Field Of View (FOV) of approximately \(0.33\delta\times 0.28\delta\). A relatively small FOV was chosen to maximise the resolution in the viscous sub-layer, such that the wall-shear stress could be inferred directly from the velocity gradient at the wall, \(\tau_{w}=\mu\partial\overline{u}/\partial y|_{y=0}\) (see § V.1). Particle Image Velocimetry (PIV) was also employed in a planar Two-Dimensional Two-Component (2D2C) configuration, with a larger FOV spanning approximately \(3.6\delta\times 0.8\delta\) (divided over two cameras). This PIV campaign was tailored to studying the flow well into the wake of the boundary layer. For both the PTV and PIV measurements, data were acquired at several streamwise locations, indicated in Fig. 1c with the blue filled rectangles (for PTV) and the green open rectangles (for PIV).

Figure 1: (a) Schematic representation of the turbulent boundary layer test section at the Delft University of Technology. (b) Schematic indicating the locations of hot-wire measurements, with a streamwise profile taken at \(y_{L}^{+}=3.9\sqrt{Re_{\tau}}\) and a wall-normal profile taken at \(x=2\delta\). (c) Positions of the fields of view for PTV (filled blue) and PIV (open green) measurements.

Table 1 lists the acquisition parameters for both the PTV and PIV campaigns. LaVision Imager sCMOS cameras with a sensor size of 2650\(\times\)2160 pix\({}^{2}\) were used in both types of acquisitions. All measurement sets comprised a total of 2000 statistically independent image pairs that were recorded at a frequency of 15 Hz. Illumination was provided by a Quantel Evergreen 200 Nd:YAG laser, operating in double-pulse mode with a maximum energy per pulse of 125 mJ. Finally, seeding was generated by an atomized glycol-water mixture, yielding an average particle size of \(\sim 1\,\upmu\)m.

### Turbulent boundary layer characteristics

A characterization of the uncontrolled TBL flow at the primary measurement location of \(x^{\prime}=2.90\,\mathrm{m}\) is here reported, based on first- and second-order statistics computed from hot-wire data. Fig. 2a presents profiles of both the streamwise mean velocity and TKE. A set of canonical boundary layer parameters was inferred through a composite fit procedure on the mean velocity profile [36], with logarithmic layer constants of \(\kappa=0.38\) and \(B=4.7\). Parameters are reported in Table 2. Here, \(\theta\) is the momentum thickness and \(\Pi\) is the wake parameter. The viscous length scale is denoted with symbol \(l^{*}\). The measured streamwise TKE is subject to a well-known attenuation of small-scale energy due to the finite resolution of the sensing element (exposed hot-wire length is \(l^{+}\approx 41\)) [37]. This missing energy can be accounted for, as seen from the corrected measurement profile [following 38].
For both the mean velocity and corrected streamwise TKE profiles, the measurement data compare well to those of a Direct Numerical Simulation (DNS) of channel flow [39] in the inner region and at a comparable value of \(Re_{\tau}\). This provides reassurance of a representative baseline flow. For spectral analyses of the velocity \(u(y,t)\), the one-sided spectrum is taken as \(\phi_{uu}(y;f)=2\langle U(y;f)U^{*}(y;f)\rangle\), where \(U(y;f)=\mathcal{F}\left[u(y,t)\right]\) is the temporal Fast Fourier Transform (FFT). Here the angular brackets \(\langle\cdot\rangle\) denote ensemble-averaging and superscript \(*\) signifies the complex conjugate. Ensembles of \(N=2^{17}\) samples were subject to a Hanning windowing procedure, and resulted in a spectral resolution of \(\mathrm{d}f=0.39\,\mathrm{Hz}\). In addition, a 50 % overlap was implemented to yield a total of 120 ensembles for averaging. Energy spectra throughout the TBL flow are premultiplied and are presented as an inner-scaled spectrogram, \(f^{+}\phi_{uu}^{+}(y;f)\), in Fig. 2b. The relatively low Reynolds number does not yet allow a noticeable outer-spectral peak to appear, but the inner peak is apparent at \((y^{+};f^{+})\approx(15;0.01)\). Moreover, a significant scale separation is present between energetic motions in the outer layer (say at \(f^{+}\approx 10^{-3}\)) and the NWC peak at \(f^{+}\approx 10^{-2}\).

Figure 2: (a) Boundary layer profiles of the mean streamwise velocity and streamwise TKE, based on hot-wire data and compared to DNS data of channel flow at \(Re_{\tau}=2000\)[39]; the TKE is corrected for attenuation due to sensor resolution [38]. (b) Premultiplied energy spectrogram of the streamwise velocity; filled iso-contours correspond to magnitudes of \(0.2\):\(0.2\):\(2.2\). (c) Energy spectrum of the streamwise velocity fluctuations in the geometric center of the logarithmic region, \(y_{L}^{+}=3.9\sqrt{Re_{\tau}}\).
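For readers wishing to reproduce this spectral processing, a minimal sketch follows; `scipy.signal.welch` implements the same Hanning-windowed, 50 %-overlap ensemble averaging described above, and the input signal here is a synthetic placeholder for a measured time-series:

```python
import numpy as np
from scipy.signal import welch

fs = 51_200                    # sampling rate f_HW (Hz)
N = 2**17                      # ensemble size, giving df = fs/N ~ 0.39 Hz
u = np.random.randn(150*fs)    # placeholder for a measured series u(y, t), T_a = 150 s

# one-sided spectrum phi_uu = 2 <U U*> via Hanning windows with 50 % overlap
f, phi_uu = welch(u - u.mean(), fs=fs, window='hann',
                  nperseg=N, noverlap=N//2, detrend=False)

premultiplied = f*phi_uu       # f * phi_uu, to be inner-scaled by nu/U_tau^2
```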
The uncontrolled TBL conditions reported in Table 2 and Fig. 2 represent the baseline (uncontrolled) case, to which the controlled flow will be compared in subsequent sections.

## III Control system architecture

From a high-level perspective, the control system consists of a wall-embedded sensor and actuator, and a real-time target machine. Downstream flow measurements are performed to assess the controller's performance. For the controller to be effective, it is critical for the input sensor to provide sufficient information to estimate the state of the to-be-controlled plant (_i.e._, the TBL flow). Similarly, the actuator is required to have enough control authority to generate a significant effect in the logarithmic region, where the large-scale structures are most energetic.

\begin{table} \begin{tabular}{c c c c c c c c c} \(U_{\infty}\) (m/s) & \(\delta\) (mm) & \(\theta\) (mm) & \(Re_{\theta}\) & \(U_{\tau}\) (m/s) & \(Re_{\tau}\) & \(\Pi\) & \(l^{*}\equiv\nu/U_{\tau}\) (\(\upmu\)m) & \(\nu/U_{\tau}^{2}\) (\(\upmu\)s) \\ \hline \hline 15 & 69.9 & 6.83 & 6 830 & 0.49 & 2 237 & 0.61 & 31.25 & 65.10 \\ \end{tabular} \end{table} Table 2: Experimental parameters of the baseline TBL flow in the W-Tunnel facility at \(x^{\prime}=2.90\,\mathrm{m}\) (\(x=0\)).

\begin{table} \begin{tabular}{c c c c c c c c} Campaign & FOV size & \# of cameras & \(\mathrm{d}t\) (\(\upmu\)s) & \(\mathrm{l_{f}}\) (mm) & \(\mathrm{f_{\#}}\) & Pixel res. (pix/mm) & Particle size (pix) \\ \hline \hline PTV & \(0.33\delta\times 0.28\delta\) & 1 & 15 & 200 & 11 & 114 & 5 \\ PIV & \(3.6\delta\times 0.8\delta\) & 2 & 35 & 105 & 8 & 18 & 3 \\ \hline \hline \end{tabular} \end{table} Table 1: Image acquisition parameters for the PTV and PIV campaigns, with \(\mathrm{d}t\) being the time separation between images in one pair, and \(\mathrm{l_{f}}\) and \(\mathrm{f_{\#}}\) the focal length and f-stop of the camera lens.

### Wall-based sensing and actuation

Our control logic aims at actuating upon structures that convect in the logarithmic region of the TBL and that leave a footprint at the wall [22, 29]. Similar to Abbassi _et al._[1], a Dantec 55R47 glue-on hot-film was selected as the surface-mounted input sensor. Its sensing element measures \(0.1\,\mathrm{mm}\) in the streamwise direction (\(\delta x_{\mathrm{HF}}^{+}=3.2\)) and \(0.9\,\mathrm{mm}\) (\(\delta z_{\mathrm{HF}}^{+}=28.8\)) in the spanwise one. The sensor is deposited on a \(\sim 50\,\upmu\mathrm{m}\) thick (\(1.6l^{*}\)) Kapton foil. This thickness makes the hot-film a non-intrusive sensor, since it can be considered hydrodynamically smooth. The sensor was glued to a polycarbonate insert within floor panel \(\mathcal{C}\) (see Fig. 1), and its lead-wires were routed downstream and out of the tunnel through \(0.4\,\mathrm{mm}\) diameter holes. As mentioned in § II.1, the hot-film sensor was mounted in the spanwise center at \(x=-2.4\delta\) (\(x=-0.17\,\mathrm{m}\)). The hot-film was operated using a second CTA channel in the TSI IFA-300 anemometer, also used for operating the hot-wire (§ II.2). The sensor operating temperature was set at \(90\,^{\circ}\mathrm{C}\), yielding an overheat ratio of 1.18. No sensor calibration was performed or applied, and so the raw voltage-output of the CTA bridge was fed directly into the controller. Working with the raw voltage as proxy for the wall-shear stress is justified, given that the control action is binary (controlling an on/off jet) and only involves thresholding around the mean value of the input signal. Moreover, the system identification procedure described in § III.2 is performed with the raw voltage signal, and it was verified that coherence characteristics are retained even without calibrating the sensor. For actuation, a non-zero net mass flux blowing jet is used. Since its exit velocity and frequency response can be tuned with relatively simple adjustments to the hardware, this actuator is ideal for tuning the region of interaction between the jet flow and the grazing crossflow. The development of a jet in crossflow is such that an upwash is created downstream of the injection point as a result of wall-normal momentum injection. Additionally, the steady jet in crossflow creates a Counter-rotating Vortex Pair (CVP), originating from the roll-up of the jet plume as the mean shear of the crossflow transfers streamwise momentum to it. Off-centerline, this CVP generates a downwash [40, 41, 42]. Note that an investigation of the off-centerline behaviour of the boundary layer is not part of the current study; we solely focus on the impact of control directly downstream of the injection point (\(z=0\)). The jet flow exhausts in the grazing TBL flow through a rectangular exit slit. Given the requirement of the control system to be persistent downstream, the slit was strongly elongated in the streamwise direction and comprised dimensions of \(15\,\mathrm{mm}\)\(\times\)\(1.5\,\mathrm{mm}\) (in the \(x\) and \(z\) directions, respectively), or approximately \(0.2\delta\times 0.02\delta\). This kind of elongation ensures the formation
of a stronger pair of vortices in comparison to a circular orifice with the same surface area [43; 44]. Compressed dry air feeds into the actuator, which is operated in an on/off state using an electrically actuated, nominally closed, FESTO MHJ10-S-2 solenoid valve. Via the execution of PIV characterization experiments, described in Appendix A, the frequency response was quantified, as well as the jet trajectory into the TBL crossflow. The jet exit velocity was set at \(v_{\mathrm{j}}=0.4U_{\infty}\) (\(v_{\mathrm{j}}=6\,\mathrm{m/s}\)) to ensure the trajectory of the jet plume remained within the bounds of the logarithmic region for a distance of \(\sim\delta\) downstream of the injection point. Moreover, latencies were inferred from the characterization experiments, and are associated with the time it takes for fluid to accelerate through the pneumatic components (\(\tau_{a,1}\approx 3\,\mathrm{ms}\)), for the jet plume to reach the logarithmic region (\(\tau_{a,2}\approx 3\,\mathrm{ms}\)), and for the jet to shut down (\(\tau_{a,3}\approx 10\,\mathrm{ms}\)). Even though the solenoid valve has a maximum switching frequency of \(1\,\mathrm{kHz}\), the maximum operating frequency for which on- and off-states are reached is lower due to the latencies and equals \(f_{\mathrm{act}}\approx 63\,\mathrm{Hz}\), given the \(6\,\mathrm{ms}\) start-up time and \(10\,\mathrm{ms}\) shut-down time.

### System identification

Both the input sensor and actuator of the control system interact with the grazing flow (see Fig. 3 for a schematic representation of the control system). The streamwise sensor-actuator spacing, \(s\), has important implications given that an increase in \(s\) will result in a progressive loss-of-coherence between the turbulence velocities at both stations. Practically, there is a minimum (non-zero) spacing that is realizable for two primary reasons: (1) coherent structures in TBL flow possess an average streamwise inclination angle of \(14^{\circ}\) to \(16^{\circ}\) due to the mean shear [45; 27]; their footprints are only visible to the wall-based sensor after their signature has passed in the logarithmic region, and (2) input processing introduces latencies in addition to those of the actuator described earlier. Hence, only with a non-zero distance \(s\) can it be guaranteed that there is enough time to act upon LSMs in real-time. In order to inspect whether a sufficient correlation remains present between sensor and actuator for a non-zero spacing \(s\), a Single-Input/Single-Output (SISO) linear time-invariant system analysis was applied as reported in Appendix B. A sufficient level of linear coherence was observed between the input and target locations (points \(\mathcal{I}\) and \(\mathcal{T}\) in Fig. 3), particularly for a sensor-actuator spacing of \(s=2.4\delta\) that is used in the current study. A motivation for this spacing is presented later on. Given the significant coherence, a linear transfer kernel, \(H_{L}\), relating the streamwise velocity \(u(t)\) in the logarithmic region (the target point) to the voltage signal \(e(t)\) of the hot-film (the input point) was determined through an LSE procedure based on data of a control-off experiment (Appendix B). A Bode plot of the frequency-dependent kernel \(H_{L}(f)\) is shown in Figs. 4a,b. A maximum gain of \(|H_{L}|\approx 2.6\,\mathrm{m\,s}^{-1}/\mathrm{V}\) occurs at \(f\delta/U_{\infty}\approx 0.06\).
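A minimal sketch of this spectral-LSE identification is given below, using the standard estimator \(H_{L}(f)=\langle U(f)E^{*}(f)\rangle/\langle E(f)E^{*}(f)\rangle\), i.e., the ensemble-averaged cross-spectrum normalized by the input auto-spectrum. The signals are synthetic placeholders (a delayed, noise-contaminated copy of the input mimicking downstream convection), not the measurement data:

```python
import numpy as np
from scipy.signal import csd, welch

fs, N = 51_200, 2**17
rng = np.random.default_rng(1)
e = rng.standard_normal(60*fs)          # placeholder hot-film voltage (input)
# placeholder target velocity: a delayed, noise-contaminated copy of the input
u = np.roll(e, 900) + 0.5*rng.standard_normal(e.size)

# spectral LSE kernel: H_L(f) = <U(f) E*(f)> / <E(f) E*(f)>
f, phi_eu = csd(e, u, fs=fs, window='hann', nperseg=N, noverlap=N//2)
_, phi_ee = welch(e, fs=fs, window='hann', nperseg=N, noverlap=N//2)
H_L = phi_eu/phi_ee

# time-domain FIR equivalent of the kernel, h(tau) = F^{-1}[H_L(f)]
h = np.fft.irfft(H_L)
```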
The gain decays at higher frequency and is retained up to a cut-off frequency of \(f\delta/U_{\infty}\approx 0.7\), at which the coherence drops below a threshold of \(\gamma_{L}^{2}=0.05\). Beyond this frequency, the scales are incoherent and the kernel's phase becomes random. Instead of performing an estimation in spectral space following Eq. (14), the time-domain convolution equivalent is embedded on a real-time controller. The convolutional estimate is \(\widehat{u}(t)=(h\otimes e)(t)\), with \(h\) being the inverse FFT of the frequency-domain kernel, \(h(\tau)=\mathcal{F}^{-1}\left[H_{L}(f)\right]\) (and \(h\) thus resembles a Finite Impulse Response (FIR) filter for the input data). The inverse FFT over the full range of frequencies for generating \(h(\tau)\) from \(H_{L}(f)\) yields a kernel in the physical domain with the length of one ensemble size (\(N=2^{17}\), thus \(\Delta t_{N}=2.56\,\mathrm{s}\)). However, given that an FIR convolution in real-time introduces an inherent delay of half the filter-width, the kernel \(h(\tau)\) is only retained over a temporal horizon of \(\tau_{H}/2=7.5\,\mathrm{ms}\) (centered at the peak-instance of the FIR filter, see the temporal extent of the dotted kernel in Fig. 4c). The shortened kernel length ensures that the sensor-actuator spacing of \(s=2.4\delta\) is attainable in real-time. Note that omitting the tails of the kernel is justified given their negligible contribution to the estimate. Future improvements of a short kernel can be based on the Wiener-Hopf framework so that causality of the kernel is taken into account [46].

Figure 3: Schematic of the control system for real-time boundary layer manipulation, integrated in the W-Tunnel facility.

Finally, the control loop was implemented on a National Instruments Compact Reconfigurable Input-Output (NI-cRIO-9122) machine with an embedded Field Programmable Gate Array (FPGA) chassis (cRIO-9022). The control logic was implemented in LabVIEW on the FPGA chip with a loop frequency of \(f_{\text{FPGA}}=2\,\)kHz, and FPGA processing was conducted with a 16-bit fixed-point precision. The kernel \(h(\tau)\) was down-sampled to the loop frequency of the FPGA controller (\(f_{\text{HW}}\to f_{\text{FPGA}}\)). When operating in real-time, the input signal was also sampled at the loop frequency with the aid of an analog-to-digital NI-9234 input module. Trigger commands were provided to the solenoid valve with the aid of a 5 V analog signal that was relayed through a NI-9472 digital output module.

### Control logic definition

For our control problem the actuator interacts with the high- and low-speed LSMs. Based on the input sensor and the pre-identified transfer kernel, the controller is able to estimate the flow state \(\widehat{u}\) at the target point through the convolution mentioned before: \(\widehat{u}(t)=(h\otimes e)\,(t)\). Note that input signal \(e(t)\) is a zero-mean signal since the controller only acts upon the fluctuations. The zero-mean signal was obtained in real-time by the subtraction of a converged running mean over a \(2\,\)s interval duration [this accounts for a potential drift in the hot-film reading, 47]. Based on the real-time estimate \(\widehat{u}(t)\), high- and low-speed zones are then targeted following a nominal control law: \[v_{\text{j}}(t)=\begin{cases}0.4U_{\infty},&\text{if }\widehat{u}(t)\geq 0\\ 0,&\text{if }\widehat{u}(t)<0\end{cases} \tag{1}\] with \(v_{\text{j}}\) being the binary velocity state of the jet actuator.
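In pseudo-real-time form, the estimation and the control law of Eq. (1) can be summarized as in the sketch below; the kernel file, the buffer handling and the function interface are hypothetical placeholders, as the actual implementation runs in LabVIEW on the FPGA:

```python
import numpy as np

f_fpga = 2_000                    # controller loop rate (Hz)
h = np.load('kernel.npy')         # FIR kernel down-sampled to f_fpga (hypothetical file)
v_j_on = 0.4                      # jet exit velocity in units of U_inf

def control_output(e_buffer, opposing=True):
    """One controller iteration, cf. Eq. (1): estimate u_hat at the target point by a
    causal FIR convolution of the input history, then threshold around zero.
    e_buffer: the most recent len(h) hot-film samples (oldest first, newest last)."""
    e_zero_mean = e_buffer - e_buffer.mean()   # running-mean subtraction (2 s in practice)
    u_hat = np.dot(h[::-1], e_zero_mean)       # u_hat(t) = sum_k h(k) e(t - k)
    high_speed = u_hat >= 0.0
    return v_j_on if (high_speed == opposing) else 0.0   # invert the test for reinforcing
```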
This _opposition_ controller will thus only actuate on those large-scale events which are estimated to be more drag-producing than the mean flow. A _reinforcing_ controller was also implemented, where the control law is inverted and the actuator targets a low-speed region instead. In order to also isolate the effect of operating the jet in a synchronized manner with respect to the incoming LSM structures, versus a desynchronized manner (in essence no real-time control), a _desynchronized_ control law was also implemented following Abbassi _et al._[1]. For the desynchronized control, an on/off signal from the opposition control case was used for actuation, irrespective of the input signal. Given the sensor-actuator spacing \(s\), the control system needs to digitize the analog voltage-input signal, convolve it with the transfer kernel and generate the control-output within the time it takes for the LSM structures to convect to the target point. With \(s=2.4\delta\) (\(s=0.17\,\)m) and \(U_{c}=9.9\,\)m/s, this duration is \(17.2\,\)ms. The sensor-actuator spacing was chosen based on an analysis of the delays inherent to a real-time controller. First, as mentioned in § III.2, the real-time convolution of the input signal with the FIR-like kernel requires half the temporal horizon, thus \(\tau_{H}/2=7.5\,\mathrm{ms}\). Additionally, a delay of \(\tau_{\mathrm{FPGA}}=0.5\,\mathrm{ms}\) is added due to the controller looping at \(f_{\mathrm{FPGA}}=2\,\mathrm{kHz}\). As explained in § III.1, the actuator itself also introduces two sources of lag: \(\tau_{a,1}\approx 3\,\mathrm{ms}\) and \(\tau_{a,2}\approx 3\,\mathrm{ms}\). In total, the controller requires the following time for providing a control output: \[\tau_{C}=\tau_{H}/2+\tau_{\mathrm{FPGA}}+\tau_{a,1}+\tau_{a,2}\approx 14\,\mathrm{ms}. \tag{2}\] For a duration of \(\tau_{C}\), the LSM structures convect over a streamwise distance of \(\Delta x=0.137\,\mathrm{m}\). Our sensor-actuator spacing \(s\) is thus slightly conservative with \(s=0.17\,\mathrm{m}\) (so that control that would be 'too early' could also be investigated). For nominal opposition control an extra delay of 7 control loops (_i.e.,_ 3.5\(\,\mathrm{ms}\)) was implemented for correct timing of the opposition and reinforcing control modes.

Figure 4: (a,b) Bode plot of kernel \(H_{L}(f)\) with the frequency-dependent gain and phase. The gain is shown with both the raw data and a bandwidth-moving filtered version (25% bandwidth). (c) Kernel in physical time, both at the sampling frequency of the calibration experiment (solid line; \(f_{\text{HW}}=51.2\,\)kHz) and at the controller frequency (round markers; \(f_{\text{FPGA}}=2\,\)kHz).

### Sensing performance evaluation

The state of the boundary layer that the controller acts upon, \(\widehat{u}(t)\), is an estimate. To gauge the performance of the controller, we resort to computing the _binary accuracy_ of the estimator. Fig. 5a displays the measured streamwise velocity \(u(t)\), as well as the LSE-based estimate simulating real-time conditions. Note that the estimate \(\widehat{u}(t)\) would be shifted by half the kernel's horizon length as a result of the real-time convolution, but this shift is omitted for evaluating the binary accuracy. Since the controller only actuates based upon the estimated signal's sign, it is possible to binarize \(u(t)\) and \(\widehat{u}(t)\) and compare them directly.
At every instant, a true positive (TP) prediction is made when both signals are positive, whereas both signals being negative will yield a true negative (TN) prediction. Additionally, false positive (FP) and false negative (FN) outputs will occur if \(u(t)<0\) and \(\widehat{u}(t)\geq 0\), or vice versa, respectively. The binary accuracy (BACC) is then defined as \[\mathrm{BACC}=\frac{T_{\mathrm{TP}}+T_{\mathrm{TN}}}{T_{a}}, \tag{3}\] with the numerator representing the cumulative time that the estimate is true positive (\(T_{\mathrm{TP}}\)) and true negative (\(T_{\mathrm{TN}}\)). Note that a BACC of unity does not mean that \(\widehat{u}(t)=u(t)\), but only that \(\mathrm{sgn}\left[\widehat{u}(t)\right]=\mathrm{sgn}\left[u(t)\right]\forall t\). Fig. 5b reports the binary performance and the BACC equals 72.1%. This value is significantly larger than 50 % (which would indicate a random process) and justifies the wall-based sensing approach for reactive real-time control.

Figure 5: (a) Sample portion of the measured streamwise velocity \(u(t)\), at \(x=2.4\delta\) and \(y=y_{L}\), compared to the estimated velocity \(\widehat{u}(t)\). (b) Pie chart visualizing the binary performance of the estimation. In blue the true positive (TP) and true negative (TN) predictions; in red the false positive (FP) and false negative (FN) ones (values are in percentage).
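A compact post-processing implementation of Eq. (3) could read as follows (a sketch, thresholding the zero-mean signals at zero as in the text):

```python
import numpy as np

def binary_accuracy(u, u_hat):
    """BACC of Eq. (3): fraction of time during which sgn(u_hat) matches sgn(u).
    u, u_hat: simultaneous zero-mean samples of the measured and estimated velocity."""
    tp = np.mean((u >= 0) & (u_hat >= 0))   # true positives (high-speed zones)
    tn = np.mean((u < 0) & (u_hat < 0))     # true negatives (low-speed zones)
    return tp + tn                          # 0.5 would correspond to a random estimate
```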
## IV Response of the turbulent boundary layer

### Mean flow and turbulence kinetic energy

Wall-normal profiles of the mean velocity and streamwise TKE (based on the hot-wire profile taken at \(x=2.4\delta\)) aid in explaining the effect of control on the TBL, and allow for a direct comparison to the work of Abbassi _et al._[1]. Fig. 6a presents the profiles for both the uncontrolled flow and the opposing, reinforcing and desynchronized control cases. It is evident that only in the logarithmic region a velocity deficit manifests itself for the control cases, in comparison to the uncontrolled flow. This is consistent with the jet injecting momentum in the wall-normal direction, thereby removing streamwise momentum from the grazing TBL flow [42, 48]. At \(x=2\delta\), the jet plume penetrates primarily within the logarithmic region (recall Fig. 15a and its discussion), while the mean velocity in the inner region has already recovered to the uncontrolled flow condition. Since the jet is always activated for the same fraction of time (\(50\,\%\)), the wall-normal momentum being injected into the boundary layer is, on average, equal. This explains the collapse of the profiles in Fig. 6a. The difference between control modes becomes apparent from the streamwise TKE, \(\overline{u^{2}}\), presented in Fig. 6b. All profiles collapse in the wake and show that the control influence is confined to the inner region. Near the upper edge of the logarithmic region, a hump of \(\overline{u^{2}}\) occurs for all control modes and increases when moving from the opposing, to the desynchronised, and finally to the reinforcing control case. This trend is reflective of the presence of an internal shear layer between the upper side of the jet plume and the grazing TBL flow [41]. The opposing control case reduced the velocity variance the most below \(y/\delta\approx 0.1\). This reduction is not only apparent in the logarithmic region, but persists down to the wall.

Figure 6: (a) Wall-normal profiles of mean streamwise velocity, \(\overline{u}/U_{\infty}\), and (b) streamwise TKE, \(\overline{u^{2}}/U_{\infty}^{2}\), for the uncontrolled case, as well as for the three control modes, at \(x=2\delta\).

### Streamwise velocity energy spectra

To analyse how the energy across all turbulent scales is changed as a result of control, premultiplied energy spectra are considered in a similar manner as in Fig. 2b. Fig. 7a displays the spectrogram \(f^{+}\phi_{uu,\text{des}}^{+}(f,y)\) for the desynchronized case with the black iso-contours, overlaid on a filled contour that represents the percentage difference in spectrograms between said case (\(\phi_{uu,\text{des}}\)) and the uncontrolled flow (\(\phi_{uu,\text{unc}}\)), following \[\Delta\phi_{uu}=\frac{\phi_{uu,\text{des}}-\phi_{uu,\text{unc}}}{\phi_{uu,\text{unc}}}\times 100. \tag{4}\] A region of remarkably higher energy is observed above \(y/\delta\approx 0.1\) and for all frequencies. This relates to the location where an increase in streamwise TKE was also observed in Fig. 6b (note that \(\overline{u^{2}}=\int\phi_{uu}\text{d}f\)). Given the nature of the jet actuator, not only is wall-normal momentum imparted to the flow, but the shear layer developing between the jet plume and the TBL flow also enhances the turbulence fluctuations. This increase in TKE is thus broadband in nature and is unavoidable with the current type of actuator flow. Below \(y/\delta\approx 0.1\), a slight decrease in energy is observed. The superposition effect that the jet actuator was suggested to have in § IV.1 also appears to be at play over a vast frequency band in proximity to the wall (\(0.1<f\delta/U_{\infty}<10\)). To highlight the changes in spectral energy that the turbulence in the boundary layer undergoes as a result of purely the control logic, and not of the presence of the jet flow itself, reactive control cases are compared to the desynchronised one.
Time-series of the streamwise velocity acquired using HWA at all 40 wall-normal locations in the boundary layer were conditioned on the positive-gradient zero-crossings of the estimated velocity signal, \(\widehat{u}(t)\), following \[\widetilde{u}(y,\tau)=\left\langle u(y,t+\tau)\ \middle|\ \left(\widehat{u}(t)=0\ \wedge\ \partial\widehat{u}/\partial t>0\right)\right\rangle, \tag{5}\] with \(\tau\) being the time coordinate of the conditional average and \(\tau=0\) corresponding to the positive-time-gradient zero-crossing. The present work considers the conditional average in a variable time interval (VTI) formulation [49]. This means that, given a signal \(\alpha(t)\) and a conditioning signal \(\beta(t)\) with conditioning points \(t_{i}\), the VTI approach only averages the portions of \(\alpha(t)\) that lie between neighbouring conditioning points: \[\widetilde{\alpha}(\tau)=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\left[\alpha(t_{i}+\tau)\ \middle|\ t_{i-1}<t_{i}+\tau<t_{i+1}\right], \tag{6}\] with \(N_{c}\) being the total number of conditioning points in \(\beta(t)\), \(t_{i}\) the current conditioning point, \(t_{i-1}\) the previous one and \(t_{i+1}\) the following one.

Figure 7: (a) Filled contours: percentage difference in the premultiplied energy spectrograms of the streamwise velocity, between the TBL affected by desynchronised control and the uncontrolled case. Black contours: spectrogram of the TBL flow subject to desynchronised control (contour levels at 0.4:0.4:2.0). Light blue contours: spectrogram of the uncontrolled TBL flow (contour levels at 0.4:0.4:2.0). (b) Percentage difference spectrograms between the flow subject to opposing control and the desynchronised case. (c) Percentage difference spectrograms between the flow subject to reinforcing control and the desynchronised case. Difference in the energy spectrum at \(y_{L}\) shown below the contour plots in (a,b,c) for the corresponding case. All spectrograms were acquired at \(x=2\delta\) and filtered with a bandwidth-moving filter of 25 % in width.

For the results shown in what follows, the temporal span of the conditional averages is confined to an interval of \(50\,\mathrm{ms}\) on either side of \(\tau=0\). The conditionally averaged velocity, \(\widetilde{u}(y,\tau)\), is shown in Fig. 8 as a contour of \(\widetilde{u}(y,\tau)/U_{\infty}\) for the uncontrolled, opposing and reinforcing control cases. The time coordinate \(\tau\) is non-dimensionalized using the factor \(U_{\infty}/\delta\), making it representative of the non-dimensional distance from the streamwise position of the actuator to the downstream position of the hot-wire probe (\(x=2\delta\)). Thus, the zero-crossing occurs at \(\tau U_{\infty}/\delta=(2\delta/U_{c})U_{\infty}/\delta\approx 3\). For lower convection velocities, such as at locations close to the wall, the time needed for the response to be measured is longer. Hence, the time-instant of the zero-crossing in \(\widetilde{u}(y,\tau)\) gradually shifts towards increasing values of \(\tau\) when approaching the wall. The uncontrolled case in Fig. 8a reports the baseline velocity fluctuations the controller will target. The effect of control becomes obvious by inspection of the conditional averages for the opposing and reinforcing cases. The former causes an overall reduction of more than \(60\,\%\) in the amplitude of the oscillation observed in the uncontrolled case, while the latter clearly amplifies it by approximately \(60\,\%\).
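The conditional averages above were computed with the VTI formulation of Eqs. (5)-(6); a minimal sketch of that averaging is given below. The zero-crossing detection, the window length and all names are our own illustrative choices, under the assumption of uniformly sampled signals.

```python
import numpy as np

def vti_conditional_average(alpha, beta, fs, half_window=0.05):
    """Variable-time-interval conditional average (Eqs. 5-6, sketch).
    alpha: signal to average; beta: conditioning signal;
    fs: sampling frequency [Hz]; half_window: half-span [s].
    A lagged sample only contributes while the shifted time stays
    between the neighbouring conditioning points."""
    # positive-gradient zero-crossings of the conditioning signal
    idx = np.where((beta[:-1] < 0) & (beta[1:] >= 0))[0] + 1
    half = int(half_window * fs)
    lags = np.arange(-half, half + 1)
    acc = np.zeros(lags.size)
    cnt = np.zeros(lags.size)
    for k, i in enumerate(idx):
        lo = idx[k - 1] if k > 0 else 0            # previous crossing
        hi = idx[k + 1] if k < idx.size - 1 else alpha.size - 1
        j = i + lags
        ok = (j > lo) & (j < hi) & (j >= 0) & (j < alpha.size)
        acc[ok] += alpha[j[ok]]
        cnt[ok] += 1
    return lags / fs, acc / np.maximum(cnt, 1)
```

Averaging with per-lag counts (rather than a fixed \(N_{c}\)) reflects that, in the VTI formulation, fewer samples survive the neighbour constraint at large \(|\tau|\).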
The amplification and attenuation are particularly visible in the bottom insets, showing the conditionally-averaged time-series at \(y_{L}\).

The conditionally averaged velocity fluctuations shown in Fig. 8 only report the fluctuations in streamwise velocity as a result of control, but fail to capture the global interaction of the actuator flow with the grazing TBL flow. This void can be filled by utilizing PIV velocity fields, which are here presented for a domain with dimensions of \(\approx 1.8\delta\times 0.8\delta\) (half of the FOV listed in the bottom line of Table 1). Processing was performed with the aid of LaVision DaVis 10.2 utilizing a multi-pass approach, leading to a final vector resolution of \(2.25\) vectors/mm. The acquisition was synchronized to the controller and was triggered with a certain delay, relative to the instant of an actuator on-command. This was repeated for a sequence of delays, allowing a visualization of the entrance of the jet plume into the grazing TBL flow. The fields shown in Fig. 9 display the percentage variation of the streamwise velocity component with respect to the uncontrolled flow for three temporal delays, \(\widetilde{\tau}\), relative to the actuator-on command. Results for the opposing control mode are shown in the top row, whereas the results corresponding to the reinforcing strategy are shown in the bottom one. The low-velocity (blue) region on the bottom of each contour plot corresponds to the jet plume entering the domain. At an early stage (\(\widetilde{\tau}=5\,\mathrm{ms}\)), a higher-than-average velocity is observed for the opposing case, which also persists at \(\widetilde{\tau}=10\,\mathrm{ms}\), suggesting that the controller successfully targets high-speed events. The opposite condition is, instead, measured in the reinforcing control scenario, where the controller is observed to intervene on low-speed events rather than high-speed ones. As more time elapses, at \(\widetilde{\tau}=20\,\mathrm{ms}\), the flow stabilizes to a condition where lower and higher streamwise velocities are observed for the opposing and reinforcing cases, respectively.

Figure 8: Conditionally averaged response, \(\Delta u/U_{\infty}(\tau,y)\), of the streamwise velocity, conditioned on the positive-time-gradient zero-crossings of the estimated signal \(\widehat{u}(t)\). Detail of response at \(y_{L}\) shown below the filled contour plot. (a) Uncontrolled flow, (b) opposing and (c) reinforcing control modes.

## V Effect of control on turbulent skin-friction drag

The principal goal of our control strategy is to investigate the change in turbulent skin-friction drag in relation to the actuator flow interacting with the grazing TBL. The dimensional form of the wall-shear stress, \(\tau_{w}=\mu\partial\overline{u}/\partial y|_{y=0}\), is analysed in terms of the skin-friction coefficient, \[C_{f}=\frac{\tau_{w}}{q_{\infty}\mathcal{S}}=2\frac{U_{\tau}^{2}}{U_{\infty}^{2}}. \tag{7}\] Here, \(q_{\infty}=\rho_{\infty}U_{\infty}^{2}/2\) is the free-stream dynamic pressure and \(\mathcal{S}\) a reference surface, which is taken equal to unity. However, due to heat transfer effects from the hot-wire to the tunnel's surface, the HWA technique cannot reliably capture the velocity in the linear region [50, 51].
As such, traditional estimates of \(C_{f}\) from hot-wire measurements typically rely on boundary layer scaling laws and mathematical fits to the streamwise velocity profile (_e.g.,_ the composite fit [36] or the well-known Clauser fit procedures). With a boundary layer strongly affected by control and additional momentum injection from the jet actuator, the assumptions behind these methods are violated. Therefore, a direct measurement of the velocity gradient at the wall, based on PTV, is pursued in our current work.

### Skin friction inference from PTV

The PTV technique was applied to a relatively small FOV of size \(0.33\delta\times 0.28\delta\) (recall Table 1). The acquisition was not time-resolved, thus all PTV tracks consist of only two points. A PTV-based approach was chosen because it yields a much finer resolution of velocity vectors than a PIV-based technique: with the given pixel resolution and typical interrogation window sizes, only very few valid vectors would be obtained from PIV in the linear region (\(y^{+}<5\)) for our Reynolds number of \(Re_{\tau}\approx 2\,240\). Given the linear dependence \(\overline{u}^{+}=y^{+}\) for the velocity profile in this region, only two measurement points would theoretically be required to compute the gradient, \(\partial\overline{u}/\partial y\). However, more information is required to increase the robustness of the measurement-based estimation, because of stochastic noise and uncertainty. The following six post-processing steps were implemented to infer the skin-friction data from the raw images.

Figure 9: Phase-averaged field of the percentage variation of streamwise velocity component \(u\) for opposing (top row) and reinforcing (bottom row) control cases with respect to the uncontrolled flow. Phase-averaged acquisitions at \(\widetilde{\tau}=5\,\mathrm{ms}\), \(\widetilde{\tau}=10\,\mathrm{ms}\), and \(\widetilde{\tau}=20\,\mathrm{ms}\). All fields were filtered with a Gaussian filter having a kernel width of \(0.25\delta\times 0.25\delta\) and \(\sigma=0.1\delta\). Dashed lines indicate the position of the hot-wire profile. Shown in red is the position of the jet exit slit.

1. **Particle track computation.** 2D Lagrangian particle tracks are computed with the aid of LaVision DaVis, version 10.2. Only a small subset of the original FOV is retained, which encompasses the wall and a small region above and below it (\(\Delta y=0.05\delta\) and the full streamwise extent). 2. **Wall identification.** Reflections of the particles in the flow result in mirrored particle tracks "below" the wall. This reflection allows for a precise identification of the wall. The ensemble-averaged mean velocity field is computed through traditional PIV processing on a subset of image pairs (500 out of the 2 000 in total), after which the wall position is found by utilizing the wall-mirrored field [52; 53]. That is, a parabola fitted to points in the linear region (both above and below the reflection line) yields \(\overline{u}=f(y)\). Its minimum velocity point is taken to be the \(y\)-position of the wall, denoted as \(y_{w}\). This procedure is performed over 330 streamwise positions spanning the entire FOV (corresponding to the vector spacing of the coarse PIV processing), resulting in a functional form for the wall position, \(y_{w}(x)\). 3. **Particle track correction.** Each \(y\)-coordinate from the particle tracks found in step 1 is corrected for the wall inclination and position.
This correction is performed based on each \(x\) position of the particle track, for which the wall position \(y_{w}(x)\) is known. After this correction, the wall-normal profiles of streamwise velocity are symmetric around \(y=0\), as seen in Fig. 10(a). 4. **Binning definition.** All corrected tracks are binned spatially. Streamwise-elongated bins of size \(128\times 1\) pix\({}^{2}\) are initialized. Given the pixel resolution of the images, this equates to a size of \(1.08\times 0.008\,\mathrm{mm}^{2}\) (\(34.7l^{*}\times 0.27l^{*}\)). Fig. 10(b) shows the spatial distribution of bins, with white edges, for only a portion of the FOV. Note that the FOV spans 20 bins in \(x\) (given the 2 650 pixels in the streamwise direction, and the 128 pixel bin size). The degree of elongation is only feasible if the wall is parallel to the major axis of the bin, which was ensured through steps 2 and 3. 5. **Binning procedure.** Each individual particle track is collected in the bins defined in step 4 according to the coordinates of their mid-points. 6. **Velocity profile generation.** Particle tracks in each bin are averaged to compute the mean streamwise velocity per bin. Knowing the vertical bin spacing, the gradient \(\partial\overline{u}/\partial y\) can be determined to infer \(C_{f}\).

The greater the number of particle tracks, the more statistically reliable the estimation of the mean \(C_{f}\) becomes. A convergence analysis was performed by considering one single bin at \(y^{+}\approx 15\), where the greatest fluctuations are expected to occur. For convergence of the mean streamwise velocity \(\overline{u}\), it was found that at least 1 500 image pairs are required for an estimate within 0.8 % of its final value (determined from all 2 000 image pairs). For each of the vertical profiles (each corresponding to one column of bins), 18 velocity vectors reside within the range \(y^{+}<5\). Fig. 10(a) displays 9 wall-normal profiles of \(\overline{u}\). The (corrected) wall is positioned at \(y=0\) and is shown with the blue dashed line. Due to some noise in the particle images in proximity to the wall, the points that were selected for determining the gradient \(\partial\overline{u}/\partial y\) were within the range \(2<y^{+}<4.5\) (a buffer of \(0.5l^{*}\) is taken between the linear and the buffer regions). This results in 11 points (black markers in Fig. 10(a)) being available for fitting the linear relation, shown in light blue. To enforce the no-slip condition at the wall, a physics-informed constraint is imposed on the fitting procedure by further including the point \((\overline{u},y)=(0,0)\); a minimal sketch of this fitting step is given below. The final value of the wall-shear stress is taken as the average of the individual gradients computed from each of the 20 wall-normal profiles in one FOV, thus assuming streamwise-invariance. The procedure thus far allows for an estimation of \(C_{f}\) for each FOV of the PTV campaign and thus for the three FOVs centered at \(x/\delta=1.0\), \(x/\delta=2.0\) and \(x/\delta=3.0\) (recall Table 1). At the same time, four control modes are considered (uncontrolled flow, and desynchronized, opposing and reinforcing control). Fig. 11(a) displays the percentage difference between the \(C_{f}\) of the desynchronized case and that of the other three cases (thus \(\Delta C_{f}=100(C_{f,i}-C_{f,\mathrm{des}})/C_{f,\mathrm{des}}\), with \(i\) being the control mode in consideration and \(C_{f,\mathrm{des}}\) the value for the desynchronized case).
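The constrained fit of step 6 reduces to a least-squares line through the origin. The sketch below is illustrative only: the profile, fluid properties and all numerical values are assumptions, not the measured data.

```python
import numpy as np

def cf_from_sublayer(y, u_mean, mu, rho, u_inf):
    """Skin-friction coefficient from a binned viscous-sublayer
    profile. Fitting a line through the origin enforces the
    no-slip condition u(0) = 0; the slope is du/dy at the wall."""
    slope = np.sum(y * u_mean) / np.sum(y**2)   # least squares, zero intercept
    tau_w = mu * slope                           # wall-shear stress
    return tau_w / (0.5 * rho * u_inf**2)

# Illustrative near-wall profile (air assumed): 11 points, 2 < y+ < 4.5
rng = np.random.default_rng(1)
y = np.linspace(0.09e-3, 0.19e-3, 11)            # wall distance [m]
u = 8500.0 * y + 0.05 * rng.standard_normal(11)  # near-linear sublayer [m/s]
print(f"C_f = {cf_from_sublayer(y, u, mu=1.8e-5, rho=1.2, u_inf=10.0):.2e}")
# -> on the order of 2.6e-3 for these made-up values
```

Averaging the slopes of the 20 profiles in a FOV, as done in the paper, then amounts to applying this fit once per column of bins.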
The desynchronized control is chosen as the reference case following the same reasoning that was presented for the spectrograms in § IV.2. Relative to the uncontrolled flow, opposing control shows a reduction of 7-11 % in \(C_{f}\), whereas the reinforcing case reduces friction by 3-7 %, depending on the streamwise location. All control modes appear to reduce friction drag with respect to the uncontrolled flow, which is mainly the consequence of the jet injecting wall-normal momentum, which reduces the streamwise momentum of the grazing TBL flow. Reinforcing control has a comparable effect on the TBL to the desynchronised mode in this regard, but it is evident that the opposing mode reduces \(C_{f}\) by a further 2-3 % relative to the desynchronized case at all streamwise locations. Fig. 11(b) displays the absolute skin-friction coefficient. The displayed error bars were computed by assuming that each vertical profile of the streamwise velocity from binned PTV tracks generates a statistically independent result, and thus indicate the estimation uncertainty. This uncertainty can be attributed to two main factors: (1) the uncertainty in the convergence of the average streamwise velocity from the PTV measurements, and (2) the uncertainty in the linear fitting procedure described earlier. The former can be computed by considering the number of tracks in each bin and can be defined as \(\varepsilon=\sigma_{u}/\sqrt{N_{t}}\), where \(\sigma_{u}\) is the standard deviation of the streamwise velocity samples in the considered bin and \(N_{t}\) the number of tracks. The latter source of error stems from the linear fitting procedure at each streamwise location and is defined as the average RMS residual across all fitted curves. The skin friction found from the empirical Coles-Fernholz relation (\(C_{f}=2\left[\frac{1}{\kappa}\ln Re_{\theta}+C\right]^{-2}\), with \(\kappa=0.38\), \(C=3.7\) and \(Re_{\theta}=6\,830\), yielding \(C_{f}\approx 2.8\times 10^{-3}\)) is also plotted in Fig. 11b. Our uncontrolled flow experiment underestimates \(C_{f}\) by roughly \(15\,\%\). This discrepancy is relatively minor, given that the Coles-Fernholz relation is valid for fully developed and smooth-wall TBL flow, whereas our experiment may include a residual signature from the upstream trip [54] and the jet exit slit embedded within the wall. As such, our uncontrolled and desynchronized cases serve as the actual baseline scenarios.

### Turbulent skin friction integrals

Point estimates of the skin friction alone do not capture the mechanisms behind skin-friction generation. Analysis of PIV data on a larger field of view allows for a computation of the TBL integral measures and for relating them to the changes in skin friction for the different control modes. The TKE production term can be informative for investigating the relation between the mean \(C_{f}\) and the velocity fluctuations in the TBL flow. TKE production is defined as the product of the Reynolds stress component \(R_{xy}=\overline{u^{\prime}v^{\prime}}\) and the wall-normal gradient of \(\overline{u}\)[55]: \[P\left(y\right)=-\overline{u^{\prime}v^{\prime}}\frac{\partial\overline{u}}{\partial y}. \tag{8}\]

Figure 11: (a) Percentage difference between the skin-friction coefficients of the uncontrolled, opposing and reinforcing cases with respect to the desynchronized one. (b) Absolute skin-friction coefficient for the four control modes with error bars showing the uncertainty. Dashed line plotted for the value of \(C_{f}\) estimated from the Coles-Fernholz relation.
Figure 10: (a) PTV-based profiles of the streamwise velocity \(\overline{u}\) with increasing wall-normal distance \(y\) for the uncontrolled flow at the FOV centered at \(x/\delta=2.0\). Profiles are sequentially separated in the horizontal direction by \(\Delta\overline{u}=2\,\)m/s for ease of inspection. (b) Contour of \(\overline{u}\) in the linear region of the turbulent boundary layer, both above and below the wall (positioned at \(y=0\)). Overlaid are white edges that demarcate the bins of \(128\times 1\) pix\({}^{2}\).

A bulk TKE production following \(\widetilde{P}=\int P(y)\mathrm{d}y\) is an indicator of the total turbulent shear stress [56, 31]. Essential to the computation of \(P\) is the Reynolds shear stress \(R_{xy}\). A comparison of \(R_{xy}\) is shown in Fig. 12, for the uncontrolled flow and the desynchronised control case. While for the uncontrolled flow the Reynolds stress monotonically decreases with increasing \(y\) (and exhibits streamwise-invariant behaviour), the desynchronised case is associated with a band of high-magnitude \(R_{xy}\) around \(y/\delta\approx 0.35\) at \(x/\delta=2\). A large increase in the magnitude of \(\overline{u^{\prime}v^{\prime}}\) is visible where the jet enters the domain. This indicates that the internal shear layer between the jet flow and the grazing TBL flow remains distinctly present all the way up to the downstream end of the FOV. Fig. 13a displays the premultiplied TKE production term as a function of wall-normal distance, for the four control modes and for \(x=2\delta\). Streamwise averaging over the interval \(1.9<x/\delta<2.1\) was performed to attenuate measurement noise. The production curve for the uncontrolled case rises to a maximum around \(y/\delta\approx 0.01\), before plateauing in the logarithmic region and further decreasing into the wake; this is consistent with the literature [55, 56]. The TKE production curves for all other cases show a lower magnitude, up to \(y/\delta\approx 0.2\). The region of strong \(R_{xy}\) on the upper side of the jet flow-trajectory (seen in Fig. 12b) is responsible for the drastic increase in \(P\) beyond that height. In order to assess changes in the bulk TKE production, integrals of the curves shown in Fig. 13a are considered. Here, the integration is split into two different domains. A first contribution comes from the integral up to \(y/\delta=0.2\), where the control cases show a decrease in TKE production. Then, the remaining part of the curve at \(y/\delta>0.2\) is integrated up to the upper edge of the FOV, near \(y/\delta\approx 0.8\). Fig. 13b displays the two portions of the bulk TKE production as a percentage change with respect to the uncontrolled flow. The first integral shows a trend that resembles the behaviour observed in Fig. 11. Namely, the uncontrolled case shows the highest value, followed by the reinforcing, desynchronized and opposing control modes. The integral in the outer region of the boundary layer, however, shows a drastic increase of the control modes' production term with respect to the uncontrolled flow, which is again ascribed to the region of strong \(R_{xy}\) in Fig. 12b. Fukagata _et al._[32] derived the so-called _FIK identity_ to decompose the turbulent skin friction into three components, each of which is responsible for a different mechanism of skin friction generation [57, 32]. The first component (\(C_{f,1}\)) is dependent on the displacement thickness of the boundary layer and is known as the "laminar component".
For a TBL flow, \(C_{f,1}\) only accounts for a marginal fraction of the total friction coefficient and is therefore omitted in this work. Likewise, a third component (\(C_{f,3}\)) can be neglected for a fully developed TBL flow [31]. Although we still observe the influence of the actuator-induced flow at \(x/\delta=2\), the computation of \(C_{f,3}\) from PIV data reveals an almost null contribution. As such, the second component of the FIK identity, \(C_{f,2}\), is the one considered for relating the turbulence activity to the wall-shear stress generation. Component \(C_{f,2}\) can be computed according to the following integral relation, \[C_{f,2}=-\frac{4}{U_{\infty}^{2}}\int_{0}^{1}\left[\overline{u^{\prime}v^{\prime}}\left(1-\frac{y}{\delta}\right)\right]\mathrm{d}\left(\frac{y}{\delta}\right). \tag{9}\] The integrand of \(C_{f,2}\) is to some degree similar to the premultiplied TKE production term, in that \(R_{xy}\) is scaled with a wall-normal weighting. However, the production term is premultiplied by \(y\), while the \(C_{f,2}\) integrand is weighted by \(1-y/\delta\), thus being multiplied by a factor that decreases with wall-normal distance. Fig. 14a displays the integrand of \(C_{f,2}\) as a function of \(y/\delta\) for the four control modes. A steady rise is observed up to \(y/\delta\approx 0.01\), followed by a plateauing region. In the wake of the boundary layer, again a sharp peak is reminiscent of the Reynolds shear stress caused by the jet plume. Fig. 14b shows Eq. (9) split up in two contributions: again, up to \(y/\delta=0.2\) (blue) and from \(0.2\) to the edge of the FOV (grey). Also for this metric, the observation can be made that the opposing control law is the most effective in quenching shear stress (at least when integrated up to \(y/\delta=0.2\)), following a similar trend as the one reported in Fig. 13.

The analysis of turbulent skin-friction drag integrals assists in identifying where important mechanisms occur within the TBL that contribute to the generation of skin-friction drag. Both the integrated TKE production (Fig. 13) and \(C_{f,2}\) (Fig. 14) reveal how the spatio-temporal dynamics of the TBL are altered as a function of \(y\). The full integrals over the available wall-normal range suggest an increase in skin-friction drag, but this contradicts the measured skin-friction coefficients presented in § V.1. This discrepancy can be explained by analysing the trends of the curves in the wake of the TBL flow. By inspection of the TKE profiles in Fig. 6b, as well as the distribution of \(R_{xy}\) in Fig. 12b, it is evident how the jet plume created by the actuator is responsible for this sudden rise. Instabilities induced by the break-up of the shear layer of the jet in the TBL are superimposed on top of the naturally-occurring Reynolds stresses, thus biasing the integral values of \(\widetilde{P}\) and \(C_{f,2}\). Still, the reduction in the integrand curves observed in the logarithmic region and below shows that the main wall-shear-producing dynamics are, in fact, suppressed below the wall-normal coordinate where \(R_{xy}\) suddenly rises.

Figure 12: Contours of the Reynolds stress \(R_{xy}\) obtained from PIV data for (a) the uncontrolled flow, and (b) the desynchronised control case; the jet trajectory is duplicated from Fig. 15a. Shown in red is the position of the jet exit slit.

Figure 13: (a) Plot of normalized, premultiplied TKE production \(y\,\overline{u^{\prime}v^{\prime}}(\partial\overline{u}/\partial y)/U_{\infty}^{3}\), as a function of wall-normal distance at \(x=2\delta\) and the four control modes. Wall-normal profiles of \(P\) were computed by averaging PIV data in the streamwise direction in the interval \(1.9<x/\delta<2.1\). (b) Bar chart displaying the percentage difference of the integrated TKE production term with respect to the uncontrolled flow for \(0\leq y/\delta\leq 0.2\) (blue) and for \(0.2<y/\delta\leq 0.8\) (grey).

Figure 14: (a) Integrand of the \(C_{f,2}\) expression of Eq. (9) as a function of wall-normal distance, for \(x=2\delta\) and the four control modes. (b) Bar chart displaying the percentage difference of the integrated \(\partial C_{f,2}/\partial y\) term with respect to the uncontrolled flow, for \(0\leq y/\delta\leq 0.2\) (blue bars) and for \(0.2<y/\delta\leq 0.8\) (grey bars).
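To make the two bulk diagnostics concrete, the sketch below evaluates the split TKE-production integral and the \(C_{f,2}\) relation of Eq. (9) from profiles sampled on a common \(y/\delta\) grid; the profiles are assumed inputs (e.g. extracted from PIV), and all names and values are illustrative.

```python
import numpy as np

def bulk_diagnostics(y_over_d, R_xy, dUdy, u_inf, split=0.2):
    """Inner/outer split of the bulk TKE production, P = -u'v' dU/dy,
    and of the FIK component C_f2 of Eq. (9). R_xy = mean(u'v')."""
    P = -R_xy * dUdy
    cf2_integrand = -4.0 / u_inf**2 * R_xy * (1.0 - y_over_d)
    inner = y_over_d <= split

    def split_integral(f):
        return (np.trapz(f[inner], y_over_d[inner]),
                np.trapz(f[~inner], y_over_d[~inner]))

    return split_integral(P), split_integral(cf2_integrand)

# Usage with synthetic profiles on 0 < y/delta <= 0.8:
yd = np.linspace(1e-3, 0.8, 400)
R_xy = -1.0e-3 * (1 - yd)            # made-up Reynolds stress [m^2/s^2]
dUdy = 1.0 / (0.41 * yd)             # log-law-like gradient [1/s]
(P_in, P_out), (c2_in, c2_out) = bulk_diagnostics(yd, R_xy, dUdy, u_inf=10.0)
```

Comparing the inner parts (\(y/\delta\leq 0.2\)) across control modes reproduces the trend of Figs. 13b and 14b, while the outer parts carry the jet-plume bias discussed above.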
## VI Conclusions and outlook

Successful experimental real-time targeting of large-scale motions has been accomplished by means of a control system comprising a surface-mounted hot-film as the input sensor and a wall-embedded blowing jet actuator, located downstream of the sensing element. An opposition control logic was implemented for which the controller activated the actuator at regions of streamwise momentum surplus. The inverse control law, reinforcing control, was also implemented, where the jet fired into regions of momentum deficit with the goal of enhancing turbulence instead of suppressing it. The response of the TBL to control in terms of first-order velocity statistics follows an expected trend as a result of the wall-normal momentum injection imparted by the actuator, with the mean velocity profile experiencing a downward shift when control is applied with respect to uncontrolled flow conditions. Additionally, the inner peak observed in the velocity variance decreases in intensity by \(\sim 12\%\). The analysis of energy spectrograms of streamwise velocity reveals that, for the opposing and reinforcing control cases, a reduction of \(40\%\) and an increase of \(50\%\) in energy, respectively, occur in the geometric center of the logarithmic region. The skin-friction coefficient was directly inferred from PTV measurements. It is observed that all control modes in consideration cause a reduction in turbulent skin-friction, with opposing control reducing skin-friction by 2-3% with respect to the desynchronised case and by 7-11% with respect to the uncontrolled one.

The principal objective of this work was to analyze skin-friction drag-generating mechanisms by considering statistical integral measures. The bulk turbulence kinetic energy production term decreases up to a wall-normal location where the actuator-induced fluctuations are strong. The sharp increase in the Reynolds shear stresses \(R_{xy}\) induces a bias in this integral measure, as well as in the second component of the skin friction following the FIK decomposition, \(C_{f,2}\). The applicability of the FIK decomposition relies on the assumption of zero-pressure-gradient fully developed turbulence, which might be violated in proximity to sites where flow control is performed by means of fluidic actuators. However, when focusing on the region around \(x=2\delta\) downstream of the actuator, the Reynolds shear stresses show streamwise-invariant behavior in the logarithmic region, where the LSMs were targeted. When evaluating \(C_{f,2}\) in this region, an identical trend in the change of the skin friction was found as compared to the direct PTV-based measurements.
This opens up an avenue for using off-the-wall flow field information downstream of control for the purpose of optimizing a drag-reducing control scheme. Still, the observation that statistical integrands directly reflect changes in the PTV-inferred skin-friction coefficient supports the conclusion that the controller presented in this work is able to alter skin-friction generating mechanisms not only in the logarithmic region, but also in the near-wall region, where small viscous scales are energetically dominant. To conclude, this study demonstrates the effectiveness of opposition control applied to the manipulation of large-scale motions in a turbulent boundary layer. Further improvements to the control logic are currently being considered by the authors in terms of a closed-loop control architecture and adaptive control strategies, with the goal of optimizing its performance.

## Appendix A Characterization of the wall-normal blowing jet actuator

Since the aim of control is to manipulate large-scale structures in the logarithmic region, where LSMs are most energetic, the actuator should have sufficient control authority there. Thus, the actuator jet in crossflow needs to trail within this region to achieve a proper interaction. The jet flow may not reach a sufficient height when the jet exit velocity, \(v_{\rm j}\), is too low, while if \(v_{\rm j}\) is too high the jet's trajectory may penetrate the edge of the boundary layer, thereby altering the free-stream flow. To study how the jet trajectory depends on its exit velocity, a characterization experiment was conducted. The wall-normal jet flow was operated in a continuous on-state at several velocity ratios, \(r=v_{\rm j}/U_{\infty}\). The mean velocity field was inferred from 2D2C PIV performed with 2000 image pairs, and over a FOV spanning roughly \(1.8\delta\) in \(x\) and \(0.35\delta\) in \(y\). The trajectory of the jet is taken as the streamline emanating from the center of the jet exit plane, as shown in Fig. 15a for several velocity ratios. It is evident that the two highest velocity ratios of \(r=0.5\) and \(0.6\) result in trajectories penetrating the upper edge of the logarithmic region (here indicated with the dashed line at \(y/\delta=0.2\)) within \(x/\delta<0.5\). As expected, with a lower velocity ratio of \(r=0.4\), the jet trajectory remains within the logarithmic region for a prolonged distance (\(\sim 1\delta\)) and is therefore adopted in the current study. The momentum coefficient for \(r=0.4\) is \(C_{\mu}=(\rho_{\rm j}v_{\rm j}^{2}l_{\rm j})/(\rho_{\infty}U_{\infty}^{2}\delta)=0.75\), with \(l_{\rm j}=0.15\,\)mm being the length of the jet exit-slit. Velocity ratios of \(r\leq 0.3\) cause the plume to remain within the logarithmic region for an even longer streamwise extent. However, at these ratios the feed system, including a pressure regulator, was not able to produce a stationary flow across the slit, as relatively large velocity fluctuations were observed over time. Latency is an essential consideration in the design of an opposition control system, both for timing purposes and for the frequency response of the jet actuator. The hardware latency, associated with the time it takes for the compressed air to accelerate through the valve, pneumatic tubing and flow conditioners, was quantified with an experiment: the valve was operated at a constant frequency \(f_{\mathrm{j}}\), with a \(50\,\%\) duty cycle (for a total duration of 2048 periods).
This frequency was based on a statistical time-scale of the LSMs, following \(f_{\mathrm{j}}=U_{c}/(2l_{\mathrm{LSM}})\approx 12\,\mathrm{Hz}\), where \(U_{c}\) is the convection velocity taken at the geometric centre of the logarithmic region, \(y_{L}^{+}=3.9\sqrt{Re_{\tau}}\), and \(l_{\mathrm{LSM}}=6\delta\) is the statistical length of a high- or low-speed region [20, 58]. A time-series of the jet exit velocity was measured with HWA as described in § II.2, but with the difference being the use of a Dantec 55P11 probe. This probe was placed at \(y=3\,\mathrm{mm}\) above the centre of the jet exit slit. Phase-averaged responses of the jet exit velocity (with \(t=0\) being the instant of the valve on-command) over all periods are presented in Fig. 15b. The velocity sharply rises approximately \(1\,\mathrm{ms}\) after the on-command, and overshoots its steady-state value after roughly \(3\,\mathrm{ms}\). Subsequently, steady-state is reached at roughly \(6\,\mathrm{ms}\). When the valve receives the off-command, the shut-down phase lasts for approximately \(10\,\mathrm{ms}\) before the exit velocity returns to zero. It was confirmed that shortening the period of actuation did not alter the start-up and shut-down transients. Hence, the maximum frequency for which an on- and off-state is reached is constrained by a \(6\,\mathrm{ms}\) start-up time and a \(10\,\mathrm{ms}\) shut-down time; this yields a frequency response of \(f_{\mathrm{act}}\approx 63\,\mathrm{Hz}\).

Figure 15: (a) Trajectories of the wall-normal jet actuator flow within the grazing TBL flow, for three different velocity ratios. Lower and upper bounds of the logarithmic region are also indicated. A filled contour in the background shows the magnitude of in-plane velocity for the \(r=0.4\) case; the jet exit slit is indicated with a red line. (b) Phase-averaged jet exit velocity over 2048 on/off cycles. (c) Periodic valve command over one period of \(T_{j}=1/f_{j}\approx 83\,\mathrm{ms}\).

## Appendix B System identification procedure

The Linear Coherence Spectrum (LCS) evaluates the stochastic degree of coupling between the voltage fluctuations of the wall-mounted hot film, \(e(t)\) [the input], and the streamwise velocity fluctuations within the logarithmic region, \(u(t)\) [the output], as a function of the streamwise separation distance \(s\). The LCS is defined as [59], \[\gamma_{L}^{2}(f,s)=\frac{|\langle E(f)U^{*}(f,s)\rangle|^{2}}{\langle|E(f)|^{2}\rangle\langle|U(f,s)|^{2}\rangle}, \tag{10}\] where \(|\cdot|\) denotes the modulus and \(\langle\cdot\rangle\) ensemble averaging. Here \(E(f)\) and \(U(f,s)\) are the temporal FFTs of the input and output signals, respectively. The coherence is bounded by \(0\) (no coherence) and \(1\) (perfectly coherent) and is presented in Fig. 16a as a function of \(f\delta/U_{\infty}\) and separation distance, \(s/\delta\). With an increase in \(s\), the coherence decays only marginally and its maximum value at low frequencies still remains at a level beyond \(0.35\) at the most downstream position. Fig. 16b shows the LCS specifically for \(s=2.4\delta\), which corresponds to the sensor-actuator spacing that was implemented (the reasoning for this is provided in § III.3). It reveals an initial trend of coherence that is nearly constant for low frequencies up to \(f\delta/U_{\infty}\approx 0.1\), with \(\gamma_{L}^{2}\approx 0.3\), which proves to be a sufficient coherence magnitude for an opposition control scheme acting on the large-scale energy (in terms of its binary accuracy, see § III.4).
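As a sketch of how Eq. (10) and the transfer kernel of the next paragraph can be estimated in practice, the snippet below uses Welch-averaged spectra from SciPy; segment lengths and names are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.signal import csd, welch

def lcs_and_kernel(e, u, fs, nperseg=4096):
    """Linear coherence spectrum and H1-type transfer kernel between
    input e(t) and output u(t) from Welch-averaged spectra."""
    f, S_eu = csd(e, u, fs=fs, nperseg=nperseg)  # cross-spectrum (SciPy: <E* U>)
    _, S_ee = welch(e, fs=fs, nperseg=nperseg)   # <|E|^2>
    _, S_uu = welch(u, fs=fs, nperseg=nperseg)   # <|U|^2>
    gamma2 = np.abs(S_eu)**2 / (S_ee * S_uu)     # Eq. (10)
    H_L = S_eu / S_ee                            # complex gain and phase
    return f, gamma2, H_L
```

The gain and phase of `H_L` correspond to the kernel magnitude and \(\phi_{H}\) given below; multiplying the Fourier-transformed input by the kernel yields the spectral-domain estimate \(\widehat{U}\).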
Beyond \(f\delta/U_{\infty}>0.1\) the coherence drops sharply, which renders it impossible to actuate upon the turbulence scales within the logarithmic region that reside in that frequency range. An input-output relation can be inferred from a calibration experiment in order to relate the input's voltage fluctuations to the velocity fluctuations at the downstream target point (see Fig. 3), allowing for an LSE of the latter during real-time control. Given the presence of significant coherence, the linear transfer kernel, \(H_{L}(f)\), relates an estimate of the output (denoted with a hat) to the input signal in the frequency domain, following \[\widehat{U}(f,s)=H_{L}(f,s)E(f). \tag{11}\] The complex-valued kernel has a frequency-dependent gain and phase, given by \[|H_{L}(f,s)|=\frac{|\langle E(f)U^{*}(f,s)\rangle|}{\langle|E(f)|^{2}\rangle},\text{ and} \tag{12}\] \[\phi_{H}(f,s)=\arctan\left\{\frac{\mathcal{I}\left[\langle E(f)U^{*}(f,s)\rangle\right]}{\mathcal{R}\left[\langle E(f)U^{*}(f,s)\rangle\right]}\right\}, \tag{13}\] where \(\langle E(f)U^{*}(f,s)\rangle\) is the input-output cross-spectrum.

## Acknowledgments

We wish to gratefully acknowledge the Department of Flow Physics & Technology of the Faculty of Aerospace Engineering at the Delft University of Technology, for financial support in establishing the experimental setup. We would also like to give special thanks to Stefan Bernardy, Peter Duyndam, Dennis Bruikman and Frits Donker Duyvis for their technical assistance.
2309.15380
Investigating excited $Ω_c$ states from pentaquark perspective
Inspired by the recent observation of new $\Omega_c^0$ states by the LHCb Collaboration, we explore the excited $\Omega_{c}$ states from the pentaquark perspective in the quark delocalization color screening model. Our results indicate that the $\Omega_c(3185)$ can be well interpreted as a molecular $\Xi D$ predominated resonance state with $J^P=1/2^-$. The $\Omega_c(3120)$ can also be interpreted as a molecular $\Xi_c^* \bar{K}$ state with $J^P=3/2^-$ and a new molecular state $\Xi^*_c \bar{K}^*$ with $J^P=5/2^-$ and a mass of 3527 MeV is predicted, which is worth searching in the future. Other reported $\Omega_c$ states cannot be well described in the framework of pentaquark systems in present work. The three-quark excited state, or the unquenched picture may be a good explanation, which is worth further exploration.
Ye Yan, Xiaohuang Hu, Hongxia Huang, Jialun Ping
2023-09-27T03:17:03Z
http://arxiv.org/abs/2309.15380v1
# Investigating excited \(\Omega_{c}\) states from pentaquark perspective

###### Abstract

Inspired by the recent observation of new \(\Omega_{c}^{0}\) states by the LHCb Collaboration, we explore the excited \(\Omega_{c}\) states from the pentaquark perspective in the quark delocalization color screening model. Our results indicate that the \(\Omega_{c}(3185)\) can be well interpreted as a molecular \(\Xi D\) predominated resonance state with \(J^{P}=1/2^{-}\). The \(\Omega_{c}(3120)\) can also be interpreted as a molecular \(\Xi_{c}^{*}\bar{K}\) state with \(J^{P}=3/2^{-}\), and a new molecular \(\Xi_{c}^{*}\bar{K}^{*}\) state with \(J^{P}=5/2^{-}\) and a mass of \(3527\) MeV is predicted, which is worth searching for in the future. Other reported \(\Omega_{c}\) states cannot be well described in the framework of pentaquark systems in the present work. The three-quark excited state, or the unquenched picture, may be a good explanation, which is worth further exploration.

## I Introduction

In the last few decades, significant experimental progress has been made in the sector of heavy baryons. Many heavy baryons have been reported, such as the \(\Lambda_{c}\) and \(\Sigma_{c}\) family [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14], the \(\Xi_{c}\) family [15; 16; 17; 18; 19; 20; 21; 22; 23] and the \(\Omega_{c}\) family [24; 25; 26; 27; 28]. These observations have stimulated extensive interest in understanding the structures of these baryons. For one thing, verifying these heavy baryons could deepen our understanding of the non-perturbative behavior of quantum chromodynamics (QCD) [29; 30; 31; 32; 33; 34; 35; 36]. For another, with the appearance of heavy baryons that are difficult to interpret simply as traditional three-quark baryons, the study of multi-quark explanations has become a non-negligible subject. Among them, the excited \(\Omega_{c}\) baryons were considerably enriched by the LHCb Collaboration in 2017 [28]. Five narrow \(\Omega_{c}^{0}\) states were observed in the \(\Xi_{c}^{+}K^{-}\) invariant mass spectrum, which are the \(\Omega_{c}^{0}(3000)\), \(\Omega_{c}^{0}(3050)\), \(\Omega_{c}^{0}(3065)\), \(\Omega_{c}^{0}(3090)\) and \(\Omega_{c}^{0}(3120)\). The narrow widths of these states, along with their unknown quantum numbers and structures, have attracted broad interest in theoretical work. A classical way to describe these \(\Omega_{c}\) baryons is to consider them as conventional three-quark excitations; another way is to treat them as multi-quark states. On the basis of the three-quark configuration, \(\Omega_{c}\) states have been studied in the framework of the Lattice QCD [37; 38], the QCD sum rules [39; 40; 41; 42; 43], the light cone QCD sum rules [44; 45], the heavy hadron chiral perturbation theory [46], the Regge phenomenology [47; 48], the chiral quark model [49], the constituent quark model [50; 51; 52], the quark-diquark model [53; 54], the quark pair creation model [55], the \({}^{3}P_{0}\) model [56; 57], the chiral quark-soliton model [58; 59], the holographic model [60], the string model [61], the harmonic oscillator based model [62], the light-front quark model [63], the relativized potential quark model [64], the non-relativistic potential model [86], the relativistic flux tube model [92] and other quark models [67; 68].
On the other hand, the pentaquark interpretation of \(\Omega_{c}\) states has been investigated in the framework of the QCD sum rules [69; 70], the chiral quark model [49; 71], the constituent quark model [72], the diquark-diquark-antiquark model [73], the heavy-quark spin symmetry model [74], the one boson exchange model [75], the vector meson exchange model [76; 77], the meson-baryon interaction model [78], the chiral quark-soliton model [79], the extended local hidden gauge approach [80; 81], the Bethe-Salpeter formalism [82], the effective Lagrangian approach [83] and the quasipotential Bethe-Salpeter equation approach [84]. Very recently, two new excited states, the \(\Omega_{c}^{0}(3185)\) and \(\Omega_{c}^{0}(3327)\), were observed in the \(\Xi_{c}^{+}K^{-}\) invariant-mass spectrum by the LHCb Collaboration [85]. Besides, the five narrow \(\Omega_{c}\) states obtained before [28] were also confirmed. The measured masses and widths of the two newly found states are \[M_{\Omega_{c}(3185)} = 3185.1\pm 1.7^{+7.4}_{-0.9}\pm 0.2\ \text{MeV},\] \[\Gamma_{\Omega_{c}(3185)} = 50\pm 7^{+10}_{-20}\ \text{MeV},\] \[M_{\Omega_{c}(3327)} = 3327.1\pm 1.2^{+0.1}_{-1.3}\pm 0.2\ \text{MeV},\] \[\Gamma_{\Omega_{c}(3327)} = 20\pm 5^{+13}_{-1}\ \text{MeV}. \tag{1}\] So far, there have been a few studies on the two newly discovered states. In Ref. [86], in the framework of the non-relativistic potential model with the Gaussian Expansion Method, the results imply that the \(\Omega_{c}^{0}(3327)\) is a good candidate for the \(\Omega_{c}^{0}(1D)\) state with \(J^{P}=5/2^{+}\). In Ref. [87], the \({}^{3}P_{0}\) model calculation results support assigning the observed \(\Omega_{c}^{0}(3185)\) and \(\Omega_{c}^{0}(3327)\) as the \(2S(3/2^{+})\) and \(1D(3/2^{+})\) states, respectively. In Ref. [88], using the QCD sum rules, the \(\Omega_{c}^{0}(3327)\) is assigned to be a \(D\)-wave \(\Omega_{c}\) state with \(J^{P}=1/2^{+},3/2^{+}\) or \(5/2^{+}\). In Ref. [89], the \(\Omega_{c}^{0}(3185)\) and \(\Omega_{c}^{0}(3327)\) are studied in the effective Lagrangian approach by assuming they are molecular states. The results support the \(\Omega_{c}^{0}(3327)\) as a \(D^{*}\Xi\) molecular state, while the \(\Omega_{c}^{0}(3185)\) may be a meson-baryon molecule with a big \(D\Xi\) component. In Ref. [90], the assignment of the \(\Omega_{c}^{0}(3185)\) and \(\Omega_{c}^{0}(3327)\) to the \(2S_{1/2}\) and \(2S_{3/2}\) states is discussed. In Ref. [91], within a simple contact-range theory in which the couplings are saturated by light-meson exchanges, the \(\Omega_{c}^{0}(3185)\) and \(\Omega_{c}^{0}(3327)\) match the masses of the \(J=1/2\)\(\Xi D\) and \(J=3/2\)\(\Xi D^{*}\) states, respectively. In Ref. [92], based on the quark-diquark configuration with the relativistic flux tube model, the \(\Omega_{c}^{0}(3185)\) and \(\Omega_{c}^{0}(3327)\) are assigned to be the \(|2S,3/2^{+}\rangle\) and \(|1D,3/2^{+}\rangle\) states. In Ref. [93], via the QCD sum rules, the numerical results favor assigning the \(\Omega_{c}^{0}(3185)\) as the \(D\Xi\) molecular state with \(J^{P}=1/2^{-}\) and the \(\Omega_{c}^{0}(3327)\) as the \(D^{*}\Xi\) molecular state with \(J^{P}=3/2^{-}\). In addition to the above theoretical methods, the quark delocalization color screening model (QDCSM) is a reliable approach, which was developed in the 1990s with the aim of explaining the similarities between nuclear and molecular forces [94]. The model gives a good description of \(NN\) and \(YN\) interactions and the properties of the deuteron [95; 96; 97; 98].
It is also employed to calculate baryon-baryon and baryon-meson scattering phase shifts, and exotic hadronic states have also been studied in this model. Studies show that color screening is an effective description of the hidden-color channel coupling [99; 100]. So it is feasible and meaningful to extend this model to investigate the pentaquark interpretation of the excited \(\Omega_{c}\) states. In this work, we systematically investigate the \(ssc\bar{q}q\) systems in order to find out if there are \(\Omega_{c}\) states that can possibly be interpreted as pentaquark states. The five-body system is calculated by means of the resonating group method to search for bound states. The strong decay channels of the \(ssc\bar{q}q\) systems are investigated to determine the resonance states, based on the conservation of the quantum numbers and the limit of phase space. In order to ensure the reliability and stability of the calculation results, the parameters used in this work are the same as those used in the previous work [101]. This paper is organized as follows. After the introduction, the details of the QDCSM are presented in Section II. The calculation of the bound states and scattering phase shifts is presented in Section III, along with the discussion and analysis of the results. Finally, the paper ends with a summary in Section IV.

## II Quark delocalization color screening model (QDCSM)

Herein, the QDCSM is employed to investigate the properties of the \(ssc\bar{q}q\) systems. The QDCSM is an extension of the native quark cluster model [102; 103; 104; 105]. It has been developed to address multi-quark systems. The details of the QDCSM can be found in Refs. [94; 95; 96; 97; 98; 99; 100; 106]. In this section, we mainly introduce the salient features of this model. The general form of the pentaquark Hamiltonian is given by \[H=\sum_{i=1}^{5}\left(m_{i}+\frac{\mathbf{p}_{i}^{2}}{2m_{i}}\right)-T_{CM}+\sum_{j>i=1}^{5}V(\mathbf{r}_{ij}), \tag{2}\] where \(m_{i}\) is the quark mass, \(\mathbf{p}_{i}\) is the momentum of the quark, and \(T_{CM}\) is the center-of-mass kinetic energy. The dynamics of the pentaquark system is driven by a two-body potential \[V(\mathbf{r}_{ij})=V_{CON}(\mathbf{r}_{ij})+V_{OGE}(\mathbf{r}_{ij})+V_{\chi}(\mathbf{r}_{ij}). \tag{3}\] The most relevant features of QCD in its low-energy regime, namely color confinement (\(V_{CON}\)), the perturbative one-gluon exchange interaction (\(V_{OGE}\)), and dynamical chiral symmetry breaking (\(V_{\chi}\)), have been taken into consideration. Here, a phenomenological color screening confinement potential (\(V_{CON}\)) is used as \[V_{CON}(\mathbf{r}_{ij})=-a_{c}\mathbf{\lambda}_{i}^{c}\cdot\mathbf{\lambda}_{j}^{c}\left[f(\mathbf{r}_{ij})+V_{0}\right], \tag{4}\] \[f(\mathbf{r}_{ij})=\left\{\begin{array}{ll}\mathbf{r}_{ij}^{2},&\quad i,j\text{ occur in the same cluster}\\ \frac{1-e^{-\mu_{q_{i}q_{j}}\mathbf{r}_{ij}^{2}}}{\mu_{q_{i}q_{j}}},&\quad i,j\text{ occur in different clusters}\end{array}\right.\] where \(a_{c}\), \(V_{0}\) and \(\mu_{q_{i}q_{j}}\) are model parameters, and \(\mathbf{\lambda}^{c}\) stands for the SU(3) color Gell-Mann matrices. Among them, the color screening parameter \(\mu_{q_{i}q_{j}}\) is determined by fitting the deuteron properties, nucleon-nucleon scattering phase shifts, and hyperon-nucleon scattering phase shifts, respectively, with \(\mu_{qq}=0.45\) fm\({}^{-2}\), \(\mu_{qs}=0.19\) fm\({}^{-2}\) and \(\mu_{ss}=0.08\) fm\({}^{-2}\), satisfying the relation \(\mu_{qs}^{2}=\mu_{qq}\mu_{ss}\)[108].
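A quick numerical check (a sketch with our own variable names) confirms the geometric-mean relation between the quoted screening parameters and shows the saturation of the inter-cluster confinement factor in Eq. (4):

```python
import numpy as np

mu = {"qq": 0.45, "qs": 0.19, "ss": 0.08}   # screening parameters [fm^-2]
print(mu["qs"]**2, mu["qq"] * mu["ss"])     # 0.0361 vs 0.0360: mu_qs^2 = mu_qq*mu_ss

def f_conf(r, mu_ij=None):
    """Spatial factor of Eq. (4): quadratic inside a cluster,
    saturating to 1/mu_ij between different clusters."""
    if mu_ij is None:                        # i, j in the same cluster
        return r**2
    return (1.0 - np.exp(-mu_ij * r**2)) / mu_ij

r = np.linspace(0.0, 5.0, 6)                 # separations [fm]
print(f_conf(r, mu["qq"]))                   # approaches 1/0.45 = 2.22 fm^2
```

The saturation of \(f\) at large separation is what screens the confinement between clusters and thereby mimics the hidden-color channel coupling.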
We also found that the heavier the quark, the smaller the parameter \(\mu_{q_{i}q_{j}}\). When extending to a heavy quark system, the hidden-charm pentaquark system, we took \(\mu_{cc}\) as an adjustable parameter from \(0.01\) fm\({}^{-2}\) to \(0.001\) fm\({}^{-2}\), and found that the results were insensitive to the value of \(\mu_{cc}\)[109]. Moreover, the \(P_{c}\) states were well predicted in the work of Refs. [109; 110]. So here we take \(\mu_{cc}=0.01\) fm\({}^{-2}\) and \(\mu_{qc}=0.067\) fm\({}^{-2}\), which also satisfy the relation \(\mu_{qc}^{2}=\mu_{qq}\mu_{cc}\). In the present work, we mainly focus on the low-lying negative-parity \(ssc\bar{q}q\) pentaquark states of \(S\)-wave, so the spin-orbit and tensor interactions are not included. The one-gluon exchange potential (\(V_{OGE}\)), which includes the coulomb and chromomagnetic interactions, is written as \[V_{OGE}(\mathbf{r}_{ij})=\frac{1}{4}\alpha_{s_{q_{i}q_{j}}}\mathbf{\lambda}_{i}^{c}\cdot\mathbf{\lambda}_{j}^{c}\cdot\left[\frac{1}{r_{ij}}-\frac{\pi}{2}\delta\left(\mathbf{r}_{ij}\right)\left(\frac{1}{m_{i}^{2}}+\frac{1}{m_{j}^{2}}+\frac{4\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}}{3m_{i}m_{j}}\right)\right], \tag{5}\] where \(\mathbf{\sigma}\) denotes the Pauli matrices and \(\alpha_{s_{q_{i}q_{j}}}\) is the quark-gluon coupling constant. The quark-gluon coupling constant between quark and anti-quark, which offers a consistent description of mesons from the light- to the heavy-quark sector, is determined by the mass differences between the pseudoscalar (\(J^{P}=0^{-}\)) and vector (\(J^{P}=1^{-}\)) mesons. For example, from the model Hamiltonian, the mass difference between \(D\) and \(D^{*}\) is determined by the chromomagnetic interaction in Eq. (5), so the parameter \(\alpha_{s_{qc}}\) is determined by fitting the mass difference between \(D\) and \(D^{*}\). The dynamical breaking of chiral symmetry results in SU(3) Goldstone-boson-exchange interactions between the constituent light quarks \(u\), \(d\) and \(s\). Hence, the chiral interaction is expressed as \[V_{\chi}(\mathbf{r}_{ij})=V_{\pi}(\mathbf{r}_{ij})+V_{K}(\mathbf{r}_{ij})+V_{\eta}(\mathbf{r}_{ij}). \tag{6}\]
Among them, \[V_{\pi}\left(\mathbf{r}_{ij}\right)=\frac{g_{ch}^{2}}{4\pi}\frac{m_{\pi}^{2}}{12m_{i}m_{j}}\frac{\Lambda_{\pi}^{2}}{\Lambda_{\pi}^{2}-m_{\pi}^{2}}m_{\pi}\left[Y\left(m_{\pi}\mathbf{r}_{ij}\right)-\frac{\Lambda_{\pi}^{3}}{m_{\pi}^{3}}Y\left(\Lambda_{\pi}\mathbf{r}_{ij}\right)\right]\left(\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\right)\sum_{a=1}^{3}\left(\mathbf{\lambda}_{i}^{a}\cdot\mathbf{\lambda}_{j}^{a}\right), \tag{7}\] \[V_{K}\left(\mathbf{r}_{ij}\right)=\frac{g_{ch}^{2}}{4\pi}\frac{m_{K}^{2}}{12m_{i}m_{j}}\frac{\Lambda_{K}^{2}}{\Lambda_{K}^{2}-m_{K}^{2}}m_{K}\left[Y\left(m_{K}\mathbf{r}_{ij}\right)-\frac{\Lambda_{K}^{3}}{m_{K}^{3}}Y\left(\Lambda_{K}\mathbf{r}_{ij}\right)\right]\left(\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\right)\sum_{a=4}^{7}\left(\mathbf{\lambda}_{i}^{a}\cdot\mathbf{\lambda}_{j}^{a}\right), \tag{8}\] \[V_{\eta}\left(\mathbf{r}_{ij}\right)=\frac{g_{ch}^{2}}{4\pi}\frac{m_{\eta}^{2}}{12m_{i}m_{j}}\frac{\Lambda_{\eta}^{2}}{\Lambda_{\eta}^{2}-m_{\eta}^{2}}m_{\eta}\left[Y\left(m_{\eta}\mathbf{r}_{ij}\right)-\frac{\Lambda_{\eta}^{3}}{m_{\eta}^{3}}Y\left(\Lambda_{\eta}\mathbf{r}_{ij}\right)\right]\left(\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\right)\left[\cos\theta_{p}\left(\mathbf{\lambda}_{i}^{8}\cdot\mathbf{\lambda}_{j}^{8}\right)-\sin\theta_{p}\left(\mathbf{\lambda}_{i}^{0}\cdot\mathbf{\lambda}_{j}^{0}\right)\right], \tag{9}\] where \(Y(x)=e^{-x}/x\) is the standard Yukawa function. The physical \(\eta\) meson is considered by introducing the angle \(\theta_{p}\) instead of the octet one. The \(\mathbf{\lambda}^{a}\) are the SU(3) flavor Gell-Mann matrices. The values of \(m_{\pi}\), \(m_{K}\) and \(m_{\eta}\) are the masses of the SU(3) Goldstone bosons, which adopt the experimental values [111]. The chiral coupling constant \(g_{ch}\) is determined from the \(\pi NN\) coupling constant through \[\frac{g_{ch}^{2}}{4\pi}=\left(\frac{3}{5}\right)^{2}\frac{g_{\pi NN}^{2}}{4\pi}\frac{m_{u,d}^{2}}{m_{N}^{2}}. \tag{10}\] Flavor SU(3) is assumed to be an exact symmetry, broken only by the different mass of the strange quark. The other symbols in the above expressions have their usual meanings. All the parameters shown in Table 1 are fixed by the masses of the ground-state baryons and mesons. Table 2 shows the masses of the baryons and mesons used in this work. In the QDCSM, quark delocalization was introduced to enlarge the model variational space to take into account the mutual distortion or the internal excitations of nucleons in the course of interaction. It is realized by specifying the single-particle orbital wave function of the QDCSM as a linear combination of left and right Gaussians, i.e., the single-particle orbital wave functions used in the ordinary quark cluster model: \[\psi_{\alpha}(\mathbf{S}_{i},\epsilon) = \left(\phi_{\alpha}(\mathbf{S}_{i})+\epsilon\phi_{\alpha}(-\mathbf{S}_{i})\right)/N(S_{i},\epsilon),\] \[\psi_{\beta}(-\mathbf{S}_{i},\epsilon) = \left(\phi_{\beta}(-\mathbf{S}_{i})+\epsilon\phi_{\beta}(\mathbf{S}_{i})\right)/N(S_{i},\epsilon),\] \[N(S_{i},\epsilon) = \sqrt{1+\epsilon^{2}+2\epsilon e^{-S_{i}^{2}/4b^{2}}}. \tag{11}\] It is worth noting that the mixing parameter \(\epsilon\) is not an adjustable one but is determined variationally by the dynamics of the multi-quark system itself. In this way, the multi-quark system chooses its favorable configuration in the interacting process.
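For concreteness, the sketch below verifies the normalization \(N(S_{i},\epsilon)\) of Eq. (11) against a direct one-dimensional integration; the numerical values of \(b\), \(S\) and \(\epsilon\) are purely illustrative (the oscillator parameter \(b\) is fixed in Table 1, and \(\epsilon\) is determined variationally).

```python
import numpy as np
from scipy.integrate import quad

b, S, eps = 0.518, 1.0, 0.4   # fm; illustrative values only

def phi(x, c):
    """1D Gaussian orbital of width b centred at c."""
    return (np.pi * b**2) ** -0.25 * np.exp(-((x - c) ** 2) / (2 * b**2))

# Closed-form normalization of the delocalized orbital, Eq. (11)
N = np.sqrt(1 + eps**2 + 2 * eps * np.exp(-S**2 / (4 * b**2)))

# Numerical check: the mixed orbital is unit-normalized
psi = lambda x: (phi(x, S / 2) + eps * phi(x, -S / 2)) / N
norm, _ = quad(lambda x: psi(x) ** 2, -20, 20)
print(N, norm)   # norm -> 1.0
```

The cross term \(2\epsilon e^{-S_{i}^{2}/4b^{2}}\) is twice the overlap of the left and right Gaussians, which is why the delocalization effect dies off for well-separated clusters.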
This delocalization mechanism has been used to explain the cross-over transition between the hadron phase and the quark-gluon plasma phase [112]. In addition, the dynamical calculation is carried out using the resonating group method and the generating coordinates method. The details of the two methods can be seen in Appendix A, and the way of constructing the wave functions is presented in Appendix B.

## III The results and discussions

In this work, we investigate the \(S\)-wave \(ssc\bar{q}q\) pentaquark systems in the framework of the QDCSM with the resonating group method. The quantum numbers of the pentaquark system are \(I=0\), \(J^{P}=1/2^{-},3/2^{-}\) and \(5/2^{-}\). Three structures, \(qss-\bar{q}c\), \(qsc-\bar{q}s\) and \(ssc-\bar{q}q\), as well as the coupling of these structures, are taken into consideration. To find out if there exists any bound state, we carry out a dynamic bound-state calculation. The scattering process is also studied to obtain the genuine resonance states. Descriptions of the bound-state calculation and the scattering process can be found in Appendix A. Moreover, the calculation of the root-mean-square (RMS) cluster spacing is helpful, on the one hand, to explore the structure of bound or resonance states and, on the other hand, to further estimate whether the observed states are resonance states or scattering states. The single-channel results of the different systems are listed in Tables 3, 5 and 7, respectively. The first column, headed Structure, lists the three kinds of structures \(qss-\bar{q}c\), \(qsc-\bar{q}s\) and \(ssc-\bar{q}q\). The second and third columns, headed \(\chi^{f_{i}}\) and \(\chi^{\sigma_{j}}\), denote the way the wave functions are constructed, which can be seen in Appendix B. The fourth column, headed Channel, gives the physical channels involved in the present work. The fifth column, headed \(E_{th}^{Theo}\), refers to the theoretical value of the non-interacting baryon-meson threshold. The sixth column, headed \(E_{sc}\), shows the energy of each single channel. The values of the binding energies \(E_{B}=E_{sc}-E_{th}^{Theo}\) are listed in the eighth column only if \(E_{B}<0\) MeV. Finally, the experimental thresholds \(E_{th}^{Exp}\) (the sum of the experimental masses of the corresponding baryon and meson) along with the corrected energies \(E^{\prime}=E_{th}^{Exp}+E_{B}\) are given in the last two columns. As for the coupled channels, the results are listed in Tables 4, 6 and 8, respectively. The first column represents the structures involved in the channel coupling and the second column is the theoretical value of the lowest threshold. The third column, headed \(E_{cc}\), shows the energy of the coupled channels. The definitions of \(E_{B}\), \(E_{th}^{Exp}\) and \(E^{\prime}\) in the coupled-channel calculation are similar to their definitions in the single-channel calculation.

### \(J^{P}=\frac{1}{2}^{-}\) sector

First of all, an intuitive analysis can be based on the results of the single-channel calculation, which are shown in Table 3. Except for the \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\), the energies of the other single channels are all higher than the corresponding thresholds. The binding energies of the \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) are \(-3\) MeV and \(-4\) MeV, respectively. Since the different channels of the system are influenced by each other, it is unavoidable to take the channel-coupling effect into account.
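The threshold-correction bookkeeping used throughout these tables is simple enough to spell out; the snippet below reproduces the \(\Xi D\) entry of Table 4 (all energies in MeV).

```python
def corrected_mass(e_model, e_th_theo, e_th_exp):
    """E' = E_th^Exp + E_B, with E_B = E_model - E_th^Theo.
    The model binding energy is grafted onto the experimental
    threshold to absorb errors in the theoretical hadron masses."""
    return e_th_exp + (e_model - e_th_theo)

# Xi D after coupling the qss-qbar_c and ssc-qbar_q structures:
print(corrected_mass(3230, 3235, 3187))   # -> 3182, i.e. 5 MeV below Xi D
```

The same correction underlies the quoted corrected resonance masses, e.g. the 3182 MeV value of the \(\Xi D\) resonance discussed below.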
In order to better understand the channel coupling, we first couple the channels with the same spatial structure, and then couple the channels with different spatial structures. By solving the Schrödinger equation with channel coupling, we can obtain a series of eigenvalues theoretically. Only the lowest energy is presented in Table 4, because whether the system can form a bound state depends on whether the lowest energy is below the lowest threshold. After coupling the channels with the same spatial structure, the lowest energies of the \(qss-\bar{q}c\) and \(qsc-\bar{q}s\) systems are still higher than their respective thresholds. Besides, the \(ssc-\bar{q}q\) system forms a bound state with a binding energy of 3 MeV. Further, we couple the channels with two different spatial structures, \(qss-\bar{q}c\) and \(ssc-\bar{q}q\), and then add the third spatial structure \(qsc-\bar{q}s\) into the coupling. The result shows that the coupling of \(qss-\bar{q}c\) and \(ssc-\bar{q}q\) depresses the lowest energy and makes it 5 MeV below the \(\Xi D\) threshold. After an overall coupling of all channels, the lowest energy of the system is still higher than the lowest threshold, that of the channel \(\Xi_{c}\bar{K}\), indicating that the \(J^{P}=1/2^{-}\)\(ssc\bar{q}q\) pentaquark system does not form a genuine bound state.

According to the results above, some quasi-bound states are obtained in the single-channel calculation and the structure coupling. By coupling to open channels, these states can decay to the corresponding open channels and may become resonance states. Yet it is not excluded that these states become scattering states under the coupling effect of open and closed channels. So, to determine whether resonance states exist, we continue to study the scattering phase shifts of the possible open channels in the QDCSM. The resonance masses and the decay widths of the resonance states are also calculated. The current calculation applies only to the decay of \(S\)-wave open channels. First, in order to determine whether the \(\Xi D\) forms a resonance state, we study the scattering processes of the open channels \(\Xi_{c}\bar{K}\) and \(\Xi_{c}^{\prime}\bar{K}\), because the two channels have lower thresholds than the energy of the \(\Xi D\) state. The phase shifts of \(\Xi_{c}\bar{K}\) and \(\Xi_{c}^{\prime}\bar{K}\) are shown in Fig. 1 and Fig. 2, respectively. It is obvious that both phase shifts show a sharp increase around the corresponding resonance mass, which indicates that the \(\Xi D\) state becomes a resonance state in both the \(\Xi_{c}\bar{K}\) and \(\Xi_{c}^{\prime}\bar{K}\) scattering processes.
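How the mass and width are read off such a phase shift can be illustrated with a Breit-Wigner form, which is an assumption of this sketch rather than the model's actual phase shift: the phase rises through \(\pi/2\) at the resonance mass, and the width follows from the slope there. The parameters below are the \(\Xi_{c}\bar{K}\)-channel values quoted next.

```python
import numpy as np

def bw_phase(E, M_res, Gamma, delta_bg=0.0):
    """Breit-Wigner phase shift: rises by ~pi across the resonance."""
    return delta_bg + np.arctan2(Gamma / 2.0, M_res - E)

E = np.linspace(3100.0, 3300.0, 2001)            # incident energies [MeV]
delta = bw_phase(E, M_res=3230.0, Gamma=8.4)     # Xi_c Kbar channel values
# Width from the slope at resonance: Gamma = 2 / (d delta/dE)|_{M_res}
slope = np.gradient(delta, E)[np.argmin(np.abs(E - 3230.0))]
print(2.0 / slope)                                # ~ 8.4 MeV
```

In the full coupled-channel calculation the background phase varies with energy, so the mass and width are read off from where, and how sharply, the phase shift rises.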
\begin{table} \begin{tabular}{c c c c} \hline \hline Hadron & \(I(J^{P})\) & \(M_{Exp}\) & \(M_{Theo}\) \\ \hline \(N\) & \(1/2(1/2^{+})\) & 939 & 939 \\ \(\Delta\) & \(3/2(3/2^{+})\) & 1232 & 1232 \\ \(\Sigma_{c}\) & \(1(1/2^{+})\) & 2455 & 2465 \\ \(\Sigma_{c}^{*}\) & \(1(3/2^{+})\) & 2490 & 2518 \\ \(\Lambda_{c}\) & \(0(1/2^{+})\) & 2286 & 2286 \\ \(\Xi\) & \(1/2(1/2^{+})\) & 1318 & 1375 \\ \(\Xi^{*}\) & \(1/2(3/2^{+})\) & 1536 & 1496 \\ \(\Xi_{c}\) & \(1/2(1/2^{+})\) & 2467 & 2551 \\ \(\Xi_{c}^{\prime}\) & \(1/2(1/2^{+})\) & 2577 & 2621 \\ \(\Xi_{c}^{*}\) & \(1/2(3/2^{+})\) & 2645 & 2638 \\ \(\Omega_{c}\) & \(0(1/2^{+})\) & 2695 & 2785 \\ \(\Omega_{c}^{*}\) & \(0(3/2^{+})\) & 2766 & 2796 \\ \(\pi\) & \(1(0^{-})\) & 139 & 139 \\ \(\rho\) & \(1(1^{-})\) & 770 & 770 \\ \(\omega\) & \(0(1^{-})\) & 782 & 722 \\ \(\bar{K}\) & \(1/2(0^{-})\) & 495 & 495 \\ \(\bar{K}^{*}\) & \(1/2(1^{-})\) & 892 & 814 \\ \(D\) & \(1/2(0^{-})\) & 1869 & 1869 \\ \(D^{*}\) & \(1/2(1^{-})\) & 2007 & 1952 \\ \hline \hline \end{tabular} \end{table} Table 2: The masses (in MeV) of the baryons and mesons. Experimental values are taken from the Particle Data Group (PDG) [111].

The resonance mass, corrected mass and the decay width are summarized as follows: \[\begin{array}{ll}\mbox{In }\Xi_{c}\bar{K}\mbox{ channel}:&M_{res}^{Theo}=3230\mbox{ MeV},\\ &M_{res}^{\prime}=3182\mbox{ MeV},\\ &\Gamma_{res}=8.4\mbox{ MeV},\\ \mbox{In }\Xi_{c}^{\prime}\bar{K}\mbox{ channel}:&M_{res}^{Theo}=3221\mbox{ MeV},\\ &M_{res}^{\prime}=3174\mbox{ MeV},\\ &\Gamma_{res}=33.6\mbox{ MeV}.\end{array}\] Thus, a resonance state dominated by \(\Xi D\) with \(J^{P}=1/2^{-}\), seen in the decay channels \(\Xi_{c}\bar{K}\) and \(\Xi_{c}^{\prime}\bar{K}\), with a corrected resonance mass of 3174\(\sim\)3182 MeV and a decay width of 42 MeV, is confirmed. This is consistent with the newly reported \(\Omega_{c}(3185)\), whose mass and decay width are \(3185.1\pm 1.7^{+7.4}_{-0.9}\pm 0.2\) MeV and \(50\pm 7^{+10}_{-20}\) MeV, respectively. Therefore, in our quark model calculation, the \(\Omega_{c}(3185)\) can be well interpreted as a \(\Xi D\) resonance state with \(J^{P}=1/2^{-}\).

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Coupled-structure & \(E_{th}^{Theo}\) (Channel) & \(E_{cc}\) & \(E_{B}\) & \(E_{th}^{Exp}\) & \(E^{\prime}\) \\ \hline \(qss-\bar{q}c\) & 3235 (\(\Xi D\)) & 3237 & ub & 3187 & 3190 \\ \(qsc-\bar{q}s\) & 3060 (\(\Xi_{c}\bar{K}\)) & 3065 & ub & 2962 & 2967 \\ \(ssc-\bar{q}q\) & 3548 (\(\Omega_{c}\omega\)) & 3545 & -3 & 3477 & 3474 \\ \(qss-\bar{q}c,\ ssc-\bar{q}q\) & 3235 (\(\Xi D\)) & 3230 & -5 & 3187 & 3192 \\ \(qss-\bar{q}c,\ qsc-\bar{q}s,\ ssc-\bar{q}q\) & 3060 (\(\Xi_{c}\bar{K}\)) & 3064 & ub & 2962 & 2966 \\ \hline \hline \end{tabular} \end{table} Table 4: The coupled-channel energies of the \(ssc\bar{q}q\) pentaquark system with \(J^{P}=\frac{1}{2}^{-}\) (unit: MeV).

In addition, one may be curious about the cusps around 50 MeV and 250 MeV in Fig. 1. They are caused by the thresholds of the corresponding single channels, and a similar situation can also be found in Fig. 2. In order to investigate the structure of this \(\Xi D\) resonance, we further calculate its RMS. It is worth noting that a scattering state has no real RMS, since the relative-motion wave functions of scattering states are non-integrable in infinite space. If we calculate the RMS of a scattering state in a limited space, we only obtain a value that increases with the expansion of the computing space.
Although the wave function of a resonance state is also non-integrable, we can calculate the RMS of the main component of the resonance state, whose wave function is integrable. In this way, by progressively expanding the computing space, we can calculate the RMS of various states and thereby identify their nature. According to the numerical result, the RMS of the resonance state \(\Xi D\) is 1.9 fm, indicating that it is likely to be a molecular state.

\(\Omega_{c}\omega\), \(\Omega_{c}^{*}\omega\) and their coupling also form quasi-bound states in the previous calculations. The energies of the \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) single channels are about 485\(\sim\)495 MeV above the threshold of \(\Xi_{c}\bar{K}\) and 415\(\sim\)425 MeV above the threshold of \(\Xi_{c}^{\prime}\bar{K}\). However, in Fig. 1 and Fig. 2, the sharp rise of the phase shift that would signal a resonance state does not appear around the energies of the \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) channels. Besides, it is still possible that \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) decay to open channels other than \(\Xi_{c}\bar{K}\) or \(\Xi_{c}^{\prime}\bar{K}\). Therefore, we also calculate the phase shifts of the other open channels with channel coupling, which are shown in Fig. 3. The ranges of incident energy for the different open channels are chosen to cover the energy of \(\Omega_{c}^{*}\omega\), which is the highest energy of the system. As a result, the ranges of incident energy for the different open channels are not the same in Fig. 3. After considering the different decay channels, no resonance state of \(\Omega_{c}\omega\) or \(\Omega_{c}^{*}\omega\) is found. This can be explained by the effect of channel coupling, which has to be fully considered. As listed in Table 4, the energy of the \(qss-\bar{q}c\) structure coupling is above the corresponding threshold \(\Xi D\). Nevertheless, the energy of the \(qss-\bar{q}c\) structure is pushed below the \(\Xi D\) threshold after being coupled to the \(ssc-\bar{q}q\) structure. In turn, the energy of the previous quasi-bound state \(\Omega_{c}\omega\) is pushed above the corresponding threshold, which causes the narrow resonance state to disappear after coupling with the \(qss-\bar{q}c\) structure. The same happens in the phase shift of the open channel \(\Xi_{c}^{\prime}\bar{K}\). Therefore, the narrow resonance state we just discussed is not a genuine resonance state.

Figure 3: The phase shifts of different open channels with \(J^{P}=\frac{1}{2}^{-}\).

### \(J^{P}=\frac{3}{2}^{-}\) sector

The single-channel energies of the \(ssc\bar{q}q\) system with \(J^{P}=3/2^{-}\) are listed in Table 5. Two bound states are obtained, in the \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) channels, while the energies of all other channels lie above the corresponding thresholds. The binding energies of the \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) states are -2 MeV and -4 MeV, respectively. For the \(ssc\bar{q}q\) system with \(J^{P}=3/2^{-}\), channel coupling of the various structures is also considered, as listed in Table 6. Similar to the previous section with \(J^{P}=1/2^{-}\), we first carry out the channel coupling within the same spatial structure. The single-structure couplings of \(qss-\bar{q}c\) and \(qsc-\bar{q}s\) are both unbound according to the numerical results. In addition, the \(ssc-\bar{q}q\) structure coupling slightly depresses the energy of \(\Omega_{c}\omega\), although the effect is not numerically significant.
After the coupling of two different structures, we continue by adding the third structure into the coupling. As one can see, after coupling all three structures, the lowest energy of the whole system is 2 MeV lower than the threshold of \(\Xi_{c}^{*}\bar{K}\). Since \(\Xi_{c}^{*}\bar{K}\) is the lowest threshold of the \(ssc\bar{q}q\) system with \(J^{P}=3/2^{-}\), a stable bound state is obtained, and its corrected mass is 3138 MeV. Besides, the bound-state conclusion is also confirmed in the scattering process: in Fig. 4, as the incident energy approaches 0 MeV, the phase shift of the open channel \(\Xi_{c}^{*}\bar{K}\) tends to 180 degrees, which conforms to the characteristics of a bound state. According to a further calculation, this state is dominated by \(\Xi_{c}^{*}\bar{K}\) and its calculated RMS is 1.8 fm. The mass is close to that of \(\Omega_{c}(3120)\), which is \(3119.1\pm 0.3\pm 0.9\pm 0.3\) MeV. In addition, the bound state \(\Xi_{c}^{*}\bar{K}\) can still decay to some \(D\)-wave channels, such as \(\Xi_{c}\bar{K}\), through the tensor force coupling, which is the next step of our research. However, the width of this type of decay is usually very narrow, according to our previous research [113]. This is consistent with the decay width of \(\Omega_{c}(3120)\), which is \(0.60\pm 0.63\,\)MeV. In this case, \(\Omega_{c}(3120)\) can be interpreted as a \(\Xi_{c}^{*}\bar{K}\) molecular state with \(J^{P}=3/2^{-}\) in the present calculation.

Furthermore, the scattering process is studied to examine whether \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) can form resonance states. The phase shifts of the different \(S\)-wave open channels with channel coupling are shown in Fig. 4 and Fig. 5. However, the phase shifts of all open channels show no sharp increase around the energies of the quasi-bound states \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\). The result shows that \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) become scattering states rather than resonance states after being coupled to the other channels.

### \(J^{P}=\frac{5}{2}^{-}\) sector

For the \(ssc\bar{q}q\) system with \(J^{P}=5/2^{-}\), there are three channels: \(\Xi^{*}D^{*}\), \(\Xi_{c}^{*}\bar{K}^{*}\) and \(\Omega_{c}^{*}\omega\). The energies obtained in the single-channel calculation are presented in Table 7. The \(\Omega_{c}^{*}\omega\) forms a bound state, which will be examined later to see whether it is a resonance state. As shown in Table 8, since each structure has only one channel, channel coupling within a single structure is not needed here. After coupling the \(qss-\bar{q}c\) and \(ssc-\bar{q}q\) structures, the energy obtained is still above the threshold. However, after coupling all three channels, a bound state is formed. The corrected mass of this state is 3527 MeV and its RMS is 1.7 fm. According to the RMS, this state tends to have a molecular structure, and its main component is \(\Xi_{c}^{*}\bar{K}^{*}\). Therefore, a \(J^{P}=5/2^{-}\) \(ssc\bar{q}q\) pentaquark state with a mass of 3527 MeV is predicted here. Although it can decay to some \(D\)-wave channels, such as \(\Xi D\) and \(\Xi_{c}^{\prime}\bar{K}\), it may still appear as a resonance, which is worth searching for in future experiments. The scattering process is studied to examine whether \(\Omega_{c}^{*}\omega\) could be a resonance state.
The phase shifts of the \(S\)-wave open channels \(\Xi^{*}D^{*}\) and \(\Xi_{c}^{*}\bar{K}^{*}\) are shown in Fig. 6. However, there is no sharp-increase structure of the phase shift around the energy of the \(\Omega_{c}^{*}\omega\) single channel. This indicates that the \(\Omega_{c}^{*}\omega\) does not form a resonance state, but rather a scattering state. In addition, when the incident energy approaches zero, the behavior of the phase shift also confirms the existence of the bound state.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Structure & \(\chi^{f_{i}}\) & \(\chi^{\sigma_{j}}\) & Channel & \(E_{th}^{Theo}\) & \(E_{sc}\) & \(E_{B}\) & \(E_{th}^{Exp}\) & \(E^{\prime}\) \\ \hline \(qss-\bar{q}c\) & \(i=2\) & \(j=4\) & \(\Xi D^{*}\) & 3319 & 3323 & ub & 3325 & 3329 \\ & \(i=2\) & \(j=5\) & \(\Xi^{*}D\) & 3357 & 3362 & ub & 3405 & 3410 \\ & \(i=2\) & \(j=6\) & \(\Xi^{*}D^{*}\) & 3441 & 3446 & ub & 3543 & 3548 \\ \(qsc-\bar{q}s\) & \(i=2\) & \(j=4\) & \(\Xi_{c}^{\prime}\bar{K}^{*}\) & 3449 & 3457 & ub & 3469 & 3477 \\ & \(i=2\) & \(j=4\) & \(\Xi_{c}\bar{K}^{*}\) & 3379 & 3386 & ub & 3359 & 3366 \\ & \(i=2\) & \(j=5\) & \(\Xi_{c}^{*}\bar{K}\) & 3147 & 3153 & ub & 3140 & 3146 \\ & \(i=2\) & \(j=6\) & \(\Xi_{c}^{*}\bar{K}^{*}\) & 3466 & 3472 & ub & 3537 & 3543 \\ \(ssc-\bar{q}q\) & \(i=1\) & \(j=4\) & \(\Omega_{c}\omega\) & 3548 & 3546 & -2 & 3477 & 3475 \\ & \(i=1\) & \(j=6\) & \(\Omega_{c}^{*}\omega\) & 3558 & 3554 & -4 & 3548 & 3544 \\ \hline \hline \end{tabular} \end{table} Table 5: The single-channel energies of the \(ssc\bar{q}q\) pentaquark system with \(J^{P}=\frac{3}{2}^{-}\) (unit: MeV).

Considering that there have been a few theoretical works on the newly reported \(\Omega_{c}^{0}(3185)\) and \(\Omega_{c}^{0}(3327)\) states, we make a brief review here. For the \(\Omega_{c}^{0}(3185)\) state, Refs. [87; 90; 92] interpret it as a three-quark excited state. The quantum number assignment \(J^{P}\) (\(nL\)) for the \(\Omega_{c}^{0}(3185)\) state could be: \(J^{P}=3/2^{+}\) (\(2S\)) [87; 92] and \(J^{P}=1/2^{+}\) (\(2S\)) [90]. On the other hand, the explanation of the \(\Omega_{c}^{0}(3185)\) as a \(\Xi D\) molecular state with \(J^{P}=1/2^{-}\) can be found in Refs. [91; 93]. As for the \(\Omega_{c}^{0}(3327)\) state, the three-quark excitation explanation can be found in Refs. [86; 87; 88; 90; 92]. The quantum number assignment for the \(\Omega_{c}^{0}(3327)\) state could be: \(J^{P}=5/2^{+}\) (\(1D\)) [86], \(J^{P}=3/2^{+}\) (\(1D\)) [87; 92], \(J^{P}=1/2^{+},3/2^{+}\) or \(5/2^{+}\) (\(D\)-wave) [88] and \(J^{P}=3/2^{+}\) (\(2S\)) [90]. Refs. [89; 91; 93] also support the interpretation of the \(\Omega_{c}^{0}(3327)\) state as a \(\Xi D^{*}\) molecular state with \(J^{P}=3/2^{-}\). The conclusions of the studies on the two newly discovered \(\Omega_{c}\) states from different theoretical groups are summarized in Table 9.

\begin{table} \begin{tabular}{c c c c c} & \multicolumn{2}{c}{\(\Omega_{c}^{0}(3185)\)} & \multicolumn{2}{c}{\(\Omega_{c}^{0}(3327)\)} \\ Ref. & three-quark & molecular & three-quark & molecular \\ \hline Ref. [86] & & & ✓ & \\ Ref. [87] & ✓ & & ✓ & \\ Ref. [88] & & & ✓ & \\ Ref. [89] & & ✓ & & ✓ \\ Ref. [90] & ✓ & & ✓ & \\ Ref. [91] & & ✓ & & ✓ \\ Ref. [92] & ✓ & & ✓ & \\ Ref. [93] & & ✓ & & ✓ \\ This Work & & ✓ & & \\ \end{tabular} \end{table} Table 9: The conclusions of the studies on the two newly discovered \(\Omega_{c}\) states.

According to our calculation, the \(\Omega_{c}^{0}(3185)\) can be well interpreted as a \(\Xi D\) molecular state with \(J^{P}=1/2^{-}\), whereas the \(\Omega_{c}(3327)\) is not found within the multi-quark framework. Therefore, we propose to explain the \(\Omega_{c}(3327)\) from the perspective of three-quark excitation; at the same time, the investigation of these states in an unquenched picture could be beneficial.
In addition to exploring the pentaquark explanation of the two newly discovered \(\Omega_{c}\) states, the study of the \(ssc\bar{q}q\) system has also led to some other results. In this work, three states are obtained, including one resonance state and two bound states. We summarize the obtained states in Table 10.

\begin{table} \begin{tabular}{c c c c} \(J^{P}\) & Main Composition & Corrected Mass & Decay Width \\ \hline \(1/2^{-}\) & \(\Xi D\) & 3174\(\sim\)3182 MeV & 42 MeV \\ \(3/2^{-}\) & \(\Xi_{c}^{*}\bar{K}\) & 3138 MeV & \\ \(5/2^{-}\) & \(\Xi_{c}^{*}\bar{K}^{*}\) & 3527 MeV & \\ \end{tabular} \end{table} Table 10: The states obtained in this work.

Since the resonance energy of \(\Xi D\) is obtained from the scattering phase shifts, the resonance energies obtained in different open channels are not exactly the same; therefore, the mass of the \(\Xi D\) state is given as a range. One may notice that no values are given for the decay widths of \(\Xi_{c}^{*}\bar{K}\) and \(\Xi_{c}^{*}\bar{K}^{*}\) in Table 10. Since \(\Xi_{c}^{*}\bar{K}\) and \(\Xi_{c}^{*}\bar{K}^{*}\) cannot decay to \(S\)-wave channels, the decay of these two states to \(D\)-wave channels will be studied in our future work. In addition, the decay width to \(D\)-wave channels is usually narrow, according to our previous research [113].

## IV Summary

In this work, we investigate the excited \(\Omega_{c}\) states from the pentaquark perspective. The \(S\)-wave pentaquark systems \(ssc\bar{q}q\) with \(I=0\), \(J^{P}=1/2^{-}\), \(3/2^{-}\) and \(5/2^{-}\) are studied in the framework of the QDCSM. The dynamic bound-state calculation is carried out to search for bound states in the \(ssc\bar{q}q\) systems. Both single-channel and coupled-channel calculations are performed to explore the effect of the multi-channel coupling. Meanwhile, the scattering processes of the open channels are studied to confirm possible resonance states. We also calculate the RMS of the cluster spacing to further study the structure of the obtained states. The numerical results show that a \(\Xi D\) resonance state with \(J^{P}=1/2^{-}\) and two bound states with \(J^{P}=3/2^{-}\) and \(5/2^{-}\) are obtained. The mass and the decay width of the \(\Xi D\) resonance state are 3174\(\sim\)3182 MeV and 42 MeV, respectively, which are close to those of the reported \(\Omega_{c}^{0}(3185)\). The RMS of the \(\Xi D\) state supports its molecular structure. So the recently reported \(\Omega_{c}^{0}(3185)\) can be explained as a molecular \(\Xi D\) state with \(J^{P}=1/2^{-}\); an experimental determination of its spin and parity is highly anticipated. The bound molecular state we obtain is \(\Xi_{c}^{*}\bar{K}\) with \(J^{P}=3/2^{-}\) and a mass of 3138 MeV, which can be used to interpret the reported \(\Omega_{c}^{0}(3120)\). Besides, a new molecular state \(\Xi_{c}^{*}\bar{K}^{*}\) with \(J^{P}=5/2^{-}\) and a mass of 3527 MeV is predicted to exist, which is worth searching for in the future.
However, the other reported \(\Omega_{c}\) states cannot be well described in the framework of pentaquark systems in the present work. Three-quark excitation, or the unquenched picture, may provide a good explanation, which is worth further exploration. In addition, the present study shows that the channel coupling effect has to be considered in describing multi-quark systems. Especially for a possible resonance state, the coupling to the open channels will shift the mass of the resonance state, or even destroy it. The \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) bound states are obtained in the single-channel calculation. However, the energies of \(\Omega_{c}\omega\) and \(\Omega_{c}^{*}\omega\) are elevated by the coupling to open channels, leading to the disappearance of these two states. Based on this, we would like to emphasize the importance of the channel coupling effect in studying exotic hadron states.

###### Acknowledgements.

This work is supported partly by the National Science Foundation of China under Contract Nos. 11675080, 11775118, 11535005 and 11865019.

## Appendix A Resonating group method for bound-state and scattering process

The resonating group method (RGM) [114; 115] and the generator coordinate method [116; 117] are used to carry out the dynamical calculation. The main feature of the RGM for two-cluster systems is that it assumes the two clusters to be internally frozen, so that only the relative motion between the two clusters is considered. The conventional ansatz for the two-cluster wave function is \[\psi_{5q}=\mathcal{A}\left[[\phi_{B}\phi_{M}]^{[\sigma]IS}\otimes\chi(\mathbf{R})\right]^{J}, \tag{A1}\] where the symbol \(\mathcal{A}\) is the anti-symmetrization operator, \(\mathcal{A}=1-P_{14}-P_{24}-P_{34}\). \([\sigma]=[222]\) gives the total color symmetry and all other symbols have their usual meanings. \(\phi_{B}\) and \(\phi_{M}\) are the \(q^{3}\) and \(\bar{q}q\) cluster wave functions, respectively. From the variational principle, after variation with respect to the relative motion wave function \(\chi(\mathbf{R})=\sum_{L}\chi_{L}(\mathbf{R})\), one obtains the RGM equation: \[\int H(\mathbf{R},\mathbf{R}^{\prime})\chi(\mathbf{R}^{\prime})d\mathbf{R}^{\prime}=E\int N(\mathbf{R},\mathbf{R}^{\prime})\chi(\mathbf{R}^{\prime})d\mathbf{R}^{\prime}, \tag{A2}\] where \(H(\mathbf{R},\mathbf{R}^{\prime})\) and \(N(\mathbf{R},\mathbf{R}^{\prime})\) are the Hamiltonian and norm kernels. By solving the RGM equation, we can get the energies \(E\) and the wave functions. In practice, it is not convenient to work with the RGM expressions directly.
Then, we expand the relative motion wave function \(\chi(\mathbf{R})\) by using a set of gaussians with different centers, \[\chi(\mathbf{R})= \frac{1}{\sqrt{4\pi}}\left(\frac{6}{5\pi b^{2}}\right)^{3/4}\sum_{i,L,M}C_{i,L}\cdot\int\exp\left[-\frac{3}{5b^{2}}\left(\mathbf{R}-\mathbf{S}_{i}\right)^{2}\right]Y_{L,M}\left(\hat{\mathbf{S}}_{i}\right)d\Omega_{\mathbf{S}_{i}}, \tag{A3}\] where \(L\) is the orbital angular momentum between the two clusters, and \(\mathbf{S}_{i}\), \(i=1,2,...,n\), are the generator coordinates, which are introduced to expand the relative motion wave function. By including the center-of-mass motion, \[\phi_{C}(\mathbf{R}_{C})=\left(\frac{5}{\pi b^{2}}\right)^{3/4}e^{-\frac{5\mathbf{R}_{C}^{2}}{2b^{2}}}, \tag{A4}\] the ansatz Eq. (A1) can be rewritten as \[\psi_{5q}= \mathcal{A}\sum_{i,L}C_{i,L}\int\frac{d\Omega_{\mathbf{S}_{i}}}{\sqrt{4\pi}}\prod_{\alpha=1}^{3}\phi_{\alpha}\left(\mathbf{S}_{i}\right)\prod_{\beta=4}^{5}\phi_{\beta}\left(-\mathbf{S}_{i}\right)\cdot\left[\left[\chi_{I_{1}S_{1}}\left(B\right)\chi_{I_{2}S_{2}}\left(M\right)\right]^{IS}Y_{LM}\left(\hat{\mathbf{S}}_{i}\right)\right]^{J}\cdot\left[\chi_{c}\left(B\right)\chi_{c}\left(M\right)\right]^{\left[\sigma\right]}, \tag{A5}\] where \(\chi_{I_{1}S_{1}}\) and \(\chi_{I_{2}S_{2}}\) are the products of the flavor and spin wave functions, and \(\chi_{c}\) is the color wave function; these are given in detail in Appendix B. \(\phi_{\alpha}(\mathbf{S}_{i})\) and \(\phi_{\beta}(-\mathbf{S}_{i})\) are the single-particle orbital wave functions with different reference centers: \[\phi_{\alpha}\left(\mathbf{S}_{i}\right) =\left(\frac{1}{\pi b^{2}}\right)^{3/4}e^{-\frac{1}{2b^{2}}\left(\mathbf{r}_{\alpha}-\frac{2}{5}\mathbf{S}_{i}\right)^{2}},\\ \phi_{\beta}\left(-\mathbf{S}_{i}\right) =\left(\frac{1}{\pi b^{2}}\right)^{3/4}e^{-\frac{1}{2b^{2}}\left(\mathbf{r}_{\beta}+\frac{3}{5}\mathbf{S}_{i}\right)^{2}}. \tag{A6}\] With the reformulated ansatz Eq. (A5), the RGM Eq. (A2) becomes an algebraic eigenvalue equation: \[\sum_{j}C_{j}H_{i,j}=E\sum_{j}C_{j}N_{i,j}, \tag{A7}\] where \(H_{i,j}\) and \(N_{i,j}\) are the Hamiltonian matrix elements and overlaps, respectively. By solving this generalized eigenvalue problem, we can obtain the energies and the corresponding wave functions of the pentaquark systems.
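Equation (A7) is a standard generalized eigenvalue problem with the norm kernel as the metric. A minimal, self-contained sketch of how such a system is solved numerically (toy matrices stand in for the real \(H_{i,j}\) and \(N_{i,j}\); this is not the authors' production code):

```python
# Sketch of solving Eq. (A7), sum_j C_j H_ij = E sum_j C_j N_ij,
# as a generalized eigenvalue problem with the norm kernel as metric.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
H = (A + A.T) / 2              # symmetric toy 'Hamiltonian kernel'
B = rng.normal(size=(n, n))
N = B @ B.T + n * np.eye(n)    # symmetric positive-definite toy 'norm kernel'

E, C = eigh(H, N)              # solves H C = E N C
print(E[0])                    # lowest eigenvalue plays the role of the ground-state energy
```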
For the scattering problem, the relative motion wave function is expanded as \[\chi_{L}(\mathbf{R})=\sum_{i}C_{i}\frac{\tilde{u}_{L}\left(\mathbf{R},\mathbf{S}_{i}\right)}{\mathbf{R}}Y_{L,M}(\hat{\mathbf{R}}), \tag{A8}\] with \[\tilde{u}_{L}\left(\mathbf{R},\mathbf{S}_{i}\right)= \left\{\begin{array}{ll}\alpha_{i}u_{L}\left(\mathbf{R},\mathbf{S}_{i}\right),&\mathbf{R}\leq\mathbf{R}_{C}\\ \left[h_{L}^{-}(\mathbf{k},\mathbf{R})-s_{i}h_{L}^{+}(\mathbf{k},\mathbf{R})\right]R_{AB},&\mathbf{R}\geq\mathbf{R}_{C}\end{array}\right. \tag{A9}\] where \[u_{L}\left(\mathbf{R},\mathbf{S}_{i}\right)= \sqrt{4\pi}\left(\frac{6}{5\pi b^{2}}\right)^{3/4}\mathbf{R}e^{-\frac{3}{5b^{2}}\left(\mathbf{R}^{2}+\mathbf{S}_{i}^{2}\right)}\cdot i^{L}j_{L}\left(-i\frac{6}{5b^{2}}\mathbf{R}S_{i}\right). \tag{A10}\] Here \(h_{L}^{\pm}\) are the \(L\)-th spherical Hankel functions, \(k\) is the momentum of the relative motion with \(k=\sqrt{2\mu E_{ie}}\), \(\mu\) is the reduced mass of the two hadrons of the open channel, and \(E_{ie}\) is the incident energy of the relevant open channel, which can be written as \(E_{ie}=E_{total}-E_{th}\), where \(E_{total}\) denotes the total energy and \(E_{th}\) the threshold of the open channel. \(R_{C}\) is a cutoff radius beyond which all the strong interaction can be disregarded. Besides, \(\alpha_{i}\) and \(s_{i}\) are complex parameters determined by the smoothness condition at \(R=R_{C}\), and the \(C_{i}\) satisfy \(\sum_{i}C_{i}=1\). After performing the variational procedure, an \(L\)-th partial-wave equation for the scattering problem can be deduced as \[\sum_{j}\mathcal{L}_{ij}^{L}C_{j}=\mathcal{M}_{i}^{L}\quad(i=0,1,\ldots,n-1), \tag{A11}\] with \[\mathcal{L}_{ij}^{L} =\mathcal{K}_{ij}^{L}-\mathcal{K}_{i0}^{L}-\mathcal{K}_{0j}^{L}+\mathcal{K}_{00}^{L},\\ \mathcal{M}_{i}^{L} =\mathcal{K}_{00}^{L}-\mathcal{K}_{i0}^{L}, \tag{A12}\] and \[\mathcal{K}_{ij}^{L}= \left\langle\hat{\phi}_{A}\hat{\phi}_{B}\frac{\tilde{u}_{L}\left(\mathbf{R}^{\prime},\mathbf{S}_{i}\right)}{\mathbf{R}^{\prime}}Y_{L,M}\left(\mathbf{R}^{\prime}\right)\left|H-E\right|\cdot\mathcal{A}\left[\hat{\phi}_{A}\hat{\phi}_{B}\frac{\tilde{u}_{L}\left(\mathbf{R},\mathbf{S}_{j}\right)}{\mathbf{R}}Y_{L,M}(\mathbf{R})\right]\right\rangle. \tag{A13}\] By solving Eq. (A11), we can obtain the expansion coefficients \(C_{i}\); the \(S\)-matrix element \(S_{L}\) and the phase shift \(\delta_{L}\) are then given by \[S_{L}=e^{2i\delta_{L}}=\sum_{i}C_{i}s_{i}. \tag{A14}\] Resonances are unstable particles usually observed as bell-shaped structures in the scattering cross sections of their open channels. For a simple narrow resonance, its fundamental properties correspond to the visible cross-section features: the mass \(M\) is at the peak position, and the decay width \(\Gamma\) is the half-width of the bell shape. The cross section \(\sigma_{L}\) and the scattering phase shift \(\delta_{L}\) are related by \[\sigma_{L}=\frac{4\pi}{k^{2}}(2L+1)\sin^{2}\delta_{L}. \tag{A15}\] Therefore, resonances can also be observed in the scattering phase shift, where the phase shift of the scattering channel rises through \(\frac{\pi}{2}\) at the resonance mass. We obtain the resonance mass at the position where the phase shift passes \(\frac{\pi}{2}\). The decay width is taken as the difference between the energies at which the phase shift passes \(\frac{3\pi}{4}\) and \(\frac{\pi}{4}\).
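The prescription above (resonance mass where the phase shift passes \(\pi/2\); width from the \(3\pi/4\) and \(\pi/4\) crossings) is easy to automate. A hedged sketch on a toy Breit-Wigner phase shift, not the production analysis:

```python
# Read off a resonance from a phase-shift curve following the prescription
# above: mass at the pi/2 crossing, width from the 3pi/4 and pi/4 crossings.
import numpy as np

def resonance_from_phase_shift(E, delta):
    """E in MeV (ascending), delta in radians, monotonically rising."""
    def crossing(target):
        i = np.argmax(delta >= target)                 # first sample past target
        return np.interp(target, delta[i-1:i+1], E[i-1:i+1])
    M = crossing(np.pi / 2)
    Gamma = crossing(3 * np.pi / 4) - crossing(np.pi / 4)
    return M, Gamma

# Toy Breit-Wigner phase shift with M = 3221 MeV, Gamma = 33.6 MeV:
E = np.linspace(3100, 3300, 2001)
delta = np.arctan2(33.6 / 2, 3221 - E)
print(resonance_from_phase_shift(E, delta))            # ~ (3221.0, 33.6)
```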
## Appendix B Constructing wave functions

For the spin wave function, we first construct the spin wave functions of the \(q^{3}\) and \(\bar{q}q\) clusters with the SU(2) algebra, and then the total spin wave function of the pentaquark system is obtained by coupling the spin wave functions of the two clusters together. The spin wave functions of the \(q^{3}\) and \(\bar{q}q\) clusters are given in Eq. (B1) and Eq. (B2), respectively: \[\chi^{\sigma}_{\frac{3}{2},\frac{3}{2}}(3) =\alpha\alpha\alpha,\\ \chi^{\sigma}_{\frac{3}{2},\frac{1}{2}}(3) =\frac{1}{\sqrt{3}}(\alpha\alpha\beta+\alpha\beta\alpha+\beta\alpha\alpha),\\ \chi^{\sigma}_{\frac{3}{2},-\frac{1}{2}}(3) =\frac{1}{\sqrt{3}}(\alpha\beta\beta+\beta\alpha\beta+\beta\beta\alpha),\\ \chi^{\sigma}_{\frac{3}{2},-\frac{3}{2}}(3) =\beta\beta\beta,\\ \chi^{\sigma 1}_{\frac{1}{2},\frac{1}{2}}(3) =\frac{1}{\sqrt{6}}(2\alpha\alpha\beta-\alpha\beta\alpha-\beta\alpha\alpha),\\ \chi^{\sigma 2}_{\frac{1}{2},\frac{1}{2}}(3) =\frac{1}{\sqrt{2}}(\alpha\beta\alpha-\beta\alpha\alpha),\\ \chi^{\sigma 1}_{\frac{1}{2},-\frac{1}{2}}(3) =\frac{1}{\sqrt{6}}(\alpha\beta\beta+\beta\alpha\beta-2\beta\beta\alpha),\\ \chi^{\sigma 2}_{\frac{1}{2},-\frac{1}{2}}(3) =\frac{1}{\sqrt{2}}(\alpha\beta\beta-\beta\alpha\beta). \tag{B1}\] \[\chi^{\sigma}_{1,1}(2) =\alpha\alpha,\\ \chi^{\sigma}_{1,0}(2) =\frac{1}{\sqrt{2}}(\alpha\beta+\beta\alpha),\\ \chi^{\sigma}_{1,-1}(2) =\beta\beta,\\ \chi^{\sigma}_{0,0}(2) =\frac{1}{\sqrt{2}}(\alpha\beta-\beta\alpha). \tag{B2}\] For the pentaquark system, the total spin quantum number can be 1/2, 3/2 or 5/2. Since the Hamiltonian does not contain an interaction that distinguishes the third component of the spin, the wave function for each total spin quantum number can be written with a single third component as follows: \[\chi^{\sigma 1}_{\frac{1}{2},\frac{1}{2}}(5)= \chi^{\sigma}_{\frac{1}{2},\frac{1}{2}}(3)\chi^{\sigma}_{0,0}(2),\\ \chi^{\sigma 2}_{\frac{1}{2},\frac{1}{2}}(5)= -\sqrt{\frac{2}{3}}\chi^{\sigma}_{\frac{1}{2},-\frac{1}{2}}(3)\chi^{\sigma}_{1,1}(2)+\sqrt{\frac{1}{3}}\chi^{\sigma}_{\frac{1}{2},\frac{1}{2}}(3)\chi^{\sigma}_{1,0}(2),\\ \chi^{\sigma 3}_{\frac{1}{2},\frac{1}{2}}(5)= \sqrt{\frac{1}{6}}\chi^{\sigma}_{\frac{3}{2},-\frac{1}{2}}(3)\chi^{\sigma}_{1,1}(2)-\sqrt{\frac{1}{3}}\chi^{\sigma}_{\frac{3}{2},\frac{1}{2}}(3)\chi^{\sigma}_{1,0}(2)+\sqrt{\frac{1}{2}}\chi^{\sigma}_{\frac{3}{2},\frac{3}{2}}(3)\chi^{\sigma}_{1,-1}(2),\\ \chi^{\sigma 4}_{\frac{3}{2},\frac{3}{2}}(5)= \chi^{\sigma}_{\frac{1}{2},\frac{1}{2}}(3)\chi^{\sigma}_{1,1}(2),\\ \chi^{\sigma 5}_{\frac{3}{2},\frac{3}{2}}(5)= \chi^{\sigma}_{\frac{3}{2},\frac{3}{2}}(3)\chi^{\sigma}_{0,0}(2),\\ \chi^{\sigma 6}_{\frac{3}{2},\frac{3}{2}}(5)= \sqrt{\frac{3}{5}}\chi^{\sigma}_{\frac{3}{2},\frac{3}{2}}(3)\chi^{\sigma}_{1,0}(2)-\sqrt{\frac{2}{5}}\chi^{\sigma}_{\frac{3}{2},\frac{1}{2}}(3)\chi^{\sigma}_{1,1}(2),\\ \chi^{\sigma 7}_{\frac{5}{2},\frac{5}{2}}(5)= \chi^{\sigma}_{\frac{3}{2},\frac{3}{2}}(3)\chi^{\sigma}_{1,1}(2). \tag{B3}\] Similar to the construction of the spin wave functions, we first write down the flavor wave functions of the \(q^{3}\) clusters, which are \[\chi^{f1}_{0,0}(3) =\frac{1}{\sqrt{6}}(2ssc-scs-css),\\ \chi^{f2}_{0,0}(3) =\frac{1}{\sqrt{2}}(scs-css),\\ \chi^{f3}_{0,0}(3) =\frac{1}{\sqrt{3}}(ssc+scs+css),\\ \chi^{f1}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{6}}(uss+sus-2ssu),\\ \chi^{f2}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{2}}(uss-sus),\\ \chi^{f3}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{3}}(uss+sus+ssu),\\ \chi^{f4}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{12}}(2usc+2suc-csu-ucs-cus-scu),\\ \chi^{f5}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{4}}(ucs+scu-csu-cus),\\ \chi^{f6}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{4}}(ucs+cus-csu-scu),\\ \chi^{f7}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{12}}(2usc-2suc+csu+ucs-cus-scu),\\ \chi^{f8}_{\frac{1}{2},\frac{1}{2}}(3) =\sqrt{\frac{1}{6}}(usc+suc+csu+ucs+cus+scu),\\ \chi^{f1}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{6}}(dss+sds-2ssd),\\ \chi^{f2}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{2}}(dss-sds),\\ \chi^{f3}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{3}}(dss+sds+ssd),\\ \chi^{f4}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{12}}(2dsc+2sdc-csd-dcs-cds-scd),\\ \chi^{f5}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{4}}(dcs+scd-csd-cds),\\ \chi^{f6}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{4}}(dcs+cds-csd-scd),\\ \chi^{f7}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{12}}(2dsc-2sdc+csd+dcs-cds-scd),\\ \chi^{f8}_{\frac{1}{2},-\frac{1}{2}}(3) =\sqrt{\frac{1}{6}}(dsc+sdc+csd+dcs+cds+scd). \tag{B4}\] Here, both the light and heavy quarks are treated as identical particles within the SU(4) extension.
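As a quick cross-check of the couplings in Eq. (B3) (our own verification sketch, not part of the original derivation), the \(3/2\otimes 1\rightarrow 1/2\) combination \(\chi^{\sigma 3}\) can be compared with standard Clebsch-Gordan coefficients in the Condon-Shortley convention:

```python
# Verify the chi^{sigma 3} coefficients against standard Clebsch-Gordan
# coefficients for 3/2 (x) 1 -> 1/2 with M = 1/2 (Condon-Shortley convention).
from sympy import S
from sympy.physics.quantum.cg import CG

j1, j2, J, M = S(3)/2, S(1), S(1)/2, S(1)/2
for m1, m2 in [(-S(1)/2, S(1)), (S(1)/2, S(0)), (S(3)/2, -S(1))]:
    print(m1, m2, CG(j1, m1, j2, m2, J, M).doit())
# -> sqrt(6)/6, -sqrt(3)/3, sqrt(2)/2, i.e. sqrt(1/6), -sqrt(1/3), sqrt(1/2)
```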
Then, the flavor wave functions of the \(\bar{q}q\) clusters are \[\chi^{f}_{1,1}(2) = \bar{d}u,\\ \chi^{f}_{1,0}(2) = \sqrt{\frac{1}{2}}(\bar{d}d-\bar{u}u),\\ \chi^{f}_{1,-1}(2) = -\bar{u}d,\\ \chi^{f}_{0,0}(2) = \sqrt{\frac{1}{2}}(\bar{d}d+\bar{u}u),\\ \chi^{f}_{\frac{1}{2},\frac{1}{2}}(2) = \bar{d}s,\\ \chi^{f}_{\frac{1}{2},-\frac{1}{2}}(2) = -\bar{u}s,\\ \chi^{f}_{\frac{1}{2},\frac{1}{2}}(2) = \bar{d}c,\\ \chi^{f}_{\frac{1}{2},-\frac{1}{2}}(2) = -\bar{u}c. \tag{B5}\] As for the flavor degree of freedom, the isospin of the pentaquark systems investigated in this work is \(I=0\). The flavor wave functions of the pentaquark systems can be expressed as \[\chi^{f1}_{0,0}(5) = \sqrt{\frac{1}{2}}\chi^{f}_{\frac{1}{2},\frac{1}{2}}(3)\chi^{f}_{\frac{1}{2},-\frac{1}{2}}(2)-\sqrt{\frac{1}{2}}\chi^{f}_{\frac{1}{2},-\frac{1}{2}}(3)\chi^{f}_{\frac{1}{2},\frac{1}{2}}(2),\\ \chi^{f2}_{0,0}(5) = \chi^{f}_{0,0}(3)\chi^{f}_{0,0}(2). \tag{B6}\] For the color-singlet channel (both clusters are color singlets), the color wave function is obtained from \(1\otimes 1\): \[\chi^{c}= \frac{1}{\sqrt{6}}(rgb-rbg+gbr-grb+brg-bgr)\cdot\frac{1}{\sqrt{3}}(\bar{r}r+\bar{g}g+\bar{b}b). \tag{B7}\] Finally, we acquire the total wave functions by combining the orbital, spin, flavor and color parts according to the quantum numbers of the pentaquark systems.
2309.03284
Photonic link from single flux quantum circuits to room temperature
Broadband, energy-efficient signal transfer between cryogenic and room-temperature environment has been a major bottleneck for superconducting quantum and classical logic circuits. Photonic links promise to overcome this challenge by offering simultaneous high bandwidth and low thermal load. However, the development of cryogenic electro-optic modulators -- a key component for photonic readout of electrical signals -- has been stifled by the stringent requirements of superconducting circuits. Rapid single flux quantum circuits (RSFQ), for example, operate with a tiny signal amplitude of only a few millivolts (mV), far below the volt-level signal used in conventional circuits. Here, we demonstrate the first direct optical readout of an RSFQ circuit without additional electrical amplification enabled by a novel superconducting electro-optic modulator (SEOM) featuring a record-low half-wave voltage V{\pi} of 42 mV on a 1 m-long SEOM. Leveraging the low ohmic loss of superconductors, we break the fundamental V{\pi}-bandwidth trade-off and demonstrate electro-optic bandwidth up to 17 GHz on a 0.2 m-long SEOM at cryogenic temperatures. Our work presents a viable solution toward high-bandwidth signal transfer between future large-scale superconducting circuits and room-temperature electronics.
Mohan Shen, Jiacheng Xie, Yuntao Xu, Sihao Wang, Risheng Cheng, Wei Fu, Yiyu Zhou, Hong X. Tang
2023-09-06T18:02:16Z
http://arxiv.org/abs/2309.03284v2
# Photonic link from single flux quantum circuits to room temperature

###### Abstract

Broadband, energy-efficient signal transfer between cryogenic and room-temperature environments has been a major bottleneck for superconducting quantum and classical logic circuits. Photonic links promise to overcome this challenge by offering simultaneous high bandwidth and low thermal load. However, the development of cryogenic electro-optic modulators -- a key component for photonic readout of electrical signals -- has been stifled by the stringent requirements of superconducting circuits. Rapid single flux quantum (RSFQ) circuits, for example, operate with a tiny signal amplitude of only a few millivolts (mV), far below the volt-level signals used in conventional circuits. Here, we demonstrate the first direct optical readout of an RSFQ circuit without additional electrical amplification, enabled by a novel superconducting electro-optic modulator (SEOM) featuring a record-low half-wave voltage \(V_{\pi}\) of 42 mV on a 1 m-long SEOM. Leveraging the low ohmic loss of superconductors, we break the fundamental \(V_{\pi}\)-bandwidth trade-off and demonstrate electro-optic bandwidth up to 17 GHz on a 0.2 m-long SEOM at cryogenic temperatures. Our work presents a viable solution toward high-bandwidth signal transfer between future large-scale superconducting circuits and room-temperature electronics.

Superconducting circuits are among the most promising technologies for quantum information processing [1; 2] and ultra-fast logic circuits [3; 4]. Realizing the advantages these cryogenic computation schemes promise, whether classical or quantum, relies on the development of large-scale superconducting integrated circuits (ICs) [5; 6]. A fundamental roadblock on this scaling roadmap is their connectivity to room-temperature electronics, which has so far relied on coaxial cables with limited bandwidth and finite thermal conductivity [7]. Also, to sustain the ultra-low-level signals emerging from the superconducting circuits within a coaxial cable, multi-stage amplification is needed at cryogenic temperatures. These amplifiers add a significant thermal load to the superconducting ICs and to the overall cryogenic cooling budget [7]. To address these challenges, photonic links using optical fibers have been identified as a promising solution [8; 9; 10; 11]. Compared to electrical cables, optical fibers offer two orders of magnitude lower heat load and three orders of magnitude higher bandwidth [10]. Additionally, they are less susceptible to thermal noise and crosstalk. Optical data links at room temperature also enable broadband and low-loss data transportation over local data centers [12; 13] and across remote networks [14]. Similar to the room-temperature fiber links, the implementation of cryogenic-to-room-temperature photonic links critically relies on electro-optic (EO) modulators to transduce microwave signals into the optical domain. Superconducting circuits, however, impose significantly more demanding requirements on the modulators than their room-temperature counterparts, not only in terms of stringent cryogenic compatibility but also in terms of the infinitesimal signals to be uplifted.
One important class of superconducting logic ICs is the single flux quantum (SFQ) logic family for digital signal processing, which has been utilized in radiofrequency (RF)-digital receivers and in the development of next-generation energy-efficient computers, and has been proposed for large-scale control and readout of superconducting qubits [15; 16; 17]. These Josephson-junction-based circuits encode digital information in quantized magnetic flux, which allows operations with attojoules of energy and ultra-fast switching above tens of gigahertz, as in rapid single flux quantum (RSFQ) devices [18]. Since the signal generated by SFQ circuits is only a few millivolts (mV) in amplitude, readout of this signal has so far relied on additional electrical amplification to hundreds of mV [19; 20]. Direct photonic links to SFQ circuits have not been realized so far because they require a broadband EO modulator capable of mV-scale operation. Another important class of superconducting circuits is that for quantum information processing, where the uplift of quantum states to room temperature calls for efficient microwave-to-optics quantum transduction [11; 21]. To achieve enhanced conversion efficiency, cavity-based structures are universally utilized, but at the sacrifice of conversion bandwidth. A recent work [22] demonstrated optical readout of a superconducting qubit with low back-action using an EO transducer with around \(10^{-3}\) transduction efficiency, nonetheless with a transduction bandwidth of only a few kilohertz. With a high-bandwidth EO modulator of similar or moderately improved transduction efficiency, a fully photonic interface for multiplexed readout (egress) of large-scale superconducting qubits can be envisioned [23], which complements recent photonic (ingress) links demonstrated for transmitting qubit control signals from room temperature to the cryogenic environment [8]. Cryogenic modulators have recently been demonstrated on different platforms with volt-scale modulation amplitudes [24; 25; 26; 27; 28; 29]. Leveraging cavity enhancement, a recent semiconductor-based microring modulator achieved a 10 mV drive voltage with a 3 dB bandwidth close to 1 GHz [30]. Similar to resonant microwave-to-optics converters, the bandwidth of microring-based modulators is ultimately limited by the cavity linewidth [31]. On the other hand, traveling-wave EO modulators, such as lithium niobate (LN)-based traveling-wave modulators, have been widely used in telecommunication networks because of their high bandwidth [32]. Studies have shown the compatibility of LN modulators with a cryogenic operating environment [33; 34] and demonstrated a proof-of-principle cryogenic optical interconnect using a commercial LN EO modulator [9]. However, the large \(V_{\pi}\) (\(\sim 5\) V) of these commercial EO modulators leads to a low transduction efficiency (\(3.5\times 10^{-7}\)), making them less competitive than HEMT-amplified electrical links. Here, \(V_{\pi}\) is the voltage needed to introduce a \(\pi\) phase shift on the modulator's optical output and can be translated into the modulator's transduction efficiency (see Supplementary Information Sec. III). Recent advances in integrated modulators based on the lithium niobate on insulator (LNOI) platform [35] have reduced \(V_{\pi}\) to 1-2 V, which however still falls short of the demanding requirements of cryogenic applications.
For example, reaching a \(10^{-3}\) transduction efficiency for superconducting qubit readout necessitates a modulator with \(V_{\pi}\) in the range of 100-200 mV (Supplementary Information Sec. III). A straightforward approach to reduce the \(V_{\pi}\) of traveling-wave modulators and improve their transduction efficiency is to increase the modulation length from the current centimeter range to the decimeter or even meter range. On top of the technical challenges of fabricating extremely long modulators, there exists a fundamental limit between bandwidth and efficiency in the traveling-wave modulator architecture, imposed by the RF attenuation from ohmic losses [36]. Taking the typical electrode design of an integrated LN EO modulator, the ohmic-loss-limited bandwidth can be estimated as \(f_{\rm 3dB}=20\,\mathrm{GHz}\,(V_{\pi}/V)^{2}\) [36]. As illustrated in Fig. 1a, this limitation severely restricts the performance parameter space of the modulator, particularly for low-\(V_{\pi}\) devices. To overcome this limit, researchers postulated modulators with superconductor electrodes and demonstrated that a low-loss superconducting microwave transmission line could be employed to increase the effective modulation length [37]. This idea has recently resurfaced owing to the heightened interest in ultra-low-\(V_{\pi}\), high-bandwidth cryogenic modulators [9]. In the previous work [37], although superconductor electrodes were employed in the modulator, the modulation length was only 2 cm (\(V_{\pi}\sim 4\) V) and the device performance was not extended to the regime where superconductors are more competitive than normal metals.

In this Article, we demonstrate the concept of superconducting traveling-wave modulators based on thin-film LN for interfacing with superconducting logic circuits. This superconducting electro-optic modulator (SEOM) design combines the best of two worlds in terms of material performance, namely the low microwave loss of superconductors and the low optical loss / high EO coefficient of LN, thus drastically increasing the EO modulation efficiency. Through an electrode jump-over design, the modulation length can be extended to one meter (0.5 m in each Mach-Zehnder arm) while still maintaining a compact footprint, thus reducing the \(V_{\pi}\) to as low as 42 mV. We also show that at cryogenic temperatures the superconducting modulator possesses a 3 dB bandwidth over 17 GHz (20 cm total modulation length) by matching the velocities of the microwave and optical signals, in sharp contrast to the modulation bandwidth of a few megahertz when the electrodes are normal. Our results suggest that the SEOM can break the fundamental trade-off between \(V_{\pi}\) and modulation bandwidth in EO modulator designs. We further demonstrate a cryogenic-to-room-temperature data link with low peak-to-peak voltage (\(V_{\rm pp}\)) and achieve direct data lifting from an RSFQ circuit (5 mV \(V_{\rm pp}\)). We believe our superconducting modulator design provides a pathway toward future high-bandwidth optical links for cryogenic integrated circuits.

## Results

**Ultra-low \(V_{\pi}\).** For a specific EO material, the strategy to reduce \(V_{\pi}\) is to fabricate modulators with extended modulation length. Here, we present our design to create the longest LN modulator to date, thereby achieving the lowest \(V_{\pi}\) ever reported. The devices are fabricated on an \(x\)-cut LNOI wafer with a 600-nm-thick LN film. In the two arms of the Mach-Zehnder interferometer (MZI) modulator (Fig. 2a), the long waveguide is laid out spirally with the extended straight sections aligned along the crystalline \(y\)-axis to harness \(r_{33}\), the largest EO coefficient of LN.
A 25 \(\upmu\)m waveguide spacing is designed to accommodate a ground-signal-ground (GSG) microwave transmission line in between. The input microwave signal splits at the input port, propagates along each spiral arm of the MZI, and is terminated at the output port. With this design, a total of 0.4 m of optical waveguide can be fit into a 10 mm by 2.5 mm area (14 mm by 4 mm for a 1 m-long modulator).

Figure 1: Superconducting electro-optic modulator (SEOM) and its application in a cryogenic-to-room-temperature link. **a**, The microwave loss in normal metal leads to a fundamental trade-off between a modulator's bandwidth and its \(V_{\pi}\). SEOM breaks this limit and promises a vastly expanded parameter space. For the normal-metal microwave propagation loss we use a typical value of \(\alpha=0.7\,\mathrm{dB}\,\mathrm{cm}^{-1}(f/\mathrm{GHz})^{1/2}\) [36] and thus \(f_{\rm 3dB}=20\,\mathrm{GHz}(V_{\pi}/\mathrm{V})^{2}\). **b**, Illustration of an SEOM-enabled photonic link between superconducting and room-temperature electronics.

Due to the large dielectric constant of LN, the optical waveguide is partially etched so that the electrodes are deposited directly on the slab to enhance the electro-optic mode overlap, as shown in the Fig. 2b inset. Niobium (Nb) is chosen as the electrode material for its high superconducting transition temperature. Although there are other superconductor materials with higher transition temperatures, these materials are typically compounds, and their high kinetic inductance makes the velocity matching between the optical and microwave signals challenging. A more quantitative analysis of the microwave transmission line design is provided in the next section.

Figure 2: Low-drive-voltage operation of SEOM. **a**, Optical micrograph of a 0.4-meter-long (0.2 m each arm) SEOM device. **b**, Schematic illustration of the jump-over superconductor electrodes. By allowing the electrodes to cross over the waveguide, this design re-orients the modulation electric field as the optical mode is looped back, preventing cancellation of the modulation effect. Inset shows the modulator cross-section. **c**, SEM image of the jump-over structures. **d**, Room-temperature \(V_{\pi}\) measurement. The measured \(V_{\pi}\) of three SEOMs with total modulation lengths of 0.2 m, 0.4 m and 1.0 m is 230 mV, 110 mV and 42 mV, respectively. **e-g**, Eye diagrams. In **e**, eye diagrams for 1 mV and 3 mV drive voltage are demonstrated with the 1 m SEOM at room temperature. In **f**, the limited bandwidth due to normal-metal resistance is manifested by the diminishing eye opening as the bit rate is increased. For SEOM this bandwidth limit is lifted, and eye diagrams with 10 mV drive voltage at 2 Gbps and 4 Gbps are demonstrated in **g**. The eye diagrams are taken by a 6 GHz oscilloscope.

Note that a critical enabling design for the microwave transmission line is the electrode jump-over structure, where the signal electrode climbs over the optical waveguide from one side to the other at each waveguide bend. In this way, the modulation electric field always orients in the same direction as the optical mode propagates along the meander, as illustrated by the red arrows in Fig. 2b. Without the jump-over structures, the modulation effect would cancel out.
To minimize the optical loss induced by metal absorption at the jump-over structure, we clad the waveguide with a dielectric that retains a slanted sidewall to ensure electrical continuity of the electrodes. Fig. 2c shows a scanning-electron microscopy (SEM) picture of the electrode jump-over structures. With the design presented above, we are able to fabricate modulators with up to 1 m modulation length. The \(V_{\pi}\) of the fabricated MZI modulators with total modulation lengths of 0.2 m, 0.4 m and 1 m (0.1 m, 0.2 m and 0.5 m in each arm) are measured to be 230 mV, 110 mV and 42 mV, respectively, at room temperature, as shown in Fig. 2d. Following the convention of determining the voltage-length product (\(V_{\pi}L\)) using the modulation length of one arm, we measure the \(V_{\pi}L\) value to be around 2.2 V\(\cdot\)cm. At 4 K, we observe a 70% increase in \(V_{\pi}\) (data included in Supplementary Information Sec. V). The \(V_{\pi}\) change of LN-based modulators at cryogenic temperatures has been reported in different studies, and the reported value varies from a decrease of 10% [9] to an increase of 20% [33] and 74% [38]. We think the \(V_{\pi}\) change results from the temperature dependence of the electro-optic coefficient. The discrepancy among the reported results might be due to different material growth methods and fabrication conditions. With some of the reported results showing that \(V_{\pi}\) decreases or does not substantially change, it is possible that the \(V_{\pi}\) increase could be avoided, but this is subject to future studies.

The ultra-low \(V_{\pi}\) allows for mV-drive-voltage operation. At room temperature, we demonstrate eye diagrams with \(V_{\text{pp}}\) as low as 3 mV and 1 mV using a 1 m-long modulator, as shown in Fig. 2e. However, the modulation speed is only 200 kbps, limited by the large resistance of the long normal-metal electrodes. Here the electrode dimensions are intentionally kept small (1-2 \(\upmu\)m in width and 200-300 nm in thickness for the signal electrode) compared to conventional modulator designs in order to reduce the overall device footprint. This results in a room-temperature electrode resistance on the order of 1 k\(\Omega\)/cm, which limits the modulator bandwidth through the resistance-capacitance (RC) time constant. The simulated specific capacitance of the microwave transmission line is around 1 pF/cm, corresponding to a bandwidth of a few megahertz for the 0.2 m-long (0.1 m each arm) modulator. This bandwidth limit can be seen from the two eye-diagram measurements with 10 mV \(V_{\text{pp}}\) in Fig. 2f, where the eye closes when the modulation speed increases from 5 Mbps to 10 Mbps. This bandwidth limit is lifted after the electrodes turn superconducting. The last two eye diagrams in Fig. 2g show three orders of magnitude larger bandwidth after the superconducting transition, with 10 mV \(V_{\text{pp}}\) at 2 Gbps and 4 Gbps. The eye-diagram data are taken by an oscilloscope with 6 GHz bandwidth. Although the EO analog bandwidth of our SEOM is above 17 GHz (as shown in the next section), eye-diagram operation at a higher rate requires higher optical power to maintain the same signal-to-noise ratio (Supplementary Information Sec. VII). The operation rate is primarily constrained by our current high optical insertion loss (20 dB) and the associated optical heating effect. Further details regarding these limitations are discussed in the subsequent sections.
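The two normal-metal bandwidth limits quoted above can be checked with back-of-envelope arithmetic. A short sketch using the representative per-unit-length values from the text (our arithmetic, not the authors' simulation):

```python
# RC-limited bandwidth of a 0.1 m arm with ~1 kOhm/cm and ~1 pF/cm:
import math

R = 1e3 * 10        # Ohm: 1 kOhm/cm x 10 cm
C = 1e-12 * 10      # F:   1 pF/cm x 10 cm
print(1 / (2 * math.pi * R * C) / 1e6)   # ~1.6 MHz, i.e. 'a few megahertz'

# Traveling-wave ohmic-loss limit, f_3dB ~ 20 GHz (V_pi/V)^2, at V_pi = 42 mV:
print(20e9 * 0.042 ** 2 / 1e6)           # ~35 MHz
```

Either way, a meter-scale normal-metal device is confined to megahertz-class modulation, which is why the superconducting transition is essential here.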
**Electro-optic bandwidth of SEOM.** The modulation bandwidth of a traveling-wave EO modulator is determined by several factors: 1) the group velocity mismatch between the optical and microwave modes; 2) the propagation loss of each mode; 3) microwave dispersion control and impedance matching. A theoretical derivation of the bandwidth dependence on these factors is provided in Supplementary Information Sec. VI. With negligible microwave loss and assuming low microwave dispersion (as suggested by simulation), the key to high EO bandwidth is the group velocity matching between the optical and microwave modes. Careful impedance engineering to match the on-chip transmission line impedance to the 50 \(\Omega\) coaxial cable is also necessary for high-bandwidth operation. With these considerations in mind, we model the microwave transmission line electro-optically coupled with the optical waveguide (Fig. 3a). The silicon dioxide layer between the LN and the silicon substrate is 4.7 \(\upmu\)m thick. The optical waveguide is 2 \(\upmu\)m wide and 600 nm thick with a 250 nm slab to maintain low optical propagation loss. The width of the signal electrode of the transmission line \(w\), the electrode thickness \(h\) and the gap between the electrodes \(gap\) are swept to adjust the microwave transmission line speed and impedance. The gap between the electrodes is chosen to be 5.2 \(\upmu\)m to minimize the metal absorption loss while maintaining a relatively low \(V_{\pi}L\) (Supplementary Information Sec. III). The optical group index is simulated to be around 2.25, and the microwave group index can be effectively adjusted by the signal electrode width and thickness, as shown in Fig. 3b. In the simulation, the kinetic inductance of the superconductor is taken into account through the equations in [39], where we assume a uniform current distribution in the signal electrode. With a Nb signal electrode of 275 nm thickness and 2 \(\upmu\)m width, the simulated specific capacitance, geometric inductance and kinetic inductance are 0.74 pF/cm, 6.2 nH/cm and 1.2 nH/cm, and the microwave index can be matched to that of the optics while the impedance is simultaneously matched to 50 \(\Omega\) (Fig. 3c). Note that the low kinetic inductance enables the velocity and impedance match. This could also be achieved using other elementary superconductors, like aluminum or indium, but it would be challenging for compound superconductors like NbN, TiN or NbTiN, whose kinetic inductance can be one order of magnitude higher. In Fig. 3d, we assess the EO bandwidth of a 0.2 m SEOM at cryogenic temperatures as the device is cooled below the superconducting transition temperature \(T_{c}\) of the Nb electrodes (\(\sim 8\) K, Fig. 3d inset). Above \(T_{c}\), the signal electrode has a resistance of 13.8 k\(\Omega\), which expectedly limits the EO bandwidth to tens of megahertz, as shown in the EO response measured at 9 K. After the electrodes turn superconducting, the EO bandwidth immediately expands by more than three orders of magnitude. Although our simulations predict that the velocity and impedance matching can be simultaneously fulfilled on the LNOI platform, the actual device performance might differ from the simulations. This potential deviation could be due to fabrication imperfections and the uncertainties of the material coefficients that we incorporated in our simulations. In experiments, this deviation can be compensated by fine-tuning the kinetic inductance of the superconducting electrodes [40].
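From the quoted per-unit-length values, the velocity match follows directly from \(n_{\mathrm{mw}}=c\sqrt{L^{\prime}C^{\prime}}\) and \(Z=\sqrt{L^{\prime}/C^{\prime}}\). In the sketch below, the 50 \(\Omega\) reading additionally assumes that the quoted capacitance describes a single arm and that the two MZI arms are driven in parallel from the common input port; this parallel-arm reading is our inference from the electrode layout, not stated explicitly in the text:

```python
# Hedged estimate from the quoted per-unit-length values (single-arm reading).
c0 = 2.998e8                      # speed of light, m/s

Cpl = 0.74e-12 / 1e-2             # 0.74 pF/cm -> F/m
Lpl = (6.2e-9 + 1.2e-9) / 1e-2    # geometric + kinetic inductance -> H/m

n_mw = c0 * (Lpl * Cpl) ** 0.5    # ~2.22, close to the optical group index 2.25
Z_arm = (Lpl / Cpl) ** 0.5        # ~100 Ohm for one arm
print(n_mw, Z_arm, Z_arm / 2)     # two arms in parallel -> ~50 Ohm at the feed
```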
The temperature dependence of the microwave index and propagation loss is measured and included in Supplementary Information Sec. VI. With these measured results, we fit the response curves using the theoretical model described in Supplementary Information Sec. III. As shown in Fig. 3d, the highest EO bandwidth of 17.5 GHz is achieved at 5.6 K. According to our model and fitting, the index is best matched at around 6.4 K, but as this temperature approaches \(T_{c}\), the higher microwave loss limits the EO bandwidth (details in Supplementary Information Sec. VI). For future devices, achieving index matching at a lower temperature with lower microwave loss will further improve the EO bandwidth.

Figure 3: SEOM bandwidth: modeling and experimental characterization. **a**, Cross-sectional model of the LN optical waveguide and superconductor microwave transmission line. The microwave transmission line has a GSG configuration. The LN ridge waveguide is cladded by SiO\({}_{2}\) and placed between the signal and ground electrodes. \(h\), \(w\) and \(gap\) denote the electrode thickness, the signal electrode width and the gap between the signal and ground electrodes. **b-c**, Simulated microwave group index and characteristic impedance. Fixing \(gap=5.2\,\upmu\)m, the microwave group index can be matched to that of optics when the Nb film thickness is around 275 nm and the signal electrode width is around 2 \(\upmu\)m. Simultaneously, the transmission line impedance can be matched to 50 \(\Omega\) on this LNOI platform. **d**, EO bandwidth measurement of a 0.2 m SEOM at different temperatures. The measured EO responses at 4.8 K, 5.6 K, 6.4 K and 6.8 K are fitted (lines), from which we derive 3 dB bandwidths of 13.5 GHz, 17.5 GHz, 16.8 GHz and 11.0 GHz, respectively. When the temperature is above \(T_{c}\), the bandwidth drops to tens of megahertz. The inset shows that the DC resistance of the signal electrode of the transmission line drops from 13.8 k\(\Omega\) to zero as the temperature decreases below the \(T_{c}\) of 8 K.

**Optical readout of an RSFQ circuit.** Enabled by the ultra-low \(V_{\pi}\) and high bandwidth of our SEOM, we demonstrate the first direct optical readout of an RSFQ circuit. The RSFQ chip is designed by HYPRES, Inc. and fabricated at MIT Lincoln Laboratory using the SFQ5ee process [41]. It employs an on-chip SQUID-stack driver to output a 5 mV \(V_{\text{pp}}\) signal into a 50 \(\Omega\) impedance at up to 10 Gbps. The RSFQ chip is packaged in a mu-metal housing mounted on the back side of the sample plate of a 4 K cryostat. To interface with the RSFQ circuit, our SEOM is fully packaged to provide a modularized EO interface. As shown in Fig. 4c, the on-chip electrode pads are wire-bonded to a printed circuit board (PCB) with very short wires (1-2 mm) to ensure a high-bandwidth connection. The PCB is then connectorized on the aluminum housing. Photonic packaging is realized through on-chip optical grating couplers coupled to single-mode optical fibers (Supplementary Information Sec. VIII). The SEOM module (a 0.2 m SEOM is used in this experiment, \(V_{\pi}\)=220 mV @ 300 K and 380 mV @ 4 K) is mounted on the top side of the sample plate and directly connected to the RSFQ module through coaxial cables, as shown in Fig. 4d. Therefore, the output data stream from the RSFQ circuit is first routed to the SEOM device, where it is translated into the optical domain and subsequently lifted to room temperature through optical fibers.

To validate the SEOM's EO performance at low excitation signal levels, we first send signals from an arbitrary waveform generator (AWG) at room temperature through different attenuation levels to drive the modulator. With 20 mV and 10 mV peak-to-peak pseudo-random bit sequences (PRBS7) in non-return-to-zero (NRZ) format at 1 Gbps, the packaged device generates clear eye diagrams, as shown in Fig. 4e-f, suggesting that this SEOM device can directly handle the data stream emerging from the RSFQ circuit, which outputs signals with a typical \(V_{\text{pp}}\) of about 5 mV. Next, we switch to the RSFQ circuit as the signal source. To electrically visualize the weak signal generated by the RSFQ circuit, we have to amplify the signal at cryogenic temperatures; it would otherwise be submerged by room-temperature noise. The electrical readout of the PRBS7 NRZ signal uses an amplifier with 35 dB gain at 4 K, and the signal is shown in Fig. 4g. The photodetected signal after the SEOM, without any electrical amplification, is displayed in Fig. 4h.
\(h\), \(w\) and _gap_ denote the electrode thickness, signal electrode width and the gap between signal and ground electrodes. **b-c**, Simulated microwave group index and characteristic impedance. Fixing \(gap=5.2\,\upmu\)m, the microwave group index can be matched to that of optics when the Nb film thickness is around 275 nm and the signal electrode width is around 2 \(\upmu\)m. Simultaneously, the transmission line impedance can be matched to 50 \(\Omega\) on this LNOI platform. **d**, EO bandwidth measurement of a 0.2 m SEOM at different temperatures. The measured EO responses at 4.8 K, 5.6 K, 6.4 K and 6.8 K are fitted (line), from which we derive 3 dB bandwidths of 13.5 GHz, 17.5 GHz, 16.8 GHz and 11.0 GHz, respectively. When the temperature is above \(T_{c}\), the bandwidth drops to tens of megahertz. The inset shows that the DC resistance of the signal electrode of the transmission line drops from 13.8 k\(\Omega\) to zero as the temperature decreases below the \(T_{c}\) of 8 K. As the drive voltage decreases from 20 mV and 10 mV to 5 mV, the signal-to-noise ratio (SNR) degrades from 10.4 dB and 7.0 dB to 3.1 dB. The corresponding bit error rates (BER) are calculated to be \(1.4\times 10^{-6}\), \(7.7\times 10^{-4}\) and \(2.2\times 10^{-2}\), respectively [42]. The BER of \(2.2\times 10^{-2}\) is within the tolerance of forward error correction (20% overhead with tolerable BER of \(2.7\times 10^{-2}\)[43; 44]). Here, a significant limitation on the SNR arises from the high optical insertion loss at cryogenic temperatures. The total optical loss is 20 dB, which mostly comprises a 12 dB coupling loss and an additional 8 dB on-chip insertion loss. To compensate for the optical loss, we use a relatively high laser power of 17 dBm for the SEOM device. At this input power level, the calculated transduction efficiency is \(1.4\times 10^{-4}\) as shown in Fig. 5. The current SNR limitation hinders our ability to demonstrate higher-speed eye diagrams, despite the SEOM device's inherent high bandwidth (Supplementary Information Sec. VII). In the following section, we delve into this SNR limitation and highlight the potential of enabling optical readout of the RSFQ circuit at 10 Gbps with significantly lower optical power by further reducing optical losses. ## Discussion We have established that the use of superconductors breaks the trade-off between \(V_{\pi}\) and modulation bandwidth in EO modulator designs by suppressing the ohmic loss. Nevertheless, the optical loss comes into play as the limiting factor for meter-long modulators. As shown in Supplementary Information Sec. III, the transduction efficiency increases quadratically with the length of modulation, while the light intensity decays exponentially as it propagates. With a given optical input power, the transduction efficiency \(\eta\) (or equivalently the photon number per bit in eye diagram measurement) is given by: \[\eta=P_{\mathrm{opt}}\left[\frac{\pi^{2}}{2}\frac{\Omega}{\omega}\frac{Z_{0}}{(V_{\pi}L)^{2}}\right]e^{-\alpha_{o}L}L^{2}, \tag{1}\] where \(P_{\text{opt}}\) is the optical input power, \(\Omega\) and \(\omega\) are the microwave and optical angular frequencies, \(\alpha_{o}\) is the optical propagation loss, \(Z_{0}\) is the microwave transmission line characteristic impedance, and \(L\) is the modulation length. Figure 4: Photonic link from an RSFQ circuit to room temperature. **a**, Schematic illustration of the photonic link interfacing RSFQ circuits to room temperature through SEOM. **b**, Micrograph of the RSFQ IC. **c**, Packaged SEOM device. **d**, Picture showing the physical interface between RSFQ and SEOM.
The RSFQ module is packaged in a mu-metal shield and mounted to the back side of the sample plate of a 4 K cryostat whereas the SEOM module is mounted on the top side. The output signal from the RSFQ is directly routed to the SEOM through coaxial cables without intermediate amplification. **e-f**, Eye diagrams generated by the packaged SEOM when driven by an arbitrary waveform generator (AWG) at room temperature. The PRBS7 data stream in NRZ format is sent to the SEOM with peak-to-peak amplitudes of 20 mV and 10 mV at 1 Gbps. The SNR is 10.4 dB and 7.0 dB for the 20 mV and 10 mV drive voltages. **g**, Electrical characterization of the RSFQ circuit output signal. The PRBS7 NRZ signal generated by the RSFQ circuit is amplified by 25 dB at 40 K and subsequently routed to room temperature. **h**, Direct optical readout of the 5 mV \(V_{\mathrm{pp}}\) RSFQ signal. The SNR of the photodetected signal is 3.1 dB. As the voltage-length product \(V_{\pi}L\) is a constant, Eq. (1) has only one independent variable \(L\), and it implies that there is an optimal modulation length for a given propagation loss \(\alpha_{o}\) (\(L=2/\alpha_{o}\)). Our current device still experiences residual photorefractive loss, which leads to a propagation loss of about 0.8 dB/cm (Supplementary Information Sec. IX). This material degradation can be traced to unrepaired damage induced by electron-beam exposure during our lithography process, and we cannot recover this damage through the conventional thermal annealing process because the superconductor cannot survive the high annealing temperature. This fabrication-induced damage could be avoided or recovered, for example, by changing our current e-beam lithography to photo-lithography or performing the annealing in an ultra-high vacuum chamber. Without the material damage, 0.027 dB/cm has been reported on microring resonators [45]. Given the current 0.8 dB/cm propagation loss, the optimal modulation length is around 10 cm in each MZI arm, as shown in Fig. 5a. With the material damage recovered and thus a lower propagation loss, the transduction efficiency can be further improved at a longer modulation length. With 0.05 dB/cm propagation loss, the transduction efficiency will be above 1%. Our current cryogenic power dissipation is also limited by the large optical losses. The power dissipation is composed of two parts: electrical and optical power dissipation. Electrically, power only dissipates at the end of the microwave transmission line when it is terminated by a 50 \(\Omega\) load. Although a DC voltage bias is applied between the signal and ground electrodes, this pure voltage bias on the dielectric material does not consume energy. The dissipated microwave power can be calculated as \(V^{2}/Z_{0}\), where \(V\) is the signal amplitude (one half of \(V_{\text{pp}}\)) and \(Z_{0}\) is the microwave impedance. In our RSFQ optical readout demonstration, the 5 mV \(V_{\text{pp}}\) signal at 1 Gbps corresponds to 125 aJ/bit. If operated at a higher rate of 10 Gbps, this will be further reduced to 12.5 aJ/bit.
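The numbers above are straightforward to verify: the optimal length follows from maximizing the \(e^{-\alpha_{o}L}L^{2}\) factor in Eq. (1) once the dB-scale loss is converted to nepers, and the electrical dissipation per bit is \(V^{2}/(Z_{0}\cdot\text{bit rate})\). A short sketch, using only values taken from the text:

```python
import math

def alpha_per_m(loss_dB_per_cm):
    """Convert a power propagation loss in dB/cm to 1/m (nepers)."""
    return loss_dB_per_cm * 100 * math.log(10) / 10

# d/dL [exp(-a L) L^2] = 0  =>  L_opt = 2/a
for loss in (0.8, 0.2, 0.05):  # dB/cm
    L_opt = 2 / alpha_per_m(loss)
    print(f"{loss} dB/cm -> L_opt = {100 * L_opt:.0f} cm")
# 0.8 dB/cm -> ~11 cm (the ~10 cm quoted above);
# 0.05 dB/cm -> ~174 cm, i.e. the meter scale.

V, Z0, rate = 2.5e-3, 50.0, 1e9  # 2.5 mV amplitude (5 mV Vpp), 50 ohm, 1 Gbps
print(f"{1e18 * V**2 / Z0 / rate:.0f} aJ/bit")  # 125 aJ/bit, as quoted
```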
Optically, most of the 17 dBm optical power is dissipated at cryogenic temperatures due to the large optical loss (50 mW, or 50 pJ/bit at 1 Gbps), which is still comparable with cryogenic broadband electrical amplifiers (typical power consumption is 10-20 mW). Assuming a combined 2.5 dB/facet coupling loss and 0.05 dB/cm propagation loss, a 1 m-long SEOM is capable of optical readout of the RSFQ circuit at 10 Gbps with 0 dBm optical input (100 fJ/bit, see Supplementary Information Sec. X). For large-scale RSFQ readout, although each RSFQ high-frequency output requires one SEOM, it takes only one optical fiber to uplift the signal. For electrical readout, each output would require one coaxial cable, and each coaxial cable introduces tens of mW of heat load from room temperature to 40 K and a few mW of heat load from 40 K to 4 K [7]. Just as commercial EO modulators power today's fiber networks, we believe SEOMs with significantly improved modulation efficiency and bandwidth can offer substantial advantages for future large-scale superconducting circuits. In this Article, we present the conceptual advances of SEOMs, including the development of meter-long modulators and a remarkable two orders of magnitude reduction in \(V_{\pi}\). Through careful device engineering, we achieve over 17 GHz bandwidth on the SEOM. Using this device, we have successfully demonstrated the direct optical readout of an RSFQ circuit for the first time. By further enhancing our fabrication processes and refining material processing to fully unlock the potential of the thin-film LN-superconductor platform, we anticipate several orders of magnitude further improvement in the modulation efficiency of meter-long SEOMs. This will enable the realization of scalable, low-power-consumption, and high-speed fully photonic links for superconducting circuits. ## Acknowledgments This project was funded by IARPA's SuperCables program through the ARO grant W911-NF-19-2-0115, and DOE Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA), Contract No. DE-SC0012704. We thank the Office of Naval Research for providing funding support in the construction of the RF interface through grant N00014-20-1-2134. Y.Z. acknowledges the support from Yale Quantum Institute. We extend our gratitude to Dr. Brad Liu for his assistance with RSFQ circuit module installation and operation. We would also like to acknowledge HYPRES for their contributions to the RSFQ circuit design and for granting permission to use the micrograph of the RSFQ circuit chip in this article. Special thanks go to Drs. Dmitri Kirichenko and Deep Gupta for their insightful discussions. We are grateful to Drs. Michael Gehl, Ben Palmer, Saewoo Nam, Deborah Van Vecheten, William Mayer, William Harrod for providing valuable technical and administrative support throughout this project. Finally, we thank Y. Sun, S. Rinehart, L. McCabe, K. Woods and M. Rooks for assistance in the device fabrication. Figure 5: Projected SEOM transduction efficiency. The electro-optic transduction efficiency of SEOM is calculated as a function of modulation length under different optical propagation loss conditions. The calculation assumes a 10 dBm optical input power and uses a cryogenic \(V_{\pi}L\) of 3.8 V \(\cdot\) cm given by experimental results. The current SEOM device possesses a transduction efficiency of \(1.4\times 10^{-4}\) under 0.8 dB/cm propagation loss. With the propagation loss improved to 0.2 dB/cm and 0.05 dB/cm, the transduction efficiency can approach 1% and 10% with meter-long modulation length.
### Author contributions H.X.T. and M.S. conceived the idea and experiment. M.S. fabricated the device and performed the experiment. J.X. and Y.X. helped with fabrication and experiments. S.W., R.C., W.F. and Y.Z. helped with device packaging and instrumentation. M.S. wrote the manuscript, and all authors contributed to the manuscript. H.X.T. supervised the work. ### Competing interest The authors declare no competing interests. ## Methods **Device fabrication.** The SEOM device is fabricated on an \(x\)-cut LNOI wafer (NanoLN) featuring 600 nm thin-film LN and 4.7 um buried oxide on a 500 um-thick high-resistivity silicon substrate. The optical waveguides are defined using electron beam lithography (EBL) with hydrogen silsesquioxane (HSQ) as resist and etched by argon reactive ion etching. The optical waveguide is partially etched by 350 nm with a remaining 250 nm slab. After stripping the residual HSQ resist, 900 nm of HSQ is spun on the chip and then defined by EBL as a cladding layer. The chip is thermally annealed at 400 \({}^{\circ}\)C for 1 hour after resist development to turn the HSQ resist into silicon dioxide. The niobium electrodes (200-300 nm) are defined using EBL with polymethyl methacrylate (PMMA) resist through a liftoff process. The niobium film is deposited through electron beam evaporation in an ultra-high vacuum chamber (\(2\times 10^{-8}\) torr during the deposition) at a 10 Å/s deposition rate. The fabrication process flow is included in the Supplementary Information Sec. I. **Device characterization.** Cryogenic characterization of the EO bandwidth of the SEOM devices is performed in a closed-cycle low-vibration cryostat from Montana Instruments. The die-level chip is mounted on a 3-axis stack of micropositioners (Attocube). Through active alignment, photonic coupling to the SEOM device is achieved via a fiber array aligned to on-chip optical grating couplers, and electrical connection is made using a multi-channel RF probe contacting the on-chip superconductor electrode pads. Details regarding the calibration, data processing and fitting of the EO response measurement are included in the Supplementary Information Sec. VI. Optical readout of the RSFQ circuit is performed using a packaged SEOM device in another closed-cycle custom-made cryostat. The packaged SEOM device and the RSFQ module are mounted on the front and back sides of the 4 K sample plate, respectively. Details of the device packaging are included in the Supplementary Information Sec. VIII. ## Data availability The data that support the findings of this study are available from the corresponding authors upon reasonable request. ## Code availability All relevant computer codes supporting this study are available from the corresponding author upon reasonable request.
2309.13097
Zero-Shot Object Counting with Language-Vision Models
Class-agnostic object counting aims to count object instances of an arbitrary class at test time. It is challenging but also enables many potential applications. Current methods require human-annotated exemplars as inputs which are often unavailable for novel categories, especially for autonomous systems. Thus, we propose zero-shot object counting (ZSC), a new setting where only the class name is available during test time. This obviates the need for human annotators and enables automated operation. To perform ZSC, we propose finding a few object crops from the input image and using them as counting exemplars. The goal is to identify patches containing the objects of interest while also being visually representative for all instances in the image. To do this, we first construct class prototypes using large language-vision models, including CLIP and Stable Diffusion, to select the patches containing the target objects. Furthermore, we propose a ranking model that estimates the counting error of each patch to select the most suitable exemplars for counting. Experimental results on a recent class-agnostic counting dataset, FSC-147, validate the effectiveness of our method.
Jingyi Xu, Hieu Le, Dimitris Samaras
2023-09-22T14:48:42Z
http://arxiv.org/abs/2309.13097v1
# Zero-Shot Object Counting ###### Abstract Class-agnostic object counting aims to count object instances of an arbitrary class at test time. It is challenging but also enables many potential applications. Current methods require human-annotated exemplars as inputs which are often unavailable for novel categories, especially for autonomous systems. Thus, we propose zero-shot object counting (ZSC), a new setting where only the class name is available during test time. This obviates the need for human annotators and enables automated operation. To perform ZSC, we propose finding a few object crops from the input image and using them as counting exemplars. The goal is to identify patches containing the objects of interest while also being visually representative for all instances in the image. To do this, we first construct class prototypes using large language-vision models, including CLIP and Stable Diffusion, to select the patches containing the target objects. Furthermore, we propose a ranking model that estimates the counting error of each patch to select the most suitable exemplars for counting. Experimental results on a recent class-agnostic counting dataset, FSC-147, validate the effectiveness of our method. Class-agnostic object counting, variational autoencoder, diffusion models, stable diffusion, zero-shot learning ## 1 Introduction Object counting aims to infer the number of objects in an image. Most of the existing methods focus on counting objects from specialized categories such as human crowds [1], cars [2], animals [3], and cells [4]. These methods count only a single category at a time. Recently, class-agnostic counting [5, 6, 7] has been proposed to count objects of arbitrary categories. Several human-annotated bounding boxes of objects are required to specify the objects of interest (see Figure 1(a)). However, having humans in the loop is not practical for many real-world applications, such as fully automated wildlife monitoring systems or visual anomaly detection systems. A more practical setting, exemplar-free class-agnostic counting, has been proposed recently by Ranjan _et al._[8]. They introduce RepRPN, which first identifies the objects that occur most frequently in the image, and then uses them as exemplars for object counting. Even though RepRPN does not require any annotated boxes at test time, the method simply counts objects from the class with the highest number of instances. As a result, it cannot be used for counting a specific class of interest. The method is only suitable for counting images with a single dominant object class, which limits its potential applicability. Thus, our goal is to build an exemplar-free object counter where we can specify what to count. To this end, we introduce a new counting task in which the user only needs to provide the name of the class for counting rather than the exemplars (see Figure 1(b)). Note that the class to count during test time can be arbitrary. For cases where the test class is completely unseen to the trained model, the counter needs to adapt to the unseen class without any annotated data. Hence, we name this setting zero-shot object counting (ZSC), inspired by previous zero-shot learning approaches [9, 10]. To count without any annotated exemplars, we propose finding a few patches in the input image containing the target object to use them as counting exemplars.
There are two challenges: 1) how to localize patches that contain the object of interest based on the provided class name, and 2) how to select _good_ exemplars for counting. Ideally, good object exemplars are visually representative for most instances in the image, which can benefit the object counter. In addition, we want to avoid selecting patches that contain irrelevant objects or backgrounds, which likely lead to incorrect object counts. To this end, we propose a two-step method that first localizes the class-relevant patches which contain the objects of interest based on the given class name, and then selects among these patches the optimal exemplars for counting. We use these selected exemplars, together with a pre-trained exemplar-based counting model, to achieve exemplar-free object counting. The first step of our framework involves constructing a class prototype based on the given class name. Essentially, this requires a mapping between the categorical label and its visual feature. We employ pre-trained large language-vision models to accomplish this via two approaches. In the conference version of this paper [11], we learn this mapping between language queries and visual features via a conditional variational autoencoder (VAE). This VAE model is trained to generate visual features of object crops for any arbitrary class, conditioned on its semantic embedding extracted from a pre-trained language-vision model [12]. We take the average of the generated features to compute the class prototype, which can then be used to select class-relevant patches through a simple nearest-neighbour lookup scheme. In essence, our VAE-based approach creates a single prototypical feature for each category that can be applied to any images of this class. However, a single prototypical feature might not work well for categories with significant intra-class variance. Objects of the same category across different images can exhibit significant differences in colors (e.g., a green apple versus a red apple), shapes (e.g., an SUV versus a regular car), scales, or materials (a wooden versus a fabric chair). To better handle this variability, we propose to construct a class prototype specific to each image. To do so, we leverage the recent advancements in text-to-image generative models, i.e., Stable Diffusion [13], for prototype generation. Compared to classic VAE-based models, Stable Diffusion enables more realistic and diverse sample generation thanks to large-scale training data. This provides a way to deal with the variations in query objects. Specifically, given a query image, we first use Stable Diffusion to generate a variety of images containing the objects of interest. Then we select among them the object crops that most resemble the query objects and only use them for constructing the class prototype. In this way, a unique prototype is constructed specifically for each testing sample, as opposed to the VAE-based approach that uses a single universal prototype for all images. We show that using image-specific prototypes generally leads to better counting performance, compared to using a single generic categorical one. Fig. 1: Our proposed task of zero-shot object counting (ZSC). Traditional few-shot counting methods require a few exemplars of the object category (a). We propose zero-shot counting where the counter only needs the class name to count the number of object instances (b). Few-shot counting methods require human annotators at test time while zero-shot counters can be fully automatic.
After obtaining the class-relevant patches, we want to select among them the optimal patches to be used as counting exemplars. Here we observe that the feature maps obtained using _good_ exemplars and _bad_ exemplars often exhibit distinguishable differences. An example of the feature maps obtained with different exemplars is shown in Figure 2. The feature map from a _good_ exemplar typically exhibits some repetitive patterns (e.g., the dots on the feature map) that center around the object areas, while the patterns from a _bad_ exemplar are more irregular and occur randomly across the image. Based on this observation, we train a model to measure the goodness of an input patch based on its corresponding feature maps. Specifically, given an arbitrary patch and a pre-trained exemplar-based object counter, we train this model to predict the counting error of the counter when using the patch as the exemplar. Here the counting error can indicate the goodness of the exemplar. After this error predictor is trained, we use it to select those patches with the smallest predicted errors as the final exemplars for counting. Experiments on the FSC-147 dataset show the effectiveness of our proposed patch selection method. We also provide analyses to show that patches selected by our method can be used in other exemplar-based counting methods to achieve exemplar-free counting. In short, our main contributions are: * We introduce the task of zero-shot object counting that counts the number of instances of a specific class in the input image, given only the class name and without relying on any human-annotated exemplars. * We leverage language-vision models to construct class prototypes via two approaches: a VAE-based approach and an SD-based approach. We show that in both cases the class prototypes can be used to accurately select patches containing objects of interest for counting. * We introduce an error prediction model to further select the optimal patches that yield the smallest counting errors. * We verify the effectiveness of our patch selection method on the FSC-147 dataset, through extensive ablation studies and visualization results. ## 2 Related Work ### _Class-specific Object Counting_ Class-specific object counting focuses on counting pre-defined categories, such as humans [1, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], animals [3], cells [4], or cars [2, 24]. Generally, existing methods can be categorized into two groups: detection-based methods [24, 25, 26] and regression-based methods [27, 28, 29, 30, 31, 32, 33]. Detection-based methods apply an object detector on the image and count the number of objects based on the detected boxes. Regression-based methods predict a density map for each input image, and the final result is obtained by summing up the pixel values. Both types of methods require abundant training data to learn a good model. Class-specific counters can perform well on trained categories. However, they cannot be used to count objects of arbitrary categories at test time. Fig. 2: Feature maps obtained using different exemplars given a pre-trained exemplar-based counting model. The feature maps obtained using good exemplars typically exhibit some repetitive patterns while the patterns from bad exemplars are more irregular. ### _Class-agnostic Object Counting_ Class-agnostic object counting aims to count arbitrary categories given only a few exemplars [34, 35, 36, 6, 37, 38, 7, 39].
GMN [7] uses a shared embedding module to extract feature maps for both query images and exemplars, which are then concatenated and fed into a matching module to regress the object count. FamNet [5] adopts a similar correlation-matching scheme and further applies test-time adaptation. These methods require human-annotated exemplars as inputs. Recently, exemplar-free object counting has been proposed to eliminate the need for user inputs. Ranjan _et al._ have proposed RepRPN [8], which achieves exemplar-free counting by identifying exemplars from the most frequent objects via a Region Proposal Network (RPN)-based [40] model. However, the class of interest cannot be explicitly specified for the RepRPN. In comparison, our proposed method can count instances of a specific class given only the class name. ### _Zero-shot Image Classification_ Zero-shot classification aims to classify unseen categories for which data is not available during training [41, 42, 43, 44, 45, 46]. Semantic descriptors are mostly leveraged as a bridge to enable the knowledge transfer between seen and unseen classes. Earlier zero-shot learning (ZSL) works relate the semantic descriptors with visual features in an embedding space and recognize unseen samples by searching their nearest class-level semantic descriptor in this embedding space [47, 48, 45, 49]. Recently, generative models [50, 51, 52] have been widely employed to synthesize unseen class data to facilitate ZSL [53, 54, 55]. Xian _et al._[54] use a conditional Wasserstein Generative Adversarial Network (GAN) [56] to generate unseen features which can then be used to train a discriminative classifier for ZSL. In our method, we also train a generative model conditioned on class-specific semantic embeddings. Instead of using this generative model to hallucinate data, we use it to compute a prototype for each class. This class prototype is then used to select patches that contain objects of interest. ### _Diffusion Models_ Diffusion models [57, 58, 59] have recently demonstrated great success in text-to-image generative systems (e.g., DALL-E [60], Imagen [61] and Stable Diffusion (SD) [13]). Exploring the potential of pre-trained diffusion models in downstream tasks has gained increasing attention. They have been used in few-shot image classification [62], semantic segmentation [63, 64] and object discovery [65]. Karazija _et al._[64] leverage a diffusion-based generative model to produce a set of feature prototypes, which can then be used in a nearest-neighbour lookup scheme to segment images. In our method, we also use a text-conditioned diffusion model (i.e., Stable Diffusion) to construct visual class prototypes. We show that these prototypes can be used to find class-relevant patches for the task of class-agnostic counting. ## 3 Method Figure 3 summarizes our proposed method. We first construct a class prototype for the given class name in a pre-trained feature space. Then given an input query image, we generate a set of object proposals with a pre-trained RPN and crop the corresponding image patches. We extract the feature embedding for each patch and select the patches whose embeddings are the nearest neighbors of the class prototype as class-relevant patches. We further use an error predictor to select the patches with the smallest predicted errors as the final exemplars for counting. We use the selected exemplars in an exemplar-based object counter to infer the object counts.
For the rest of the paper, we denote this exemplar-based counter as the "base counting model". We will first describe how we train this base counting model and then present the details of our patch selection method. ### _Training Base Counting Model_ We train our base counting model using abundant training images with annotations. Similar to previous works [5, 6], the base counting model uses the input image and the exemplars to obtain a density map for object counting. The model consists of a feature extractor \(F\) and a counter \(C\). Given a query image \(I\) and an exemplar \(B\) of an arbitrary class \(c\), we input \(I\) and \(B\) to the feature extractor to obtain the corresponding output, denoted as \(F(I)\) and \(F(B)\) respectively. \(F(I)\) is a feature map of size \(d\times h_{I}\times w_{I}\) and \(F(B)\) is a feature map of size \(d\times h_{B}\times w_{B}\). We further perform global average pooling on \(F(B)\) to form a feature vector \(b\) of \(d\) dimensions. After feature extraction, we obtain the similarity map \(S\) by correlating the exemplar feature vector \(b\) with the image feature map \(F(I)\). Specifically, if \(w_{ij}=F_{ij}(I)\) is the channel feature at spatial position \((i,j)\), \(S\) can be computed by: \[S_{ij}(I,B)=w_{ij}^{T}b. \tag{1}\] In the case where \(n\) exemplars are given, we use Eq. 1 to calculate \(n\) similarity maps, and the final similarity map is the average of these \(n\) similarity maps. We then concatenate the image feature map \(F(I)\) with the similarity map \(S\), and input them into the counter \(C\) to predict a density map \(D\). The final predicted count \(N\) is obtained by summing over the predicted density map \(D\): \[N=\sum_{i,j}D_{(i,j)}, \tag{2}\] where \(D_{(i,j)}\) denotes the density value for pixel \((i,j)\). The supervision signal for training the counting model is the \(L_{2}\) loss between the predicted density map and the ground truth density map: \[L_{\text{count}}=\|D(I,B)-D^{*}(I)\|_{2}^{2}, \tag{3}\] where \(D^{*}\) denotes the ground truth density map. ### _Zero-shot Object Counting_ In this section, we describe how we count objects of any unseen category given only the class name without access to any exemplar. Our strategy is to select a few patches in the image that can be used as exemplars for the base counting model. These patches are selected such that: 1) they contain the objects that we are counting and 2) they benefit the counting model, i.e., lead to small counting errors. #### 3.2.1 Selecting Class-relevant Patches To select patches that contain the objects of interest, we first generate a class prototype based on the given class name. The class prototype can be considered as a class center representing the patch-level feature distribution of the corresponding class in an embedding space. Then we use the generated class prototype to select the class-relevant patches from a set of object patches cropped from the testing image. Specifically, we introduce two ways of generating prototypes, i.e., generating semantic prototypes using a conditional VAE and generating visual prototypes using samples from a latent text-to-image diffusion model, i.e., Stable Diffusion. **VAE-based prototype generation.** To generate class prototypes, we train a conditional VAE model to generate patch-level visual features for an arbitrary class based on the semantic embedding of the class. This strategy is inspired by previous zero-shot learning approaches [53, 54].
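Before the prototype construction is detailed further, the correlation at the heart of the base counting model (Eqs. 1-2) is simple enough to sketch. The following is a minimal PyTorch-style rendering under our own naming, not the authors' code; the feature extractor and counter are assumed to be given:

```python
import torch

def similarity_map(img_feat, exemplar_feats):
    """img_feat: (d, h_I, w_I); exemplar_feats: list of (d, h_B, w_B).
    Returns the similarity map S averaged over the n exemplars (Eq. 1)."""
    maps = []
    for fB in exemplar_feats:
        b = fB.mean(dim=(1, 2))  # global average pooling -> (d,)
        maps.append(torch.einsum("dhw,d->hw", img_feat, b))  # S_ij = w_ij^T b
    return torch.stack(maps).mean(dim=0)

# The counter C then regresses the density map D from the channel-wise
# concatenation of img_feat and S; the predicted count is D.sum() (Eq. 2),
# supervised with the L2 loss of Eq. (3).
```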
The semantic embedding is obtained from a pre-trained text-vision model [12] given the corresponding class name. Specifically, we train a VAE model to reconstruct deep features extracted from a pre-trained ImageNet model. The VAE is composed of an Encoder \(E\), which maps a visual feature \(x\) to a latent code \(z\), and a decoder \(G\) which reconstructs \(x\) from \(z\). Both \(E\) and \(G\) are conditioned on the semantic embedding \(a\). The loss function for training this VAE for an input feature \(x\) can be defined as: \[\begin{split} L_{V}(x)=\text{KL}\left(q(z|x,a)||p(z|a)\right)\\ -\text{E}_{q(z|x,a)}[\text{log}\;p(x|z,a)].\end{split} \tag{4}\] The first term is the Kullback-Leibler divergence between the VAE posterior \(q(z|x,a)\) and a prior distribution \(p(z|a)\). The second term is the decoder's reconstruction error. \(q(z|x,a)\) is modeled as \(E(x,a)\) and \(p(x|z,a)\) is equal to \(G(z,a)\). The prior distribution is assumed to be \(\mathcal{N}(0,I)\) for all classes. We can use the trained VAE to generate the semantic prototype for an arbitrary target class for counting. Specifically, given the target class name \(y\), we first generate a set of features by inputting the respective semantic vector \(a^{y}\) and a noise vector \(z\) to the decoder \(G\): \[\mathbb{G}^{y}=\{\hat{x}|\hat{x}=G(z,a^{y}),z\sim\mathcal{N}(0,I)\}. \tag{5}\] The class prototype \(\mathbb{P}^{y}\) is computed by taking the mean of all the features generated by the VAE: \[\mathbb{P}^{y}=\frac{1}{|\mathbb{G}^{y}|}\sum\nolimits_{\hat{x}\in\mathbb{G}^{y}}\hat{x} \tag{6}\] **SD-based prototype generation.** In addition to the VAE-based approach for prototype generation, we further leverage the recent advancements in text-to-image models, i.e., Stable Diffusion, to construct class prototypes from SD-generated images. Compared to classic VAE-based models, Stable Diffusion enables more realistic and diverse sample generation, which allows handling the intra-class variation among query objects more effectively. Specifically, given the target class name for counting, we first use a pre-trained Stable Diffusion model to generate a set of images with the class name as prompt. We observe that the SD-generated images often contain multiple object instances in various contexts and backgrounds. However, our goal is to obtain a few representative object crops of the target class that can be used to construct reference prototypes. In particular, given a query image, we aim to find a few diffusion-generated object crops that most resemble the target objects in the query image. To do so, we first apply a pre-trained RPN to predict object proposals on both the diffusion-generated images and the query image. Then we compute the pairwise distance between the diffusion-generated object embeddings and the query image's object embeddings. We select the top-\(k\) diffusion-generated object embeddings with the nearest mean distance over all query embeddings. Fig. 3: Overview of the proposed method. We first obtain a class prototype for the given class name (e.g. grape) in a pre-trained feature space. Then given an input query image, we generate a set of object proposals with a pre-trained RPN and crop the corresponding image patches. We extract the feature embedding for each patch and select the patches whose embeddings are the nearest neighbors of the class prototype as class-relevant patches. Then for each selected class-relevant patch, we use a pre-trained exemplar-based counting model to obtain the intermediate feature maps.
Our proposed error predictor then takes the feature maps as input and predicts the counting error (here we use normalized counting errors). We select the patches with the smallest predicted errors as the final exemplar patches and use them for counting. We average these \(k\) embeddings to construct the visual class prototype. An example is shown in Figure 4 where the target objects are red apples. We first obtain a set of object patches containing various crops of apples and select from them a set of red apples to construct the prototype. **Class-relevant patch selection.** Using the class prototype, either generated using the VAE or Stable Diffusion, we can select the class-relevant patches across the query image. Specifically, we first use a pre-trained RPN to predict object proposals across the query image and extract their corresponding ImageNet features \(\{f_{1},f_{2},...,f_{m}\}\). To select the class-relevant patches, we calculate the \(L_{2}\) distance between the class prototype and the patch embedding, namely \(d_{i}=\|f_{i}-\mathbb{P}^{y}\|_{2}\). Then we select the patches whose embeddings are the nearest neighbors of the class prototype as the class-relevant patches. Since the ImageNet feature space is highly discriminative, i.e., features close to each other typically belong to the same class, the selected patches are likely to contain the objects of the target class. #### 3.2.2 Selecting Exemplars for Counting Given a set of class-relevant patches and a pre-trained exemplar-based object counter, we aim to select a few exemplars from these patches that are optimal for counting. To do so, we introduce an error prediction network that predicts the counting error of an arbitrary patch when the patch is used as the exemplar. The counting error is calculated from the pre-trained counting model. Specifically, to train this error predictor, given a query image \(\bar{I}\) and an arbitrary patch \(\bar{B}\) cropped from \(\bar{I}\), we first use the base counting model to get the image feature map \(F(\bar{I})\), similarity map \(\bar{S}\), and the final predicted density map \(\bar{D}\). The counting error of the base counting model can be written as: \[\epsilon=|\sum_{i,j}\bar{D}_{(i,j)}-\bar{N}^{*}|, \tag{7}\] where \(\bar{N}^{*}\) denotes the ground truth object count in image \(\bar{I}\). \(\epsilon\) can be used to measure the goodness of \(\bar{B}\) as an exemplar for \(\bar{I}\), i.e., a small \(\epsilon\) indicates that \(\bar{B}\) is a suitable exemplar for counting and vice versa. The error predictor \(R\) is trained to regress the counting error produced by the base counting model. The input of \(R\) is the channel-wise concatenation of the image feature map \(F(\bar{I})\) and the similarity map \(\bar{S}\). The training objective is the minimization of the mean squared error between the output of the predictor \(R(F(\bar{I}),\bar{S})\) and the actual counting error produced by the base counting model \(\epsilon\). After the error predictor is trained, we can use it to select the optimal patches for counting. The candidates for selection here are the class-relevant patches selected by the class prototype in the previous step. For each candidate patch, we use the trained error predictor to infer the counting error when it is being used as the exemplar. The final selected patches for counting are the patches that yield the top-\(s\) smallest counting errors.
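Putting Secs. 3.2.1-3.2.2 together, the selection stage reduces to a few tensor operations. The sketch below is a hypothetical rendering under assumed names (the embeddings, per-patch predictor inputs and the trained `error_predictor` are taken as given), not the authors' implementation:

```python
import torch

def select_exemplars(query_embs, gen_embs, patch_feats, error_predictor,
                     k=5, m=10, s=3):
    """query_embs: (Q, d) embeddings of RPN crops from the query image;
    gen_embs: (G, d) embeddings of crops from SD-generated images;
    patch_feats: per-patch predictor inputs, i.e. concat of F(I) and S.
    Returns the indices of the s selected exemplar patches."""
    # top-k generated crops with the smallest mean distance to query crops
    d_gen = torch.cdist(gen_embs, query_embs).mean(dim=1)          # (G,)
    proto = gen_embs[d_gen.topk(k, largest=False).indices].mean(0)
    # class-relevant patches: nearest neighbours of the prototype (L2)
    d_patch = (query_embs - proto).norm(dim=1)                     # (Q,)
    relevant = d_patch.topk(m, largest=False).indices
    # final exemplars: smallest predicted counting error among them
    errs = torch.stack([error_predictor(patch_feats[i])
                        for i in relevant.tolist()])
    return relevant[errs.squeeze(-1).topk(s, largest=False).indices]
```

The defaults k=5, m=10 and s=3 mirror the choices reported in the implementation details below.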
#### 3.2.3 Using the Selected Patches as Exemplars Using the error predictor, we predict the error for each candidate patch and select the patches that lead to the smallest counting errors. The selected patches can then be used as exemplars for the base counting model to get the density map and the final count. We also conduct experiments to show that these selected patches can serve as exemplars for other exemplar-based counting models to achieve exemplar-free class-agnostic counting. ## 4 Experiments ### _Implementation Details_ **Network Architecture.** For the _base counting model_, we use ResNet-50 as the backbone of the feature extractor, initialized with the weights of a pre-trained ImageNet model. The backbone outputs feature maps of \(1024\) channels. For each query image, the number of channels is reduced to \(256\) using a \(1\times 1\) convolution. For each exemplar, the feature maps are first processed with global average pooling and then linearly mapped to obtain a \(256\)-d feature vector. The counter consists of \(5\) convolutional and bilinear upsampling layers to regress a density map of the same size as the query image. For the _feature generation model_, both the encoder and the decoder are two-layer fully-connected (FC) networks with 4096 hidden units. LeakyReLU and ReLU are the non-linear activation functions in the hidden and output layers, respectively. The dimensions of the latent space and the semantic embeddings are both set to be \(512\). The _error predictor_ is composed of \(5\) convolutional and bilinear upsampling layers, followed by a linear layer to output the counting error. **Dataset.** We use the FSC-147 dataset [5] to train the base counting model and the error predictor. FSC-147 is the first large-scale dataset for class-agnostic counting. It includes \(6135\) images from \(147\) categories ranging from animals and kitchen utensils to vehicles. The categories in the training, validation, and test sets do not overlap. The feature generation model is trained using ImageNet features extracted from MS-COCO objects. **Training Details.** Both the base counting model and the error predictor are trained using the AdamW optimizer with a fixed learning rate of \(10^{-5}\). The base counting model is trained for \(300\) epochs with a batch size of \(8\). We resize the input query image to a fixed height of \(384\), and the width is adjusted accordingly to preserve the aspect ratio of the original image. Exemplars are resized to \(128\times 128\) before being input into the feature extractor. To select the class-relevant patches, we use the Region Proposal Network of Faster RCNN pre-trained on the MS-COCO dataset to generate \(100\) object proposals per image. The feature generation model is trained using the Adam optimizer, and the learning rate is set to \(10^{-4}\). Fig. 4: Prototype generation using Stable Diffusion. With the VAE-based generator, the semantic embeddings are extracted from CLIP [12]. For visual prototype generation, we use stable-diffusion-v1-4 [13] pre-trained on the LAION dataset [66]. The generated images are of size \(512\times 512\). For each generated image, we take the top-\(5\) RPN proposals with the highest objectness scores. We combine all the proposals from diffusion-generated images and extract their embeddings to do similarity matching with the object embeddings from the query image. We select the top-\(5\) embeddings with the nearest distances to the query embeddings and compute their average to obtain the class prototype.
The final selected patches are those that yield the top-\(3\) smallest counting errors predicted by the error predictor. ### _Evaluation Metrics_ We use Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) to measure the performance of different object counters. Besides, we follow [36] to report the Normalized Relative Error (NAE) and Squared Relative Error (SRE). In particular, MAE = \(\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y_{i}}|\); RMSE = \(\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^{2}}\); NAE = \(\frac{1}{n}\sum_{i=1}^{n}\frac{|y_{i}-\hat{y_{i}}|}{y_{i}}\); SRE = \(\sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{(y_{i}-\hat{y_{i}})^{2}}{y_{i}}}\) where \(n\) is the number of test images, and \(y_{i}\) and \(\hat{y_{i}}\) are the ground truth and the predicted number of objects for image \(i\) respectively. Compared with the absolute errors MAE and RMSE, the relative errors NAE and SRE better reflect the practical usage of visual counting [36]. ### _Comparing Methods_ We compare our method with the previous works on class-agnostic counting, which can be categorized into exemplar-based counting methods and reference-less counting methods. Exemplar-based methods include FamNet (Few-shot adaptation and matching Network [5]), BMNet (Bilinear Matching Network [6]), CounTR (Counting TRansformer [37]) and SAFECount (Similarity-Aware Feature Enhancement block for object Counting [38]). These methods require a few human-annotated exemplars as inputs. Reference-less methods, i.e., RepRPN [8] and CounTR [37], do not require annotated boxes at test time. Nevertheless, the class of interest cannot be specified, which makes them only suitable for counting images with a single dominant object class. Our proposed zero-shot counting is a new setup that allows the user to specify what to count by simply providing the class name, without any exemplar. We also make exemplar-based methods work in an exemplar-free manner by replacing the human-provided exemplars with the exemplars generated by a pre-trained object detector. Specifically, we use the RPN of Faster RCNN pre-trained on the MS-COCO dataset and select the top-\(3\) proposals with the highest objectness score as the exemplars. ### _Results_ **Quantitative results.** As shown in Table I, the performance of all exemplar-based counting methods drops significantly when replacing human-annotated exemplars with RPN-generated proposals. BMNet+ [6], for example, shows a \(19.90\) error increase _w.r.t._ the test MAE and a \(40.81\) increase _w.r.t._ the test RMSE. In comparison, the performance gap is much smaller when using our selected patches as exemplars. Our patch selection method with the VAE-generated prototype obtains \(27.47\) MAE on the validation set and \(23.14\) MAE on the test set. By using the SD-generated class prototype, the error rates can be further reduced, achieving \(26.30\) MAE on the validation set and \(21.53\) MAE on the test set. Noticeably, compared with the human-annotated exemplars, the NAE and the SRE on the test set are even reduced when using our selected patches. **Qualitative analysis.** In Figure 5, we present a few input images, the image patches selected by our method, and the corresponding density maps. Our method effectively identifies the patches that are suitable for object counting. The density maps produced by our selected patches are meaningful and close to the density maps produced by human-annotated patches.
The counting model with random image patches as exemplars, in comparison, fails to output meaningful density maps and infers incorrect object counts. In Figure 6, we visualize some images from the FSC-147 dataset and the corresponding patches selected by RPN and our method respectively. The RPN-selected patches are the top-\(3\) proposals with the highest objectness scores. As can be seen from the figure, the patches selected by RPN may contain objects not relevant to the provided class name or contain multiple object instances. These patches are not suitable to be used as counting exemplars and will lead to inaccurate counting results. This suggests that choosing counting exemplars based on objectness score is not reliable. In comparison, our proposed method can accurately localize image patches according to the given class name. These selected patches can then be used as counting exemplars and yield meaningful density maps and reasonable counting results. ## 5 Analysis ### _Ablation Studies_ Our proposed patch selection method consists of two steps: the selection of class-relevant patches via a generated class prototype and the selection of the optimal patches via an error predictor. We analyze the contribution of each step quantitatively and qualitatively. Quantitative results are in Table II. We first evaluate the performance of a simple RPN-based baseline, i.e., using the top-3 RPN proposals with the highest objectness scores as exemplars without any selection step. This baseline method has an error rate of \(32.19\) on the validation MAE and \(29.25\) on the test MAE. As shown in Table II, using the class prototype generated by the VAE to select class-relevant patches reduces the error rate by \(2.34\) and \(4.87\) on the validation and test set _w.r.t._ MAE, respectively. Using the class prototype generated by Stable Diffusion reduces the MAE by \(4.43\) and \(7.26\) on the validation and test set respectively. Applying the error predictor can improve the baseline performance by \(4.63\) on the validation MAE and \(6.46\) on the test MAE. Finally, using the Stable Diffusion prototype and error predictor together further boosts performance, achieving \(26.30\) on the validation MAE and \(21.53\) on the test MAE. We provide further qualitative analysis by visualizing the selected patches. As shown in Figure 7, for each input query image, we show \(10\) class-relevant patches selected using the class prototype generated with Stable Diffusion, ranked by their predicted counting error (from low to high). All the \(10\) selected class-relevant patches exhibit some class-specific features. However, not all these patches are suitable to be used as counting exemplars, i.e., some patches only contain parts of the object, and some patches contain some background. By further applying our proposed error predictor, we can identify the most suitable patches with the smallest predicted counting errors. Both the quantitative comparison in Table II and the qualitative results presented in Figure 7 validate the effectiveness of our patch selection method.
\begin{table} \begin{tabular}{l|c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Exemplars} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Val Set} & \multicolumn{4}{c}{Test Set} \\ & & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline \multirow{8}{*}{Exemplar-based} & FamNet+ [5] & 23.75 & 69.07 & 0.52 & 4.25 & 22.08 & 99.54 & 0.44 & 6.45 \\ & BMNet [6] & 19.06 & 67.95 & 0.26 & 4.39 & 16.71 & 103.31 & 0.26 & 3.32 \\ & BMNet+ [6] & 15.74 & 58.53 & 0.27 & 6.57 & 14.62 & 91.83 & 0.25 & 2.74 \\ & CounTR [37] & 13.13 & 49.83 & 0.24 & 0.45 & 11.95 & 91.23 & 0.23 & 1.72 \\ & SAFECount [38] & 14.46 & 51.88 & 0.26 & 0.91 & 13.58 & 91.31 & 0.25 & 1.66 \\ & Ours & 18.55 & 61.12 & 0.30 & 3.18 & 20.68 & 109.14 & 0.36 & 7.63 \\ \hline \multirow{8}{*}{Reference-less} & RepRPN [8] & 30.40 & 98.73 & - & - & 27.45 & 129.69 & - & - \\ & CounTR [37] & 17.40 & 70.33 & 0.34 & 1.64 & 14.12 & 108.01 & 0.29 & 1.93 \\ & FamNet+ [5] + RPN & 42.85 & 121.59 & 0.75 & 6.94 & 42.70 & 146.08 & 0.74 & 7.14 \\ & BMNet [6] + RPN & 37.26 & 108.54 & 0.42 & 5.43 & 37.22 & 143.13 & 0.41 & 5.31 \\ & BMNet+ [6] + RPN & 35.15 & 106.07 & 0.41 & 5.28 & 34.52 & 132.64 & 0.39 & 5.26 \\ & SAFECount [38] + RPN & 34.98 & 107.46 & 0.38 & 5.22 & 33.89 & 139.92 & 0.39 & 5.34 \\ & Ours + RPN & 32.19 & 99.21 & 0.38 & 4.80 & 29.25 & 130.65 & 0.35 & 4.35 \\ \hline \multirow{2}{*}{Zero-Shot} & Patch-Sel (VAE) & 27.47 & 90.85 & 0.37 & 4.52 & 23.14 & 114.40 & 0.34 & 3.95 \\ & Patch-Sel (SD) & **26.30** & **88.80** & **0.34** & **4.27** & **21.53** & **113.28** & **0.31** & **3.61** \\ \hline \hline \end{tabular} \end{table} TABLE I: Quantitative comparisons on the FSC-147 dataset. “RPN” denotes using the top-3 RPN proposals with the highest objectness scores as exemplars. “Patch-Sel (VAE)” and “Patch-Sel (SD)” denote using our patch selection method with VAE-generated and SD-generated prototypes, respectively. Fig. 5: Qualitative results on the FSC-147 dataset. We show the counting exemplars and the corresponding density maps of ground truth boxes, randomly selected patches, and our selected patches respectively. Predicted counting results are shown at the top-right corner. Our method accurately identifies suitable patches for counting and the predicted density maps are close to the ground truth density maps. ### _Generalization to Exemplar-based Methods_ Our proposed method can be considered as a general patch selection method that is applicable to other visual counters to achieve zero-shot object counting. To verify that, we use our selected patches as the exemplars for four other exemplar-based methods: FamNet+ [5], BMNet [6], BMNet+ [6] and SAFECount [38]. Table III shows the results on the FSC-147 dataset. The baseline uses the top-3 RPN proposals with the highest objectness scores as exemplars for the pre-trained exemplar-based counter. Our patch selection method with the VAE-generated class prototype reduces the error rates by a large margin for all exemplar-based methods. For example, the MAE for BMNet+ [6] reduces from \(35.51\) to \(26.89\) on the validation set and from \(34.52\) to \(23.14\) on the test set. By using our patch selection method with the SD-generated class prototype, the error rates are further reduced for most cases, e.g., we observe for FamNet+ [5], there is an error reduction of \(12.4\%\)_w.r.t._ the validation MAE and \(18.0\%\)_w.r.t._ the test MAE. The consistent performance improvements validate that our patch selection method generalizes well to other exemplar-based counting methods.
\begin{table} \begin{tabular}{c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Prototype} & \multirow{2}{*}{Predictor} & \multicolumn{4}{c|}{Val Set} & \multicolumn{4}{c}{Test Set} \\ \cline{3-10} & & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline - & - & 32.19 & 99.21 & 0.38 & 4.80 & 29.25 & 130.65 & 0.35 & 4.35 \\ (VAE) & - & 29.85 & 97.63 & 0.42 & 5.20 & 24.38 & 115.04 & 0.36 & 4.18 \\ (SD) & - & 27.76 & 97.06 & 0.35 & 4.43 & 21.99 & 113.31 & 0.33 & 3.77 \\ \cline{2-10} - & ✓ & 27.56 & **88.37** & 0.41 & 4.54 & 22.79 & 127.09 & 0.36 & 3.98 \\ (VAE) & ✓ & 27.47 & 90.85 & 0.37 & 4.52 & 23.14 & 114.40 & 0.34 & 3.95 \\ (SD) & ✓ & **26.30** & 88.80 & **0.34** & **4.27** & **21.53** & **113.28** & **0.31** & **3.61** \\ \hline \hline \end{tabular} \end{table} TABLE II: Ablation study on each component’s contribution to the final results. We show the effectiveness of the two steps of our framework: selecting class-relevant patches via a generated class prototype and selecting optimal patches via an error predictor. Fig. 6: Qualitative comparison with top-3 exemplars from RPN. Our proposed method can select patches suitable for counting while RPN-selected patches contain non-relevant objects or multiple object instances. Fig. 7: Qualitative ablation analysis. All the \(10\) selected class-relevant patches exhibit some class-specific attributes. They are ranked by the predicted counting errors and the final selected patches with the smallest errors are framed in green. ### _Multi-class Object Counting_ Our method can count instances of a specific class given the class name, which is particularly useful when there are multiple classes in the same image. In this section, we show some visualization results in this multi-class scenario. As shown in Figure 8, our method selects patches according to the given class name and counts instances from that specific class in the input image. Correspondingly, the heatmap highlights the image regions that are most relevant to the specified class. Here the heatmaps are obtained by correlating the exemplar feature vector with the image feature map in a pre-trained ImageNet feature space. Note that we mask out the image region where the activation value in the heatmap is below a threshold when counting the objects of interest. We also show the patches selected using another exemplar-free counting method, RepRPN [8]. The class of RepRPN-selected patches cannot be explicitly specified. It simply selects patches from the class with the highest number of instances in the image according to the repetition score. ### _Qualitative Comparison between SD-generated Prototypes and VAE-generated Prototypes_ In this section, we provide a qualitative comparison between patches selected via SD-generated prototypes and VAE-generated prototypes. As shown in Figure 9, we present a few input images and the corresponding patches selected by VAE-generated prototypes and SD-generated prototypes. Although the patches selected by VAE-generated prototypes generally contain the objects of interest, they miss parts of the objects in some cases (e.g., the second patch of _grape_), or contain multiple object instances within one patch (e.g., the second patch of _strawberry_). In comparison, the patches selected by SD-generated prototypes are generally better exemplars for counting, i.e., one patch mostly contains a single complete object instance.
### _Analysis on SD-generated Prototypes_ **Qualitative Visualization.** To generate visual class prototypes, we first use a pre-trained Stable Diffusion model to generate a set of object patches for the class of interest. Then, given a query image, we select the generated patches that most resemble the target objects in the query image. We compute the average of the feature embeddings of the selected generated patches to construct the prototype. In Figure 10, we demonstrate this process for three different categories, i.e., grapes, eggs, and apples. For each category, we show how we select different patches to construct the prototypes for the given query image. As can be seen in the first column, the set of RPN proposals extracted from SD-generated images exhibits rich variations. Among these proposals, our method selects those that are most relevant to the testing images for constructing prototypes. For example, in the last row of Figure 10(c), only apples with mixed colors are selected to match the objects' colors in the testing image. Compared with VAE-generated prototypes, which remain the same for all query images, SD-generated prototypes can better handle the variations of query images and select more accurate counting exemplars. Fig. 8: Visualization results of our method on some multi-class examples. Our method selects patches according to the given class name and the corresponding heatmap highlights the relevant areas. \begin{table} \begin{tabular}{c|c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Baseline} & \multirow{2}{*}{Exemplars} & \multicolumn{4}{c|}{Val Set} & \multicolumn{4}{c}{Test Set} \\ & & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline \multirow{3}{*}{FamNet+} & RPN & 42.85 & 121.59 & 0.75 & 6.94 & 42.70 & 146.08 & 0.74 & 7.14 \\ & Patch-Sel (VAE) & 39.51 & 101.70 & 0.84 & 6.80 & 39.91 & 143.04 & 0.74 & 6.77 \\ & Patch-Sel (SD) & **34.60** & **96.76** & **0.63** & **5.71** & **32.71** & **139.93** & **0.55** & **5.47** \\ \hline \multirow{3}{*}{BMNet} & RPN & 37.26 & 108.54 & 0.42 & 5.43 & 37.22 & 143.13 & 0.41 & 5.31 \\ & Patch-Sel (VAE) & 27.71 & 93.98 & 0.36 & 4.55 & 24.45 & 130.42 & 0.33 & 4.09 \\ & Patch-Sel (SD) & **26.53** & **91.55** & **0.32** & **4.36** & **22.27** & **129.67** & **0.28** & **3.85** \\ \hline \multirow{3}{*}{BMNet+} & RPN & 35.51 & 106.07 & 0.41 & 5.28 & 34.52 & 132.64 & 0.39 & 5.26 \\ & Patch-Sel (VAE) & 26.89 & 92.37 & 0.35 & 4.56 & 23.14 & 114.40 & 0.34 & 3.95 \\ & Patch-Sel (SD) & **25.91** & **90.62** & **0.33** & **4.40** & **19.45** & **109.82** & **0.27** & **3.53** \\ \hline \multirow{3}{*}{SAFECount} & RPN & 34.98 & 107.46 & 0.38 & 5.22 & 33.89 & 139.92 & 0.39 & 5.34 \\ & Patch-Sel (VAE) & 28.34 & 94.22 & 0.41 & 4.65 & 23.60 & **110.95** & 0.40 & **4.26** \\ & Patch-Sel (SD) & **26.85** & **91.09** & **0.33** & **4.30** & **21.44** & 115.30 & **0.30** & 5.74 \\ \hline \hline \end{tabular} \end{table} TABLE III: Using our selected patches as exemplars for other exemplar-based class-agnostic counting methods (FamNet+, BMNet, BMNet+ and SAFECount) on the FSC-147 dataset. Our patch selection method generalizes well to other exemplar-based counting methods. **Number of Patches for Prototype Generation.** In our main experiments, we select the top-\(5\) SD-generated patches with the nearest mean distance to the query patches, and compute their average features to construct prototypes. In this section, we conduct an ablation study on how the number of patches selected for prototype generation affects the counting performance.
Specifically, we select top-\(5\), top-\(25\), top-\(50\) and all patches to construct class prototypes and use them to select exemplars. Results are summarized in Table IV. We observe that the performance drops on both the validation set and the test set as the number of selected patches increases. The counting errors are highest when using all generated patches to construct prototypes. In this case, the same class prototype is applied for all images of this class, which is not optimal for counting objects with large intra-class diversity. Our method, in comparison, selects the most similar patches based on the query image, which leads to more accurate prototypes.

\begin{table} \begin{tabular}{c|c c|c c} \hline \hline Prototype & \multicolumn{2}{c|}{Val Set} & \multicolumn{2}{c}{Test Set} \\ Patches & MAE & RMSE & MAE & RMSE \\ \hline top-\(5\) & **27.76** & **97.06** & **21.99** & **113.31** \\ top-\(25\) & 28.03 & 97.87 & 22.12 & 113.71 \\ top-\(50\) & 28.33 & 98.50 & 22.34 & 113.88 \\ all & 30.13 & 101.70 & 23.37 & 116.04 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Ablation study on the number of diffusion-generated patches for prototype generation.

Fig. 9: Visualization of the patches selected by VAE-generated prototypes and SD-generated prototypes. Patches selected by SD-generated prototypes are of higher quality.

## 6 Conclusion

We propose a new task, zero-shot object counting, to count instances of a specific class given only the class name, without access to any exemplars. To address this, we develop a two-step approach that accurately localizes the optimal patches to be used as counting exemplars. We leverage language-vision models to construct class prototypes via two approaches: a VAE-based approach and a diffusion-based approach. Through these two approaches, we present a comprehensive study of prototype construction at both the category and image levels. In the context of our specific task, the diffusion-based image prototype notably outperforms the category-level prototype constructed via VAE, thanks to its ability to customize prototypes to match object appearances in each image. More generally, our approach of employing language-vision models for zero-shot recognition is applicable to various tasks. In scenarios where large-scale generative models or their training data do not exist, such as medical imaging or remote sensing, the VAE-based approach can be a useful choice. We show that the prototypes can be used to select the patches containing objects of interest. Furthermore, we introduce an error prediction model to select those patches with the smallest predicted errors as the final exemplars for counting. Extensive results demonstrate the effectiveness of our method. We also conduct experiments to show that our selected patches can be incorporated into other exemplar-based counting methods to achieve exemplar-free counting.
2309.03290
Axionlike Dark Matter Model Involving Two-Phase Structure and Two-Particle Composites (Dimers)
Within the self-gravitating Bose-Einstein condensate (BEC) model of dark matter (DM), we argue that the axionlike self-interaction of ultralight bosons ensures the existence of both rarefied and dense phases in the DM halo core of (dwarf) galaxies. In fact, this stems from two independent solutions of the Gross-Pitaevskii equation corresponding to the same model parameters. For a small number of particles, this structure disappears along with the gravitational interaction, and the Gross-Pitaevskii equation reduces to the stationary sine-Gordon equation, the one-dimensional antikink solution of which mimics a single-phase DM radial distribution in the halo core. Quantum mechanically, this solution corresponds to a zero-energy bound state of two particles in a closed scattering channel formed by the domain-wall potential with finite asymptotics. To produce a two-particle composite with low positive energy and a finite lifetime, we appeal to the resonant transition of one asymptotically free particle of a pair from an open channel (with a model scattering potential) to the closed channel. Using the Feshbach resonance concept, the problem of two-channel quantum mechanics is solved in the presence of a small external influence which couples the two channels, and an analytical solution is obtained in the first approximation. Analyzing the dependence of scattering data on interaction parameters, we reveal a long-lived two-particle composite (dimer) possessing a lifetime of millions of years. This result is rather surprising and suggests important implications of dimers being involved in forming large DM structures. It is shown that the dimers' appearance is related to the regime of infinite scattering length due to resonance. The revealed dependence of the DM scattering length $a$ on the parameters of interactions can theoretically justify the variation of $a$ in DM-dominated galaxies.
A. M. Gavrilik, A. V. Nazarenko
2023-09-06T18:10:08Z
http://arxiv.org/abs/2309.03290v2
# Axionlike Dark Matter Model Involving Two-Phase Structure and Two-Particle Composites (Dimers)

###### Abstract

Within the self-gravitating Bose-Einstein condensate (BEC) model of dark matter (DM), we argue that the axionlike self-interaction of ultralight bosons ensures the existence of both rarefied and dense phases in the DM halo core of (dwarf) galaxies. In fact, this stems from two independent solutions of the Gross-Pitaevskii equation corresponding to the same model parameters. The existence of a two-phase structure also appeared in previously studied models with polynomial self-interactions, which actually involve truncated series expansions of the axionlike self-interaction. For a small number of particles, this structure disappears along with the gravitational interaction, and the Gross-Pitaevskii equation reduces to the stationary sine-Gordon equation, the one-dimensional antikink solution of which mimics a single-phase DM radial distribution in the halo core. Quantum mechanically, this solution corresponds to a zero-energy bound state of two particles in a closed scattering channel formed by the domain-wall potential with finite asymptotics. To produce a two-particle composite with low positive energy and a finite lifetime, we appeal to the resonant transition of one asymptotically free particle of a pair from an open channel (with a model scattering potential) to the closed channel. Using the Feshbach resonance concept, the problem of two-channel quantum mechanics is solved in the presence of a small external influence which couples the two channels, and an analytical solution is obtained in the first approximation. Analyzing the dependence of scattering data on interaction parameters, we reveal a long-lived two-particle composite (dimer) possessing a lifetime of millions of years. This result is rather surprising and suggests important implications of dimers being involved in forming large DM structures. It is shown that the dimers' appearance is related to the regime of infinite scattering length due to resonance. The revealed dependence of the DM scattering length \(a\) on the parameters of interactions can theoretically justify the variation of \(a\) in DM-dominated galaxies and its role for large DM structures.

dark matter, axions, BEC, Gross-Pitaevskii equation, sine-Gordon equation, antikink, two-channel quantum scattering, resonance scattering, Feshbach resonance, dimers (two-particle composites) pacs: 95.35.+d, 03.75.Hh, 03.65.Nk

## I Introduction

Axionlike bosons with low mass and periodic self-interaction belong to the most popular candidates for the role of dark matter (DM) particles [1]. Axions were hypothetically introduced by Peccei-Quinn (PQ) [2; 3] as pseudo-Goldstone bosons to resolve the strong charge-parity (CP) problem in quantum chromodynamics (QCD). Two non-thermal mechanisms for axion production in the early Universe were soon proposed, namely vacuum misalignment [4; 5; 6] and cosmic string decay [7]. In fact, the thermalization of axions, which were created by the vacuum realignment in the PQ scenarios with either broken or unbroken symmetry, seems irrelevant at the early stage because of their initial coherence. On the other hand, it was argued that the axion component, which appeared during the decay of topological defects, is thermalized due to self-interactions [1]. Besides, gravitational scattering can lead to the re-thermalization of the QCD axions in the era of radiation dominance, as suggested in [8; 9].
This indicates that a system of axions can have huge occupation numbers and be treated nonrelativistically. In this regard, the discussion in the literature [8; 10; 11; 12] on the existence of an axion Bose-Einstein condensate (BEC) led to identifying any condensate regime with a classical field, regardless of whether the axions are in the ground state or not. Thereby, it is recognized that the axions in an occupied state are coherent, so their distribution should be considered from the point of view of classical field theory or quantum mechanics (in the spirit of Gross and Pitaevskii) [13]. Moreover, the wave-based approach prevails over the corpuscular one as the particle mass decreases. It is argued that axions with masses predicted by QCD [14; 15] are not capable of forming giant BECs comparable in size to the DM halos [16; 17]. Indeed, the first attempt to describe galactic halos formed by bosons [18], either in their ground state (BEC) or in an appropriate isothermal state, resulted in an extremely small mass of the order of \(10^{-24}\) eV/\(c^{2}\). A similar mass estimate was also obtained in [19] when studying the rotation curves of a giant self-gravitating Bose liquid. This means that there may exist other types of axions with a very small mass, called ultralight axions [1]. We also call these (dark) particles _axionlike_. Note that the difference between the masses of QCD axions and ultralight axions can reach tens of orders of magnitude. In view of this significant mass spread, it is easy to miss the emergence of composites (complexes of axionlike particles) due to the specific nature of their interaction. To fill this gap, we study the problem of axion dimer formation in this paper. It is expected that interaction characteristics (e.g., the scattering length) differ greatly for individual axions and their composites (like dimers). Notably, a large number of theoretical works are devoted to the study of one-sort BEC DM models at vanishing absolute temperature (see, for instance, [11; 12; 13; 14; 15; 16; 17; 19; 20; 21; 22; 23; 24; 25; 26]). Aiming at the description of astrophysical phenomena, such models should borrow the chiral cosinelike self-interaction [2; 27] (or at least its truncated part), and also require engaging gravity (usually treated separately from the unification theory), the account of which breaks the inherent symmetry. Note also that there are works, including experimental ones (see [1; 28; 29]), that study the interaction of axions with other substances and their transformation (say, due to the Primakoff effect, see [30; 31]). The condensate properties of the gravitating axionlike DM, with both the leading pairwise contributions to the self-interaction [24; 25; 26] and the next three-particle corrections [32; 17; 33] taken into account, are promising for further exploration and application of axionlike particles. On the basis of the computed characteristics, dilute and dense phases, along with first-order phase transitions, are theoretically predicted in BEC [32; 17; 33]. Besides, analysis of the effects and of ways to better describe the observables suggests the existence of molecule-like composites [34] and the relevance of a deformation-based description [35; 36; 37]. Theoretically, these possibilities are considered very important, especially when dealing with the dark sector. There are also indications that quantum entanglement may be involved [38; 33].
Thus, there are arguments both for a first-order phase transition at zero temperature under a change of the interaction parameters and for its influence on the rotation curves of DM-dominated galaxies [33]. The distinct phases could also apparently affect the state of DM bosonic stars [17], the merging of which may produce gravitational waves [39; 40] as an alternative to black hole merging [41]. Physically, the phase transition in BEC DM is associated with quantum fluctuations in regions with a relatively high number density of ultralight particles, where the three-particle effects become significant. In any case, the choice of the self-interaction potential is decisive. Having gained an idea of the nontrivial phase structure of DM with two- and three-particle interactions and its manifestations in observables [32; 33], we want to show here that the model with the cosinelike interaction mentioned above should also lead to similar consequences. Obviously, the previously used polynomial self-interactions may now be treated as expansion terms of the total potential. It is important that the cosinelike generalization not only complicates the form of the interaction, but also reduces the number of independent parameters. We focus on revealing different phases of the DM with axionlike interaction in the spherically symmetric case, when the main function we find is the spatial distribution of particles in the BEC. In general, it is natural to assume that the DM also consists of particles of different sorts, including composites. At first glance, composites ("molecules") would be produced in a dense environment. However, as was shown earlier [34], high particle density leads to particle disintegration triggered by frequent collisions. Instead, the production of composites may be caused by a large scattering length of the particle interaction. The very possibility of changing the scattering length is able to shed new light on the properties and behavior of the initial (elementary) DM particles, the nature of which has not yet been identified. What is observed and described, including in the present study, is mainly the result of self-interaction (and of gravitation), that is, a steady state with a vanishingly small scattering length, confirmed by numerous models based on the Gross-Pitaevskii equation [17; 25; 33; 34]. Therefore, we need to explain these distinct particle states separated by an energy gap. Within a qualitative treatment, the formation of the simplest "molecules" of two and three particles is explored in Ref. [34]. Here, we turn to scattering processes, using a certain analogy with nuclear processes, and also with experimentally studied phenomena in atomic BECs in the laboratory [42]. Assuming this to be admissible in the DM as well, we appeal to the quantum mechanical formation of a dimer of two particles, borrowing the ideas of the Feshbach resonance and using two scattering channels [42; 43; 44; 45]. Although the different channels may be associated with configurations of internal degrees of freedom of DM particles (a detailed analysis of which is an independent task), we include auxiliary influences in our consideration to disclose a plausible mechanism for the formation of bound states during the evolution of the Universe. Thereby, we emphasize the fact that one good potential is not sufficient to form DM composites in space. The starting point for studying the dimer formation is the bound state of two particles held by the axionlike interaction.
Then, omitting the gravitational interaction between a few _ultralight_ particles, the Gross-Pitaevskii equation reduces, in fact, to the stationary sine-Gordon equation. In order to gain more analytical results, we may restrict ourselves to the one-dimensional case, which enables reproducing its (anti)kink solution at zero energy [46]. In the absence of gravity, the two-phase structure should disappear, leaving us with one (mixed) phase. Thanks to the analytical solution, the sine-Gordon potential rewritten as a function of space is, of course, a domain wall with finite asymptotics. This means that a pair of particles is in a closed channel with zero energy, and we are faced with the task of bringing an asymptotically free particle with low energy into this trap. As mentioned above, our solution is to admit the existence of an open channel in which an incident particle is scattered by another interaction (one of those in which DM particles may participate due to internal properties). If the particle in this channel has a small positive energy close to the energy of the bound state (i.e. zero), one can expect a resonant transition between the open and closed channels in the spatial region of the trap under a small external influence/force. This mechanism leads to the appearance of an isolated positive energy level of a two-particle (compound) system, which thereby acquires a finite lifetime [44]. We treat this state as a dimer, whose characteristics are important for understanding its role in the formation of the DM halo. According to the Feshbach resonance concept [43], the dimer emergence at resonance is accompanied by an infinite scattering length, which eventually depends on the interaction parameters. This fact may also affect the elucidation of other processes. Indeed, it was previously assumed that the so-called unitary regime (of infinite scattering length) could induce instability of the BEC DM halo, similar to what is observed in the laboratory BEC [47; 48]. But this phenomenon occurs at high density, which we exclude. The paper is organized as follows. In Sec. II we show the existence of two phases of the BEC DM halo core on the basis of a pair of distinct solutions of the stationary Gross-Pitaevskii equation with axionlike periodic and gravitational interactions at fixed values of the coupling constants and chemical potential. The reduction of the Gross-Pitaevskii equation to the stationary sine-Gordon equation is carried out in Sec. III for the case of a small number of (ultralight) particles, when the gravitational interaction is negligible. The exact one-dimensional solution is also discussed there, and its further use is outlined. The Feshbach resonance concept is applied to describe the axion dimer formation in Sec. IV. Therein, for convenience of reading, it is divided into five subsections, the problem is solved, and its various aspects are highlighted. The necessary preliminaries and the model formulation are given in Sec. IV.1, which introduces the basic concepts and tools. The most significant analytical part of the study is presented in Sec. IV.2, where the quantum mechanics equations of the two-channel problem are solved. Therein, an isolated energy level of a compound system of two particles is uncovered and discussed. Also, analytical expressions for the wave functions are given in the first approximation. The dependence of the scattering length on the interaction parameters is studied in Sec. IV.3.
The free parameters of the model are fixed there, and the Feshbach resonance at zero energy is analyzed. In Sec. IV.4, when considering resonant scattering, the information about the resonance (and dimer) involving an incident particle with nonzero energy is numerically extracted. The physical characteristics of the dimer in the context of the DM halo are given and discussed in Sec. IV.5. It is revealed and emphasized that _the lifetime of the dimer is of the order of millions of years_. The final Sec. V is devoted to a discussion of implications as well as concluding remarks.

## II Gross-Pitaevskii equation for axionlike DM and its solution

To start with, we formulate a stationary macroscopic model of a gravitating Bose-Einstein condensate (BEC) of ultralight bosons with axionlike interaction \(V_{\rm SI}\), restricting ourselves to spherical symmetry and the absence of hydrodynamic flows. Let the BEC with a constant chemical potential \(\tilde{\mu}\) be described by the real function \(\psi(r)\) of the radial variable \(r=|{\bf r}|\), with \(\psi^{2}(r)\) defining a local particle density. The behavior of \(\psi(r)\) in the ball \(B=\{{\bf r}\in\mathbb{R}^{3}|\,|{\bf r}|\leq R\}\) is determined by the vanishing variation of the energy functional \(\Gamma\) with respect to the variation of \(\psi(r)\), along with the Poisson equation for the gravitational potential \(V_{\rm gr}(r)\): \[\frac{\Gamma}{4\pi}=\int_{0}^{R}\left[\frac{\hbar^{2}}{2m}(\partial_{r}\psi)^{2}-\tilde{\mu}\psi^{2}+mV_{\rm gr}\psi^{2}+V_{\rm SI}\right]r^{2}\,{\rm d}r, \tag{1}\] \[\Delta_{r}V_{\rm gr}=4\pi Gm\psi^{2}. \tag{2}\] The radial part of the Laplace operator \(\Delta_{r}\) and its inverse \(\Delta_{r}^{-1}\) of variable \(r\) are \[\Delta_{r}f(r)=\partial_{r}^{2}f(r)+\frac{2}{r}\,\partial_{r}f(r), \tag{3}\] \[\Delta_{r}^{-1}f(r)=-\frac{1}{r}\int_{0}^{r}f(s)\,s^{2}\,{\rm d}s-\int_{r}^{R}f(s)\,s\,{\rm d}s; \tag{4}\] \(R\) is the radius of the ball where matter is concentrated. The instantonic axionlike self-interaction is chosen here to be [2; 27] \[V_{\rm SI} = \frac{U}{v}\left[1-\cos\left(\sqrt{v}\psi\right)\right]-\frac{U}{2}\psi^{2} \tag{5}\] \[= U\left[-\frac{v}{4!}\psi^{4}+\frac{v^{2}}{6!}\psi^{6}-\ldots\right], \tag{6}\] where the axion field \(\varphi\) is related to the wave function by \(|\varphi|^{2}=(\hbar^{2}/m)|\psi|^{2}\) in the nonrelativistic limit [17]. Note also that there exists the effective axionic potential [49; 50]. Thus, from the series expansion of \(\cos\left(\sqrt{v}\psi\right)\), we see that the first term in (6) corresponds to a two-particle self-interaction, and the next term \(\psi^{6}\) implies a three-particle self-interaction. Note that the works [24; 25; 37; 51; 52; 53] (see also references therein) intensively studied the BEC DM models with only pair interaction for \(v=-|v|\), in the Thomas-Fermi approximation, when \(\hbar\to 0\) in Eq. (1). Switching on the three-particle interaction enables one to observe the two-phase structure of BEC DM that is studied in [32; 33; 17]. For this reason, we expect a similar effect to occur here. The axionlike cosine-interaction (5) is characterized by two constants \(U\) and \(v\), which have the dimensions of energy and volume, respectively. In particle physics [49; 50], they are related to the axion mass and decay constant \(f_{\rm a}\) as \(U=mc^{2}\), \(v=\hbar^{3}c/(mf_{\rm a}^{2})\). Their relativistic nature is noted in [17] in the context of a nonrelativistic model of axion stars.
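As a quick consistency check of the expansion (6), one can verify it symbolically; the following snippet (Python/SymPy) is purely illustrative and not part of the original derivation.

```python
import sympy as sp

psi, v, U = sp.symbols('psi v U', positive=True)

# Axionlike self-interaction, Eq. (5)
V_SI = (U / v) * (1 - sp.cos(sp.sqrt(v) * psi)) - (U / 2) * psi**2

# Series expansion in psi reproduces Eq. (6):
# U * (-v*psi**4/4! + v**2*psi**6/6! - ...)
print(sp.expand(sp.series(V_SI, psi, 0, 8).removeO()))
# -> -U*v*psi**4/24 + U*v**2*psi**6/720
```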
However, in astrophysical applications, these constants may have other meanings and values, which will be discussed below. To analyze the general properties of the model, we reformulate it in dimensionless variables: \[\xi=\frac{\sqrt{mU}}{\hbar}\,r,\qquad\chi(\xi)=\sqrt{v}\,\psi(r), \tag{7}\] \[\xi_{B}=\frac{\sqrt{mU}}{\hbar}\,R,\qquad A=4\pi\frac{G\hbar^{2}m}{U^{2}v}, \tag{8}\] \[u=\frac{\tilde{\mu}}{U},\qquad\nu=1+2u+2A\Phi_{0}, \tag{9}\] where \(\nu\) plays the role of an effective chemical potential, which absorbs the constant term of the axion interaction and the gravitational potential at the origin \(\xi=0\), namely \[\Phi_{0}=-\int_{0}^{\xi_{B}}\chi^{2}(\xi)\,\xi\,\mathrm{d}\xi. \tag{10}\] In our study, \(\nu\) is regarded as a free variable parameter, due to the arbitrariness of \(u\). The model equations in terms of the wave-function \(\chi(\xi)\) and the auxiliary gravitational potential \(\Phi(\xi)\) read \[(\Delta_{\xi}+\nu)\,\chi-2A\Phi\chi-\sin\chi=0, \tag{11}\] \[\Phi(\xi)=-\frac{1}{\xi}\int_{0}^{\xi}\chi^{2}(s)\,s^{2}\,\mathrm{d}s+\int_{0}^{\xi}\chi^{2}(s)\,s\,\mathrm{d}s, \tag{12}\] where \(\Delta_{\xi}\Phi(\xi)=\chi^{2}(\xi)\) is satisfied, and \(\Phi(0)=0\). To obtain a finite and stable solution \(\chi(\xi)\), the nonlinear Eqs. (11)-(12) should be (numerically) integrated under the following conditions: \(\chi(0)<\infty\), \(\chi^{\prime}(0)=0\), \(\chi^{\prime\prime}(0)<0\). For given \(A\) and \(\nu\), the finite initial value \(\chi(0)=z\) should be a positive solution of the transcendental equation \[2Az^{2}+\left(\nu-\frac{\sin z}{z}\right)(\nu-\cos z)=0 \tag{13}\] which is derived by substituting \(\chi(\xi)=\chi(0)+\chi^{\prime\prime}(0)\xi^{2}/2\) into (11)-(12) for \(\xi\to 0\) and by finding \(\chi^{\prime\prime}(0)\). Note that the requirement \(\chi^{\prime\prime}(0)<0\) is equivalent to imposing \(\nu>\sin z/z\) for positive \(\nu\). The absence of a solution \(z\) for a given pair \((A,\nu)\) means that \(\chi(\xi)=0\) everywhere. It happens for \(\nu>\nu_{\mathrm{max}}\), where \(\nu_{\mathrm{max}}(A)\) is also found numerically from (13). For \(\nu<\nu_{\mathrm{max}}\), two branches of \(\chi_{0}(\nu)\) can occur, which indicate the existence of two regimes and a first-order phase transition in the model. Let us emphasize that the magnitude of the parameter \(A\) plays a crucial role for subsequent implications. Assuming that gravity is weaker than the axion self-interaction, we take the parameter \(A\gtrsim 10^{-3}\). Then, Eq. (13) leads to two independent solutions for \(z>\pi\). Indeed, choosing \(A\) as in Fig. 1, the upper branch of \(\chi_{0}(\nu)\) corresponds to \(\chi_{0}\in[\chi_{s};5\pi/2]\), while the lower branch of \(\chi_{0}(\nu)\) gives us values of \(\chi_{0}\) within the interval \([3\pi/2;\chi_{s}]\), where the separating value \(\chi_{s}=\chi_{0}(\nu_{\mathrm{max}})\lesssim 2\pi\). Therefore, there exist two independent solutions \(\chi^{(\alpha)}(\xi)\), \(\alpha=1,2\), of the set of Eqs. (11)-(13) for the same parameters \(A\) and \(\nu\). They are characterized by \(\chi_{0}^{(\alpha)}\) and \(\xi_{B}^{(\alpha)}\), which belong to different branches (\(\alpha=1,2\)) as in Fig. 1. This means that any space average \(F(\nu)\) in the statistical description, for instance, the _mean particle density_ \[\sigma(\nu)=\frac{3}{\xi_{B}^{3}}\int_{0}^{\xi_{B}}\chi^{2}(\xi)\xi^{2}\,\mathrm{d}\xi, \tag{14}\] for fixed \(A\) and variable \(\nu\leq\nu_{\mathrm{max}}\), also consists of two branches \(F^{(1)}(\nu)\) and \(F^{(2)}(\nu)\), e.g.
\[\sigma^{(\alpha)}(\nu)=3\left(\xi_{B}^{(\alpha)}\right)^{-3}\int_{0}^{\xi_{B}^{(\alpha)}}\left[\chi^{(\alpha)}(\xi)\right]^{2}\,\xi^{2}\,\mathrm{d}\xi. \tag{15}\] Then the graph of the function \(F(\nu)\) is the union of the graphs for \(F^{(1)}(\nu)\) and \(F^{(2)}(\nu)\) so that \(F^{(1)}(\nu_{\mathrm{max}})=F^{(2)}(\nu_{\mathrm{max}})\) by construction. But it is convenient for us to continue the use of the definition (14), keeping in mind the need to take into account different branches. Contrary to the expectation of a weak axion field (\(\chi_{0}<\pi\)) near the true vacuum [27], the situation looks different in the nonrelativistic model of DM with Newtonian interaction. Also, there is no invariance there under the global transformation \(\chi\to\chi+2\pi\) (it is violated by gravitation). Besides, such a discrepancy is related to the consideration of the condensate in a finite volume (of the galactic DM halo). We might expect some distinctions when describing gravity as a space-time geometry. Indeed, Fig. 1(b) demonstrates the value of the (first) zero \(\xi_{B}\) of the oscillating function \(\chi(\xi)\) (that is, \(\chi(\xi_{B})=0\)), which limits the system size in our model and is found by integrating Eqs. (11)-(12) for given \(A\) and \(\nu\). The typical solutions for the wave function \(\chi(\xi)\) are presented in Fig. 2(a), where \(\xi\in[0;\xi_{B}]\). They describe a core with a finite magnitude of the particle (and mass) density at the center \(\xi=0\). Besides, additional information may be extracted through introducing the effective potential \(W\) [see Fig. 2(b)] as a function of the radial variable \(\xi\), namely \[W(\xi)=2A\Phi_{0}+2A\Phi(\xi)+\frac{\sin\chi(\xi)}{\chi(\xi)}, \tag{16}\] \[\left[-\Delta_{\xi}+W(\xi)\right]\chi(\xi)=\varepsilon\chi(\xi), \tag{17}\] when Eq. (11) is re-written in the form of a Schrodinger equation with the "energy" \(\varepsilon=1+2u\), see Eq. (9). Fig. 2(b) shows the forms of the potential \(W(\xi)\). The black curve describes the effective potential of the mixed (single-phase) state at \(\nu=\nu_{\rm max}\). The blue and red curves correspond to the effective potentials of two phases for the same parameters \(A\) and \(\nu<\nu_{\rm max}\), but obtained from the solutions \(\chi(\xi)\) with different \(\chi_{0}(\nu)\) and \(\xi_{B}(\nu)\) belonging to different branches in Fig. 1. The different positions and depths of the minima of these potentials confirm the existence of two macroscopic states in the axion system. Whether the particles occupy one of the two phases is conditioned by applied factors, e.g. pressure [32; 33]. Outside the system at \(\xi>\xi_{B}\), where matter is absent, \(W(\xi)\) is continuously extended by the gravitational potential of the form \(1-2A{\cal N}/\xi\), where \({\cal N}\) is the total number of particles in the ball \(\xi\leq\xi_{B}\). Let us estimate the characteristics of our model, which are related to the dimensionless quantities (7)-(9). For this aim, we refer to the model from [32], which dictates separating the approaches to describing the core and the tail of the DM halo due to the different role of self-interaction in relatively dense and rarefied regions. It is worth noting that recent astronomical measurements indicate spatial fluctuations in the density of cold DM (around the quasar) on the scale of 10 kpc [54], which may be associated with an oscillating tail of the condensate wave function, rather than the mentioned smooth tail. Focusing on the phenomena in the core, we need to reproduce the size scale \(r_{0}\) and the central mass density \(\rho_{0}\), which define \(r=r_{0}\xi\) and \(\rho(\xi)=\rho_{0}\chi^{2}(\xi)/\chi_{0}^{2}\). At the same time, this is needed to control the parameters \(U\) and \(v\) in (5). As stated in Ref. [17], models for describing compact objects (such as axion stars) and cosmological models lead to different parametrizations of axions. First of all, this concerns the different mass ranges. While cosmological models constrain the axion mass as \(10^{-7}\,{\rm eV}<mc^{2}<10^{-2}\,{\rm eV}\), another kind of model suggests that \(mc^{2}\sim 10^{-22}\) eV, commonly attributed to fuzzy DM [55; 21]. Physically, the choice of a smaller particle mass ensures the formation of certain structures in the Universe [16], some of which we are trying to describe. It motivates us to fix \(m\sim 10^{-22}\) eV \(c^{-2}\). Although the parameter \(U\) in the chiral models is set proportional to \(mc^{2}\)[50], we do not declare such an identity in the nonrelativistic case, and admit only that the value of \(U\) provides the predominance of axion repulsion over gravitation at a relatively large magnitude of the axion field. Besides, we have to determine \(f_{\rm a}\) in order to specify \(v=\hbar^{3}c/(mf_{\rm a}^{2})\). Although \(10^{18}\,{\rm eV}<f_{\rm a}<10^{21}\,{\rm eV}\) in cosmology [4], the decay constant \(f_{\rm a}\) may appear larger in models with ultralight particles [17]. Thus, combining the definition \(v=\hbar^{3}c/(mf_{\rm a}^{2})\) and the relation \(m\chi_{0}^{2}=v\rho_{0}\) resulting from (7), we obtain that \[f_{\rm a} \simeq 3.304\times 10^{19}\,{\rm eV}\,\left[\frac{\rho_{0}}{10^{-19}\,{\rm kg}\,{\rm m}^{-3}}\right]^{1/2} \tag{18}\] \[\times\left[\frac{mc^{2}}{10^{-22}\,{\rm eV}}\right]^{-1}\left[\frac{\chi_{0}}{2\pi}\right]^{-1},\] where the central mass density \(\rho_{0}\) is taken to be of the order of \(10^{-19}\,{\rm kg}\,{\rm m}^{-3}\), while a mean mass density is assumed to be of the order of \(10^{-20}\,{\rm kg}\,{\rm m}^{-3}\) as usual [17; 25].

Figure 2: A particular form of the wave function \(\chi(\xi)\) (a) and the effective potential \(W(\xi)\) (b) for the fixed parameters \(A\) and \(\nu\). Blue and red lines are obtained with the same \(A\) and \(\nu\), but differ in the initial values \(\chi_{0}\), see Fig. 1(a). The value of \(\chi_{0}\) for blue curves belongs to the upper branch of \(\chi_{0}(\nu)\) in Fig. 1(a), while the red lines are plotted using \(\chi_{0}\) of the lower branch of \(\chi_{0}(\nu)\). Black curves correspond to a single state at \(\nu=\nu_{\rm max}\).

Using the second relation of (8) and \(m\chi_{0}^{2}=v\rho_{0}\), we find:
Focusing on the phenomena in the core, we need to reproduce the size scale \(r_{0}\) and the central mass density \(\rho_{0}\), which define \(r=r_{0}\xi\) and \(\rho(\xi)=\rho_{0}\chi^{2}(\xi)/\chi_{0}^{2}\). At the same time, this is needed to control the parameters \(U\) and \(v\) in (5). As it was stated in Ref. [17], models for describing compact objects (such as axion stars) and cosmological models lead to different parametrizations of axions. First of all, this concerns the different mass ranges. While cosmological models constrain the axion mass as \(10^{-7}\,{\rm eV}<mc^{2}<10^{-2}\,{\rm eV}\), another kind of models suggests that \(mc^{2}\sim 10^{-22}\) eV, commonly attributed to fuzzy DM [55; 21]. Physically, the choice of a smaller particle mass ensures the formation of certain structures in the Universe [16], some of which we are trying to describe. It motivates us to fix \(m\sim 10^{-22}\) eV \(c^{-2}\). Although the parameter \(U\) in the chiral models is set proportional to \(mc^{2}\)[50], we do not declare such an identity in the nonrelativistic case, and admit only that the value of \(U\) provides the predominance of axion repulsion over gravitation at relatively large magnitude of axion field. Besides, we have to determine \(f_{\rm a}\) in order to specify \(v=\hbar^{3}c/(mf_{\rm a}^{2})\). Although \(10^{18}\,{\rm eV}<f_{\rm a}<10^{21}\,{\rm eV}\) in cosmology [4], the decay constant \(f_{\rm a}\) may be appearing larger in the models with ultralight particles [17]. Thus, combining the definition \(v=\hbar^{3}c/(mf_{\rm a}^{2})\) and the relation \(m\chi_{0}^{2}=v\rho_{0}\) resulting from (7), we obtain that \[f_{\rm a} \simeq 3.304\times 10^{19}\,{\rm eV}\,\left[\frac{\rho_{0}}{10^{-19}\,{ \rm kg}\,{\rm m}^{-3}}\right]^{1/2} \tag{18}\] \[\times\left[\frac{mc^{2}}{10^{-22}\,{\rm eV}}\right]^{-1}\left[ \frac{\chi_{0}}{2\pi}\right]^{-1},\] where the central mass density \(\rho_{0}\) is taken to be of the order of \(10^{-19}\,{\rm kg}\,{\rm m}^{-3}\), while a mean mass density is assumed to be of the order of \(10^{-20}\,{\rm kg}\,{\rm m}^{-3}\) as usual [17; 25]. Using the second relation of (8) and \(m\chi_{0}^{2}=v\rho_{0}\), we Figure 2: A particular form of the wave function \(\chi(\xi)\) (a) and the effective potential \(W(\xi)\) (b) for the fixed parameters \(A\) and \(\nu\). Blue and red lines are obtained with the same \(A\) and \(\nu\), but differ in the initial values \(\chi_{0}\), see Fig. 1(a). The value of \(\chi_{0}\) for blue curves belongs to the upper branch of \(\chi_{0}(\nu)\) in Fig. 1(a), while the red lines are plotted using \(\chi_{0}\) of the lower branch of \(\chi_{0}(\nu)\). Black curves correspond to a single state at \(\nu=\nu_{\rm max}\). find: \[U \simeq 2.145\times 10^{-29}\,\mathrm{eV}\,\left[\frac{\rho_{0}}{10^{-19}\, \mathrm{kg}\,\mathrm{m}^{-3}}\right]^{1/2} \tag{19}\] \[\times\left[\frac{A}{2\times 10^{-3}}\right]^{-1/2}\left[\frac{ \chi_{0}}{2\pi}\right]^{-1}.\] The characteristic scale is defined here as \(r_{0}=\hbar/\sqrt{mU}\) and equals to \[r_{0} \simeq 0.138\,\mathrm{kpc}\,\left[\frac{\rho_{0}}{10^{-19}\,\mathrm{ kg}\,\mathrm{m}^{-3}}\right]^{-1/4}\left[\frac{mc^{2}}{10^{-22}\,\mathrm{eV}} \right]^{-1/2} \tag{20}\] \[\times\left[\frac{A}{2\times 10^{-3}}\right]^{1/4}\left[\frac{ \chi_{0}}{2\pi}\right]^{1/2}.\] Such \(r_{0}\) is appropriate for estimating the size of the central part of the DM halo as \(R=r_{0}\xi_{B}\), but should be fitted together with the total mass \(M\). 
To justify the first-order phase transition in the model, one has to develop a statistical approach, which is omitted here. Nevertheless, the discontinuous change in the particle density (14) at zero temperature is expected to be caused by a change in the long-wave part of the pressure [32] \[\Pi=-\frac{3}{\xi_{B}^{3}}\int_{0}^{\xi_{B}}\left[(\partial_{\xi}\chi)^{2}-\nu\chi^{2}\right]\xi^{2}\,\mathrm{d}\xi, \tag{21}\] which also consists of two branches, as stated above. Clearly, the effect of different DM phases on the observables, as well as on the rotation curves, also deserves a separate study.

## III Sine-Gordon equation and a bound state

Let us analyze the ground state of the DM halo core by turning to a one-dimensional model with the coordinate \(\xi\in[0;+\infty)\), when gravity is absent and only the axion self-interaction plays a key role. This means that Eq. (17) under the simplifications \[A=0,\hskip 28.452756pt\varepsilon=0,\hskip 28.452756pt\Delta_{\xi}\to\frac{\mathrm{d}^{2}}{\mathrm{d}\xi^{2}} \tag{22}\] reduces to the stationary sine-Gordon equation: \[\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}\xi^{2}}+W\right)\chi=0,\hskip 28.452756ptW=\frac{\sin\chi}{\chi}, \tag{23}\] which is in the form of a Schrodinger equation with the axionlike potential \(W\). Eq. (23) is invariant under the global transformation \(\chi(\xi)\mapsto\chi(\xi)+2\pi n\), \(n\in\mathbb{Z}\), while Eq. (11) is not. Its general solution is easily derived by integration, which is carried out in numerous works (see, for instance, Sec. 5.3 in [46]). For physical reasons, we write down and exploit the stationary anti-kink solution \[\chi_{\mathrm{ak}}(\xi)=4\arctan\mathrm{e}^{-\xi},\quad\xi\geq 0. \tag{24}\] We also consider the solution of the form \(\chi_{\mathrm{ak}}(\xi-L)\) with an arbitrary constant \(L\). Altogether these solutions at dimensionless energy \(\varepsilon=0\) describe the ground state and are, moreover, devoid of any nodes, in contrast to the oscillating solutions in Sec. II. According to (23), they determine the potential \(W\) in terms of the coordinate \(\xi\). It is clear that the solution (24) can also be obtained from Eq. (23) with the potential \(W\) depending directly on \(\xi\). Using the auxiliary formula \[\sin 4z=4\,\frac{1-\tan^{2}z}{\left(1+\tan^{2}z\right)^{2}}\,\tan z, \tag{25}\] one arrives at \(W\) of the form \[W_{\mathrm{ak}}(\xi)=\frac{\tanh\xi}{2\cosh\xi\arctan\mathrm{e}^{-\xi}}. \tag{26}\] Note that, replacing \(\chi\) with \(4\arctan\varphi\), the sine-Gordon potential reduces to the form \(\sin\chi=4\varphi\cos_{\mu=1}\varphi\) according to (25), where \(\cos_{\mu}z\) is the \(\mu\)-deformed cosine-function [56; 57] taken at \(\mu=1\). Moreover, \(\cos_{\mu=1}\xi\) is used in [56] to simulate the potential of two coupled axions at the quantum mechanical level. To reproduce (24) by solving Eq. (23), we have to take \(\chi_{\mathrm{ak}}(0)=\pi\) and \(\chi^{\prime}_{\mathrm{ak}}(0)=-2\). The same approach relates the potential \(W_{\mathrm{ak}}(\xi-L)\) with the solution \(\chi_{\mathrm{ak}}(\xi-L)\). At \(\xi=0\), we set \(\chi_{\mathrm{ak}}=4\arctan\mathrm{e}^{L}\) and \(\chi^{\prime}_{\mathrm{ak}}=-2/\cosh L\). The behavior of \(\chi_{\mathrm{ak}}(\xi-L)\) and \(W_{\mathrm{ak}}(\xi-L)\) is shown in Fig. 3.
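As a quick numerical consistency check of Eqs. (24) and (26), one may verify that the antikink satisfies \(\chi^{\prime\prime}=\sin\chi\) and that the closed form (26) coincides with \(\sin\chi/\chi\); the short script below (Python/NumPy) is for illustration only.

```python
import numpy as np

xi = np.linspace(0.05, 10.0, 2000)
chi = 4.0 * np.arctan(np.exp(-xi))          # antikink, Eq. (24)

# Second derivative by central differences; Eq. (23) gives chi'' = sin(chi).
d2chi = np.gradient(np.gradient(chi, xi), xi)
residual = np.max(np.abs(d2chi - np.sin(chi))[5:-5])  # drop boundary points

# Eq. (26) versus W = sin(chi)/chi
W_closed   = np.tanh(xi) / (2.0 * np.cosh(xi) * np.arctan(np.exp(-xi)))
W_from_chi = np.sin(chi) / chi

print(f"max |chi'' - sin(chi)|       = {residual:.2e}")  # ~ discretization error
print(f"max |W_ak - sin(chi)/chi|    = {np.max(np.abs(W_closed - W_from_chi)):.2e}")
```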
For large \(L\), the value of \(\chi_{\mathrm{ak}}(\xi-L)\) at \(\xi=0\) tends to \(2\pi\), which is similar to the single-phase solution in Fig. 2(a), colored in black. We can deduce that the profile of \(\chi_{\mathrm{ak}}\) qualitatively depicts the DM halo core due to the potential \(W_{\mathrm{ak}}\), which results in an infinite scattering length \(a\) in the Born approximation and forms a closed scattering channel for particles with zero total energy \(\varepsilon\) (or \(\nu\)), which are unable to overcome the potential barrier/domain wall.

Figure 3: Antikink solution \(\chi(\xi-L)\) (solid lines) and corresponding potential \(W(\xi-L)\) (dashed lines) for different \(L\).

Although the gravitational interaction of a large number of relatively fast particles modifies this barrier [cf. Fig. 2(b)], the mechanism of injection of a slow particle into a closed channel is of special interest. One possibility which we further explore is the transfer of a particle between different scattering channels using the Feshbach resonance stimulated by an additional impact. Let us calculate the integral over the entire (one-dimensional) space: \[N(L) \equiv \int_{0}^{\infty}\chi_{\rm ak}^{2}(\xi-L)\,{\rm d}\xi \tag{27}\] \[= 4\int_{0}^{\alpha}\,\frac{z^{2}}{\sin z}\,{\rm d}z;\qquad\alpha=2\arctan{\rm e}^{L}. \tag{28}\] Note that \(2\alpha=\chi_{\rm ak}(\xi-L)\) at \(\xi=0\), that is, the (maximal) value of the axion field at the origin. On integrating, the result is presented in differing forms: \[N = 2\alpha^{2}+4\sum_{n=1}^{\infty}(-1)^{n+1}\frac{2^{2n-1}-1}{(n+1)(2n)!}\,B_{2n}\alpha^{2n+2} \tag{29}\] \[= 2{\rm e}^{{\rm i}\alpha}\,\left[\Phi\left({\rm e}^{2{\rm i}\alpha},3,\frac{1}{2}\right)-2{\rm i}\alpha\,\Phi\left({\rm e}^{2{\rm i}\alpha},2,\frac{1}{2}\right)\right]\] \[+4\alpha^{2}\left(\ln\tan\frac{\alpha}{2}-{\rm i}\frac{\pi}{2}\right)-14\,\zeta(3),\] where \(B_{2n}\) is the Bernoulli number; \(\Phi(z,s,a)\) is the Lerch transcendent; \(\zeta(3)=1.20205...\) is a particular value of the Riemann zeta-function; finally, \({\rm i}=\sqrt{-1}\). The first expression is valid for \(\alpha<\pi\) and given by Eq. (1.5.44.1) in [58]. The behavior of \(N(L)\) is also shown in Fig. 4. By construction, \(N(L)\) is related to the number of particles in the nonlinear problem (23). This can be useful, for instance, to control the effect of gravitation, as mentioned above. It is obvious that \(N(L)\) is distinct from the (anti)kink topological charge [46]. On the other hand, when formulating the linear Schrodinger equation with the potential \(W_{\rm ak}(\xi-L)\), we can use \(N(L)\) to normalize \(\chi_{\rm ak}(\xi-L)\), a fact that will be applied below in the quantum mechanical setting. Thus, we summarize that, neglecting gravity when considering a small number of axions, their (ground) bound state is still revealed even in the one-dimensional case, which nevertheless retains the main physical properties of the model we need and only simplifies the mathematical description. Besides, the model (field) equation is reformulated in an equivalent quantum-mechanical form through introducing an effective potential depending on space.

## IV Feshbach resonance

We would like to consider in more detail the mechanism of the transition of a DM particle into the bound state described above, noting that the presence of one good potential is apparently not enough for this. We appeal to resonance scattering, which implies the existence of an isolated (quasi-discrete) energy level.
Our approach inherits the ideas of Feshbach resonance [43], which uses at least two channels of scattering, one open and one closed channel with distinct Hamiltonians. Coupling these channels enables one to create an isolated level and bind the scattered particle. This approach seems to be appropriate, since it is difficult to directly transfer a zero-energy particle through the domain wall into the trap of the sine-Gordon potential. Let us assume that a spinless particle is able to jump between open and closed channels. When the total energy exceeds the open channel threshold (\(E=0\)), the open channel becomes both an incoming and an outgoing channel. The Feshbach resonance occurs when the energy of the bound state in the closed channel is close to the threshold of the open channel. Due to the coupling of the channels, the unperturbed bound state of the closed channel becomes dressed. This dressed state is treated as a quasi-bound state of the scattered particle. The scattered particle temporarily passes into the quasi-bound state with positive energy and returns to the open channel after a typical time delay \(\tau=2/\Gamma\), determined by the decay width \(\Gamma\) of the quasi-bound state. We associate this state of two interacting particles with a composite (dimer).

### Preliminaries and Model Formulation

Formulating our model on the basis of the stationary Schrodinger equation, we consider in fact a slowly moving particle which is assumed to be in one of the two channels. One channel is _closed_ and is described in the absence of external interaction by the sine-Gordon equation in the ground state with energy \(E^{(1)}=0\), as mentioned earlier in Sec. III. An _open_ (entrance) channel, the existence of which we assume, corresponds to elastic scattering due to another interaction, where a wave/particle initially has low energy \(E^{(2)}=k^{2}>0\) determined by momentum \(k\), which we also treat as the relative momentum of the pair of interacting particles. Then, there is an energy gap between these channels \(Q=E^{(2)}-E^{(1)}>0\), which is further affected by external impact. Thus, such a model involves three different interactions, and \(E^{(1)}\) and \(E^{(2)}\) are not energy levels of the same Hamiltonian. For this reason, the interaction parameters must be tuned to obtain the desired effect of resonant transition.

Figure 4: Normalization (27) associated with the number of particles as a function of length parameter \(L\).

We describe the motion of a particle in two channels, adopting the matrix representation [45]: \[\mathbb{H}X=EX,\hskip 28.452756pt\mathbb{H}=\left(\begin{array}{cc}H_{\rm bs}&\Omega\\ \Omega^{\dagger}&H_{\rm wv}\end{array}\right), \tag{30}\] where \(H_{\rm bs}\) and \(H_{\rm wv}\) are the Hamiltonians of the bound state (in the closed channel) and the scattered wave (in the open channel), respectively. The coupling between channels is represented by \(\Omega\) and is associated with an extra force, which is turned on and starts to act _after_ fixing the gap \(Q\). In other words, \(E^{(1)}\) and \(E^{(2)}\) are given at \(\Omega=0\), while the energy \(E\) is determined by switching on \(\Omega\). We account for the energy gap \(Q\) in Eq. (30) by defining \(H_{\rm bs}=H_{\rm ak}+Q\), where \[H_{\rm ak}=-\frac{\mathrm{d}^{2}}{\mathrm{d}\xi^{2}}+W_{\rm ak}(\xi-L) \tag{31}\] with the potential \(W_{\rm ak}(\xi)\) from Eq. (26), and \(W_{\rm ak}(0)=0\). Hence, we use the dependence on the parameter \(L>0\), which plays an important role in further constructions.
In a sense, the system is doubly degenerate at \(\Omega=0\) due to the existence of two independent wave functions for the same eigenvalue \(E\): \[X^{(1)}=\left(\begin{array}{c}\chi^{(1)}\\ 0\end{array}\right),\hskip 28.452756ptX^{(2)}=\left(\begin{array}{c}0\\ \chi^{(2)}\end{array}\right), \tag{32}\] which are evidently orthogonal in this representation. We identify \(\chi^{(1)}\) with \(\chi_{\rm ak}(\xi-L)\) from Eq. (24). In principle, we need to write \(X=X^{(1)}\cos\alpha+X^{(2)}\sin\alpha\) with some \(\alpha\) in order to normalize the total wave function \(X\) with respect to the matrix representation. As shown in Fig. 3, the spatial interval \(\xi\in[0;L]\) is most significant for the manifestation of a bound state. Therefore, essential processes should be related to this region, which defines the _resonance zone_. For this reason, we localize the external force there, parametrizing it by \(\omega\) as \[\Omega(\xi)=-\omega^{2}\,\theta(L-\xi),\hskip 28.452756pt\Omega^{\dagger}=\Omega. \tag{33}\] Here \(\theta\) is the Heaviside step-function. Similarly, we define the interaction in the open channel by the square-well potential: \[V_{\rm sq}(\xi)=-V\,\theta(L-\xi),\hskip 14.226378ptV>0. \tag{34}\] The strength \(V\) along with \(\omega^{2}\) are the variable parameters of the model. Since we are studying the mechanism of the emergence of a resonance and a two-particle composite, the refinement of the nature and form of these extra interactions remains for further consideration. Here we only use their simplest version and discuss their origin after the computations are performed. By construction, all spatial functions in such a model are divided into two components belonging either to the interval \(\xi\in[0;L]\) or to the interval \(\xi\in[L;\infty)\), which we label by "\(<\)" and "\(>\)" relative to the separating point \(\xi=L\). Then, the wave functions for the channels are numbered by \(\alpha=1,2\) and decomposed as \[\chi^{(\alpha)}(\xi)=\theta(L-\xi)\,\chi^{(\alpha)}_{<}(\xi)+\theta(\xi-L)\,\chi^{(\alpha)}_{>}(\xi). \tag{35}\] We connect the functions at the separating point \(\xi=L\) by the matching condition \[\frac{\mathrm{d}}{\mathrm{d}\xi}\,\ln\chi^{(\alpha)}_{<}(\xi)\bigg{|}_{\xi=L}=\left.\frac{\mathrm{d}}{\mathrm{d}\xi}\,\ln\chi^{(\alpha)}_{>}(\xi)\right|_{\xi=L}, \tag{36}\] to guarantee the equality of derivatives and proportionality of the functions on the left and right sides of (36). Before proceeding further, we recall the known results for the open channel (\(\alpha=2\)) in the absence of the coupling \(\Omega\). The scattering characteristics result from the equation \[H_{\rm wv}\chi^{(2)}\equiv\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}\xi^{2}}+V_{\rm sq}(\xi)\right)\chi^{(2)}=E\chi^{(2)}. \tag{37}\] The solution to Eq. (37) is given as \[\chi^{(2)}(\xi)=\theta(L-\xi)\,\chi^{(2)}_{<}(\xi)+\theta(\xi-L)\,\chi^{(2)}_{>}(\xi), \tag{38}\] \[\chi^{(2)}_{<}(\xi)=\sin K\xi,\hskip 14.226378pt\chi^{(2)}_{>}(\xi)=\sin KL\,\frac{\sin\left(k\xi+\delta\right)}{\sin\left(kL+\delta\right)}, \tag{39}\] where \(K=\sqrt{E+V}\) and \(k=\sqrt{E}\), that is, \(E=k^{2}\). Note that \(\chi^{(2)}_{>}\) behaves as \(\exp\left(-\sqrt{|E|}\xi\right)\) when \(E<0\). The phase shift \(\delta\) is derived from the relation (36): \[K\cot\left(KL\right)=k\cot\left(kL+\delta\right).
\tag{40}\] Then, computing the scattering length as \[a=-\lim_{k\to 0}\frac{\tan\delta(k)}{k}, \tag{41}\] one finds its expression for the potential \(V_{\rm sq}(\xi)\): \[a_{V}=L\,\left(1-\frac{\tan\left(\sqrt{V}L\right)}{\sqrt{V}L}\right). \tag{42}\] Note that \(a_{V}\) demonstrates discontinuous behavior for \(\sqrt{V}L=(2n-1)\pi/2\) and \(n\in\mathbb{N}\). We omit the detailed consideration of this zero-energy resonance. Let us emphasize the essential difference between the physical consequences of the zero and divergent scattering lengths \(a_{V}\) in one and three dimensions, despite the formal similarity of the presented expressions to the three-dimensional case [59]. While a vanishing \(a_{V}\) means complete transparency in three dimensions, the opposite effect occurs in one dimension: the reflection coefficient becomes equal to unity, which leads to complete opacity. Transparency in one dimension is achieved when \(a_{V}\) diverges. Nevertheless, the divergence of the scattering length reveals a zero-energy bound state in both the three-dimensional and one-dimensional cases [60]. In one dimension (see [61]), the scattering matrix \(S=\mathrm{e}^{2\mathrm{i}\delta}\) and amplitude \(f\) are \[S=\mathrm{e}^{-2\mathrm{i}kL}\,\frac{K\cot{(KL)}+\mathrm{i}k}{K\cot{(KL)}-\mathrm{i}k},\quad f=\frac{1}{2}\left(\mathrm{e}^{2\mathrm{i}\delta}-1\right). \tag{43}\] These formulas tell us how to extract scattering data in the open channel.

### Two-Channel Quantum Mechanics

Thus, we admit a single bound state in the closed channel and a continuum of waves with momentum \(k\) in the open channel. Taking into account the complexity of the problem involving the anti-kink potential, we intend to analytically describe the Feshbach resonance between the channels in the first approximation. As mentioned above, the initial set of equations in the entire space is \[(H_{\mathrm{ak}}+Q-E)\chi^{(1)}+\Omega\chi^{(2)} = 0, \tag{44}\] \[(H_{\mathrm{wv}}-E)\chi^{(2)}+\Omega^{\dagger}\chi^{(1)} = 0. \tag{45}\] For convenience, we will use the bra- and ket-vectors to simplify the notation of matrix elements. In the first approximation, we put [44]: \[|\chi^{(1)}\rangle=\frac{\lambda}{N(L)}\,|\chi_{\mathrm{ak}}\rangle,\quad\langle\xi|\chi_{\mathrm{ak}}\rangle=\chi_{\mathrm{ak}}(\xi-L), \tag{46}\] where \(\lambda\) is a complex constant which should be found; \(N(L)\) is given by Eq. (27). Acting by \(\langle\chi_{\mathrm{ak}}|\) on Eq. (44), one has \[\lambda=\frac{\langle\chi_{\mathrm{ak}}|\Omega|\chi_{<}^{(2)}\rangle}{E-Q}, \tag{47}\] where it has been used that \(H_{\mathrm{ak}}|\chi_{\mathrm{ak}}\rangle=0\), the normalization \(\langle\chi_{\mathrm{ak}}|\chi_{\mathrm{ak}}\rangle=N(L)\), and the equality \(\langle\chi_{\mathrm{ak}}|\Omega|\chi^{(2)}\rangle=\langle\chi_{\mathrm{ak}}|\Omega|\chi_{<}^{(2)}\rangle\) due to the form of \(\Omega(\xi)\). At this stage, the coefficient \(\lambda\) still depends on the unknown function \(\chi_{<}^{(2)}\). Introducing the auxiliary Hamiltonian \[H_{2}=-\frac{\mathrm{d}^{2}}{\mathrm{d}\xi^{2}}-K^{2},\hskip 28.452756ptK^{2}=E+V, \tag{48}\] which is defined in the region \(\xi\in[0;L]\), the equations for the open channel take the form \[H_{2}|\chi_{<}^{(2)}\rangle+\Omega^{\dagger}|\chi^{(1)}\rangle = 0, \tag{49}\] \[-\frac{\mathrm{d}^{2}\chi_{>}^{(2)}}{\mathrm{d}\xi^{2}}-k^{2}\chi_{>}^{(2)} = 0. \tag{50}\] The solution to Eq.
(49) can be written as \[|\chi_{<}^{(2)}\rangle = |\tau_{0}\rangle-G_{2}^{(+)}\Omega^{\dagger}|\chi^{(1)}\rangle \tag{51}\] \[= |\tau_{0}\rangle-\frac{\lambda}{N(L)}\,G_{2}^{(+)}\Omega^{\dagger}|\chi_{\mathrm{ak}}\rangle,\] where the unperturbed wave function \(\tau_{0}(\xi)=\langle\xi|\tau_{0}\rangle\) coincides with \(\chi_{<}^{(2)}(\xi)\) from Eq. (39) and is such that \[H_{2}\,\tau_{0}(\xi)=0,\hskip 28.452756pt\tau_{0}(\xi)=\sin{K}\xi. \tag{52}\] The Hamiltonian \(H_{2}\) also determines the Green's operator \(G_{2}^{(+)}=(H_{2}-\mathrm{i}\epsilon)^{-1}\) that contains the shifted energy \(E+\mathrm{i}\epsilon\) at \(\epsilon\to 0\). The corresponding Green's function is \[G_{2}^{(+)}(\xi,\zeta;K)=\frac{\sin{K}\xi_{1}\cos{K}\xi_{2}}{K}+\mathrm{i}\,\frac{\sin{K}\xi\sin{K}\zeta}{K}, \tag{53}\] \[\xi_{1}=\min(\xi,\zeta),\hskip 28.452756pt\xi_{2}=\max(\xi,\zeta),\] which serves for finding the outgoing wave under the boundary condition \(G_{2}^{(+)}(0,\zeta;K)=0\). To express \(\lambda\) in terms of the known solutions \(\tau_{0}\) and \(\chi_{\mathrm{ak}}\), let us operate by \(\langle\chi_{\mathrm{ak}}|\Omega\) on Eq. (51). Then we obtain \[\lambda=\frac{\langle\chi_{\mathrm{ak}}|\Omega|\tau_{0}\rangle}{E-Q+N^{-1}(L)\langle\chi_{\mathrm{ak}}|\Omega G_{2}^{(+)}\Omega^{\dagger}|\chi_{\mathrm{ak}}\rangle}. \tag{54}\] The condition of vanishing of the denominator reveals the isolated (quasi-discrete) energy level of the dressed state [43; 44]. Having introduced the notations \[\omega^{4}\Delta_{L}(K) = N^{-1}(L)\,\mathrm{Re}\,\langle\chi_{\mathrm{ak}}|\Omega G_{2}^{(+)}\Omega^{\dagger}|\chi_{\mathrm{ak}}\rangle, \tag{55}\] \[\omega^{4}\gamma_{L}(K) = N^{-1}(L)\,\mathrm{Im}\,\langle\chi_{\mathrm{ak}}|\Omega G_{2}^{(+)}\Omega^{\dagger}|\chi_{\mathrm{ak}}\rangle, \tag{56}\] let us sketch how this works for a fixed \(Q>0\). We first imagine the situation when \(V\gg E\) for \(E\to 0\), and the denominator of \(\lambda\) vanishes at some complex value of the energy \(E^{\prime}-\mathrm{i}\Gamma^{\prime}/2\), thereby making the magnitude of the wave functions extremely large. The resonance energy \(E=E^{\prime}\) and decay width \(\Gamma^{\prime}\) may be simply determined: \[E^{\prime}=Q-\omega^{4}\Delta_{L}(\sqrt{V}),\hskip 28.452756pt\Gamma^{\prime}=2\omega^{4}\gamma_{L}(\sqrt{V}). \tag{57}\] For relatively small \(\omega^{2}\) and positive \(\Delta_{L}(\sqrt{V})\), we can achieve that \(Q>E^{\prime}>0\) due to the two additional interactions. Besides, for positive \(\gamma_{L}(\sqrt{V})\), the lifetime of a particle in such a state is \(\tau=2/\Gamma^{\prime}\), in dimensionless units. In the general case, we write \[\lambda=\frac{\langle\chi_{\mathrm{ak}}|\Omega|\tau_{0}\rangle}{E-Q+\omega^{4}\Delta_{L}(K)+\mathrm{i}\omega^{4}\gamma_{L}(K)}. \tag{58}\] Consider the overlap integral that defines \(\langle\chi_{\mathrm{ak}}|\Omega|\tau_{0}\rangle\): \[B_{L}(K)\equiv 4\int_{0}^{L}\arctan{\mathrm{e}^{L-\xi}}\,\sin{K}\xi\,\mathrm{d}\xi. \tag{59}\] It can be transformed to the form \[B_{L}(K) = \frac{1}{2{\rm i}}\left[\phi_{L}(L,{\rm i}K)-\phi_{L}(L,-{\rm i}K)\right] \tag{60}\] \[-\frac{1}{2{\rm i}}\left[\phi_{L}(0,{\rm i}K)-\phi_{L}(0,-{\rm i}K)\right].\] Using the Lerch transcendent \(\Phi(z,s,a)\), we introduce \[\phi_{L}(\xi,a) \equiv \frac{{\rm e}^{a\xi}}{a}\left[\,4\arctan{\rm e}^{L-\xi}\right. \tag{61}\] \[\left.-2\,{\rm e}^{L-\xi}\,\Phi\left(-{\rm e}^{2(L-\xi)},1,\frac{1-a}{2}\right)\right].\] This function is such that \[\partial_{\xi}\phi_{L}(\xi,a)={\rm e}^{a\xi}\chi_{\rm ak}(\xi-L).
\tag{62}\] Thus, we have \(\langle\chi_{\rm ak}|\Omega|\tau_{0}\rangle=-\omega^{2}B_{L}(K)\). As seen above, the first correction to the wave function \(\chi_{<}^{(2)}(\xi)\) is determined by \(G_{2}^{(+)}\Omega^{\dagger}|\chi_{\rm ak}\rangle\). To find the complex function \(\langle\xi|G_{2}^{(+)}\Omega^{\dagger}|\chi_{\rm ak}\rangle=-\omega^{2}\tau_{1}(\xi)\), we turn to the solution of the inhomogeneous equation \[H_{2}\,\tau_{1}(\xi)=\chi_{\rm ak}(\xi-L), \tag{63}\] \[\tau_{1}(\xi)=\int_{0}^{L}G_{2}^{(+)}(\xi,\zeta;K)\,\chi_{\rm ak}(\zeta-L)\,{\rm d}\zeta. \tag{64}\] On computing, the summands of \(\tau_{1}(\xi)=\tau_{1}^{\rm R}(\xi)+{\rm i}\,\tau_{1}^{\rm I}(\xi)\) are written as \[\tau_{1}^{\rm R}(\xi) = \frac{{\rm e}^{-{\rm i}K\xi}}{2{\rm i}K}\,\phi_{L}(\xi,{\rm i}K)-\frac{{\rm e}^{{\rm i}K\xi}}{2{\rm i}K}\,\phi_{L}(\xi,-{\rm i}K) \tag{65}\] \[+[\phi_{L}(L,{\rm i}K)+\phi_{L}(L,-{\rm i}K)]\,\frac{\sin{(K\xi)}}{2K}\] \[+{\rm i}\,[\phi_{L}(0,{\rm i}K)-\phi_{L}(0,-{\rm i}K)]\,\frac{\cos{(K\xi)}}{2K},\] \[\tau_{1}^{\rm I}(\xi) = B_{L}(K)\,\frac{\sin{(K\xi)}}{K}. \tag{66}\] Taking into account the form of \(\tau_{1}^{\rm I}(\xi)\) and the definition of \(B_{L}(K)\), we specify the function \(\gamma_{L}(K)\) [see Eq. (56)]: \[\gamma_{L}(K)=\frac{B_{L}^{2}(K)}{K\,N(L)}\geq 0. \tag{67}\] At the same time, the real part \(\tau_{1}^{\rm R}(\xi)\) also determines the deviation \(\Delta_{L}(K)\) so that \[\Delta_{L}(K)=N^{-1}(L)\int_{0}^{L}\chi_{\rm ak}(\xi-L)\,\tau_{1}^{\rm R}(\xi)\,{\rm d}\xi. \tag{68}\] This integral is not simple to calculate analytically; instead, we present the numerical result in Fig. 5. Let us note the similarity of the behavior of \(\gamma_{L}(K)\) and \(\Delta_{L}(K)\) with analogous functions from Ref. [56], which are calculated for another potential in three dimensions. _This means that the number of spatial dimensions does not affect the main physical aspect of the problem_. Combining these results, the closed-channel wave function in the first approximation reads \[\chi^{(1)}(\xi)=-\frac{\omega^{2}}{D_{L}(K)}\,\sqrt{\frac{K\gamma_{L}(K)}{N(L)}}\,\chi_{\rm ak}(\xi-L), \tag{69}\] where we have used the notation \(D_{L}(K)\) introduced as \[D_{L}(K) = K^{2}-V-Q+\omega^{4}\Delta_{L}(K)+{\rm i}\omega^{4}\gamma_{L}(K) \tag{70}\] \[= D_{L}^{\rm R}(K)+{\rm i}D_{L}^{\rm I}(K).\] Solution (69) vanishes at \(\omega=0\) and describes a short-lived state, as is seen by restoring for a moment the time dependence through the decaying factor \(\exp{(-{\rm i}Et)}\) determined by the complex \(E=Q-\omega^{4}\Delta_{L}(K)-{\rm i}\omega^{4}\gamma_{L}(K)\). Combining the terms with \(\tau_{0}(\xi)\) and \(\tau_{1}^{\rm I}(\xi)\) due to the proportionality \(\tau_{1}^{\rm I}(\xi)\propto\tau_{0}(\xi)\), we present the open channel wave function in the resonance zone as \[\chi_{<}^{(2)}(\xi)=\frac{D_{L}^{\rm R}(K)}{D_{L}(K)}\,\tau_{0}(\xi)-\frac{\omega^{4}}{D_{L}(K)}\,\sqrt{\frac{K\gamma_{L}(K)}{N(L)}}\,\tau_{1}^{\rm R}(\xi). \tag{71}\] To find the phase shift \(\delta\) for the function \(\chi_{>}^{(2)}(\xi)\), namely \[\chi_{>}^{(2)}(\xi) = \chi_{<}^{(2)}(L)\,\frac{\sin{(k\xi+\delta)}}{\sin{(kL+\delta)}}; \tag{72}\] \[\chi_{<}^{(2)}(L) = \frac{D_{L}^{\rm R}(K)\sin{KL}-D_{L}^{\rm I}(K)\cos{KL}}{D_{L}(K)},\] we appeal to the matching condition (36).
We obtain \[K\,\frac{D_{L}^{\rm R}(K)\cos{KL}+D_{L}^{\rm I}(K)\sin{KL}}{D_{L}^{\rm R}(K)\sin{KL}-D_{L}^{\rm I}(K)\cos{KL}}=k\cot{(kL+\delta)}.\] This relation can be rewritten as \[K\cot{(KL-\delta_{\rm rs})}=k\cot{(kL+\delta)}, \tag{73}\] \[\delta_{\rm rs}(K)=\arctan{\frac{D_{L}^{\rm I}(K)}{D_{L}^{\rm R}(K)}}, \tag{74}\] where \(\delta_{\rm rs}\) is the phase shift caused by interactions in the resonance zone. At \(\delta_{\rm rs}\equiv 0\), only potential scattering with \(V_{\rm sq}(x)\) remains in the open channel. Now it is easy to extract the total phase shift: \[\delta = -kL+\arctan{\left[\frac{k}{K}\,\tan{(KL-\delta_{\rm rs})}\right]} \tag{75}\] \[\simeq k\left[\frac{\tan{(KL-\delta_{\rm rs})}}{K}-L\right],\] where the second expression is used for \(k\to 0\). ### Feshbach Phenomenon Let us dwell on the effects at zero energy and momentum \(k\). It is reasonable to study the properties of the scattering length \(a\) of the particles in the open channel, having obtained the phase shift \(\delta\) in Eq. (75) and using Eq. (41). For our purposes, we represent \(a\) in the form \[a(\omega^{2})=a_{\rm bg}\,\left(1+\frac{\mathcal{D}}{\omega^{4}-\omega_{c}^{4}}\right). \tag{76}\] In this formula, we have explicitly taken into account the dependence on the magnitude of the external influence \(\omega^{2}\) [see (33)] and shown the presence of the critical value \(\omega_{c}^{2}\): \[\omega_{c}^{2}=\frac{\sqrt{Q}}{\sqrt{\Delta_{L}(K_{V})+\gamma_{L}(K_{V})\tan{(K_{V}L)}}};\quad K_{V}\equiv\sqrt{V}. \tag{77}\] This is determined by the energy gap \(Q>0\) between the two channels, which actually coincides with the kinetic energy of the incident particle outside the resonance zone at \(\xi>L\). Avoiding here the zero-energy resonances in the open channel at \(K_{V}L=(2n-1)\pi/2\) for \(n\in\mathbb{N}\), we require \(0<\Delta_{L}(K_{V})+\gamma_{L}(K_{V})\tan{(K_{V}L)}<\infty\) to ensure a real value of \(\omega_{c}^{2}\). The remaining characteristics are given as \[a_{\rm bg}=a_{V}+\ell,\qquad\mathcal{D}=\omega_{c}^{4}\,\frac{\ell}{a_{\rm bg}}, \tag{78}\] \[\ell=\frac{1}{K_{V}}\,\frac{\gamma_{L}(K_{V})[1+\tan^{2}{(K_{V}L)}]}{\Delta_{L}(K_{V})+\gamma_{L}(K_{V})\tan{(K_{V}L)}}>0, \tag{79}\] where \(a_{V}\) is defined by (42), while \(a_{\rm bg}\) is the so-called background scattering length. Note that the dependence of \(a\) on \(\omega^{2}\) vanishes at \(Q=0\) (\(\omega_{c}^{2}=0\)), so that \(a=a_{V}+\ell\). At \(V\to 0\), we obtain \(a_{V}=0\), and \[a = \ell_{0}\,\frac{\omega^{4}}{\omega^{4}-\omega_{c}^{4}},\qquad\omega_{c}^{4}=\frac{Q}{\Delta_{L}(0)}, \tag{80}\] \[\ell_{0} = \frac{1}{N(L)\,\Delta_{L}(0)}\lim_{K\to 0}\left(\frac{B_{L}(K)}{K}\right)^{2}, \tag{81}\] where \(B_{L}(K)\) is defined by Eq. (59), and Eq. (67) is used. Thus, we obtain the relation \(a=\ell_{0}(L)\propto L\) when both \(Q=0\) and \(V=0\). In general, the Feshbach phenomenon, which is our focus here, results from the _dependence of the scattering length \(a\) on the interaction parameters_, which allows us to detect bound states through the divergence of \(a\). Our model assumes that the change in \(a\) depends on \(Q\) (defining \(\omega_{c}^{2}\)) and on \(\omega^{2}\), while \(V>0\) is given. Analyzing Eq. (76), let us fix \(L=5\) and \(K_{V}=0.2\), which provides \(K_{V}<K_{\Delta}\) and \(\Delta_{L}(K_{V})>0\) (see Fig. 5), and \(Q=0.15\).
It leads to the following characteristics in dimensionless units: \(a_{V}\simeq-2.787039\), \(a_{\rm bg}\simeq 3.467758\), \(\omega_{c}^{2}\simeq 0.1128185\), and \(\mathcal{D}\simeq 0.0229575\). The dependence of \(a\) on the external impact strength \(\omega^{2}\) is shown in Fig. 6, which confirms that the bound state does indeed occur at \(\omega^{2}=\omega_{c}^{2}\). The critical value \(\omega_{c}^{2}\) determines the threshold for the production of shallow dimers at \(a(\omega^{2})\gg L\), with the binding energy [62] \[E_{\rm bind}=-\varkappa^{2}\propto-\frac{1}{a^{2}(\omega^{2})}. \tag{82}\] This follows from considering the case of small negative energy \(E\to 0^{-}\), when \(\chi_{>}^{(2)}\propto\exp{[-\varkappa(\xi-L)]}\) for \(\xi>L\), and Eq. (36) yields \[-\varkappa=\lim_{k\to 0}\left.\frac{\rm d}{{\rm d}\xi}\ln{\chi_{<}^{(2)}(\xi)}\right|_{\xi=L}=-\frac{1}{a(\omega^{2})-L}. \tag{83}\] Thus, the Feshbach phenomenon justifies the need for a large scattering length in the formation of composites (of at least two particles), as predicted in Ref. [34] using phenomenological approaches. This is due to the fact that the Feshbach phenomenon, as a zero-energy effect, is valid in different numbers of spatial dimensions, although there are distinct physical implications of zero and diverging scattering lengths in one and three dimensions [60]. In principle, the energy gap \(Q\) can be maintained by a spatially homogeneous interaction that induces the energy predominance of one configuration of the system of particles over another. In this regard, we mention experiments with alkali atoms, the energy configurations of which are determined by the spin and the applied magnetic field. There, the Feshbach phenomenon is accordingly related to a resonant transition between configurations as the magnetic field is changed [62; 42]. ### Resonance Scattering To reveal the newly formed bound state (of two axions), we also investigate the resonance scattering of an incident particle with a nonzero energy \(E=k^{2}\).

Figure 6: Scattering length \(a\) as a function of the external impact \(\omega^{2}\). The dashed straight lines are asymptotics. The black line corresponds to \(\omega^{2}=\omega_{c}^{2}\), while the green and orange lines are for \(a=a_{\rm bg}\) and \(a=a_{V}\), respectively.

The scattering matrix element \(S=\mathrm{e}^{2\mathrm{i}\delta}\) for the open channel [cf. Eq. (43)] is \[S=\mathrm{e}^{-2\mathrm{i}kL}\,\frac{K\cot{(KL-\delta_{\mathrm{rs}})}+\mathrm{i}k}{K\cot{(KL-\delta_{\mathrm{rs}})}-\mathrm{i}k}. \tag{84}\] Denoting its denominator as \[F(k)=K\cot{(KL-\delta_{\mathrm{rs}})}-\mathrm{i}k;\quad K=\sqrt{k^{2}+V}, \tag{85}\] the condition \(F(k)=0\) determines the pole of \(S\) and the resonance point (although not every pole of \(S\) is related to the existence of a compound system). Usually, a resonance is observed in a narrow region of energy \(E\) that covers the resonant value \(E_{\mathrm{res}}=E_{0}-\mathrm{i}\Gamma_{0}/2\) with some \(E_{0}>0\) and \(\Gamma_{0}>0\). The positivity of \(E_{0}\) makes this level unstable, with lifetime \(\tau=2/\Gamma_{0}\) in dimensionless units. Here, we find \(E_{\mathrm{res}}=K_{\mathrm{res}}^{2}-V\) for given \(Q\), \(V\), \(\omega\), and \(L\) by solving the equation \(F(\sqrt{K_{\mathrm{res}}^{2}-V})=0\) in an appropriate form.
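As a quick cross-check of the zero-energy behaviour before locating the pole of \(S\), Eq. (76) can be evaluated directly. The short Python sketch below is our own illustration: it takes the characteristic values quoted above (\(a_{\rm bg}\), \(\omega_{c}^{2}\), \(\mathcal{D}\)) as given inputs and reproduces both the divergence of \(a\) at \(\omega^{2}=\omega_{c}^{2}\) and the asymptote \(a\to a_{\rm bg}\) seen in Fig. 6; the variable names are ours.

```python
import numpy as np

# Characteristic values quoted in the text (dimensionless units).
a_bg = 3.467758      # background scattering length, Eq. (78)
wc2 = 0.1128185      # critical coupling omega_c^2, Eq. (77)
D = 0.0229575        # resonance strength, Eq. (78)

def scattering_length(w2):
    """Open-channel scattering length a(omega^2) of Eq. (76); w2 = omega^2."""
    return a_bg * (1.0 + D / (w2**2 - wc2**2))

# Consistency check: at omega^2 = 0 we recover a_V = a_bg - ell.
print(f"a(0) = {scattering_length(0.0):+.6f}   (compare a_V = -2.787039)")

# Divergence near the critical coupling and saturation far from it.
for w2 in (0.9 * wc2, 0.99 * wc2, 1.01 * wc2, 1.1 * wc2, 5.0 * wc2):
    print(f"a({w2:.6f}) = {scattering_length(w2):+10.3f}")
```

The printed value at \(\omega^{2}=0\) recovers \(a_{V}\), as it must, since \(a_{\rm bg}=a_{V}+\ell\) and \(\mathcal{D}=\omega_{c}^{4}\ell/a_{\rm bg}\). We now return to locating the pole of \(S\).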
After an identical transformation of \(F(\sqrt{K^{2}-V})=0\), the following equation should be solved numerically at \(\omega^{4}\ll 1\) by using an iterative procedure with the initial value \(K=\sqrt{V+Q}\): \[K_{m+1}^{2} = V+Q\] \[- \omega^{4}\left[\Delta_{L}(K_{m})+\mathrm{i}\gamma_{L}(K_{m})\,\frac{k_{m}\cot{K_{m}L}-\mathrm{i}K_{m}}{K_{m}\cot{K_{m}L}-\mathrm{i}k_{m}}\right],\] \[k_{m} = \sqrt{K_{m}^{2}-V}. \tag{86}\] Omitting the indexes \(m\) and \(m+1\) restores an equation equivalent to \(F(\sqrt{K^{2}-V})=0\). To extract the resonance part of the \(S\)-matrix, we expand the complex function \(F(k)\) of a real \(k\) in the vicinity of the complex root \(k_{\mathrm{res}}=\sqrt{E_{\mathrm{res}}}=k_{r}-\mathrm{i}\kappa_{r}\) as \[F(k)\simeq C\,(k-k_{r}+\mathrm{i}\kappa_{r}),\quad F(k_{\mathrm{res}})=0, \tag{87}\] where \(C\equiv F^{\prime}(k_{\mathrm{res}})\) is a complex constant. Since \(k=k_{\mathrm{res}}^{*}\) solves the conjugate equation \(F^{*}(k)=0\), and \(F^{*}(k)\simeq C^{*}(k-k_{r}-\mathrm{i}\kappa_{r})\) near the resonance, we define the phase shift \(\delta_{0}\) related to potential scattering so that \[\mathrm{e}^{2\mathrm{i}\delta_{0}}=\mathrm{e}^{-2\mathrm{i}kL}\,\frac{C^{*}}{C}. \tag{88}\] The calculated ingredients enable us to write down the scattering matrix and the total phase shift in the form \[S \simeq \mathrm{e}^{2\mathrm{i}\delta_{0}}\,\frac{k-k_{r}-\mathrm{i}\kappa_{r}}{k-k_{r}+\mathrm{i}\kappa_{r}}, \tag{89}\] \[\delta = \delta_{0}-\arctan{\frac{\kappa_{r}}{k-k_{r}}}. \tag{90}\] Representing these quantities in the conventional form in terms of \(E\), we must expand \(F(\sqrt{E})\) in powers of \(E\): \[F(\sqrt{E})\simeq\frac{C}{2k_{\mathrm{res}}}\,(E-E_{\mathrm{res}}), \tag{91}\] where \(C\) is as above. This leads to a redefinition of the phase shift \(\delta_{0}\) because of the relation \[\arctan{\frac{\kappa_{r}}{k-k_{r}}}-\frac{1}{2}\arctan{\frac{\kappa_{r}}{k_{r}}}=\arctan{\frac{\Gamma_{0}}{2(E-E_{0})}}. \tag{92}\] Thus, we can see that the phase shift \(\delta\) experiences a jump \(\delta(k_{r}-0)-\delta(k_{r}+0)=\pi\), which reveals a resonance. Besides, \(\delta\) determines the cross section \(\sigma\) in accordance with the optical theorem in one dimension [61; 63]: \[\sigma(k)=2\sin^{2}{\delta(k)}. \tag{93}\] Iterating Eq. (86) for \(L=5\), \(V=0.04\), \(Q=0.15\), and \(\omega_{1}^{4}=\omega_{c}^{4}\), used to test the Feshbach phenomenon above, we obtain the solution \[E_{\mathrm{res}}=0.1752355035-\mathrm{i}\,0.07423053097. \tag{94}\] The behavior of \(\delta(k)\) and \(\sigma(k)\) is depicted in Fig. 7 and shows that an incident particle with momentum \(k=\sqrt{Q}\) can be bound if \(k\) lies within the interval \((k_{r}-\kappa_{r}/2;k_{r}+\kappa_{r}/2)\) for \(k_{r}=\mathrm{Re}\,\sqrt{E_{\mathrm{res}}}>0\) and \(\kappa_{r}=-\mathrm{Im}\,\sqrt{E_{\mathrm{res}}}>0\). Note that the asymmetric form of the resonant peak in Fig. 7(b) is due to the term \(-kL\) in the phase shift \(\delta_{0}\). The obtained formulas and results describe, in general, the mechanism of the occurrence of a resonance (associated with a two-particle complex, i.e., a dimer) without specifying the extra interactions used. Although the model needs to be refined in accordance with specific physical conditions, this formalism remains applicable to various studies.

Figure 7: Phase shift \(\delta\) (a) and cross section \(\sigma\) (b) as functions of incident momentum \(k\). The jump and the peak in the graphs indicate the resonance and appear at \(k_{r}=\mathrm{Re}\,\sqrt{E_{\mathrm{res}}}\simeq 0.4275189270\).
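The iteration in Eq. (86) is straightforward to implement. In the sketch below (ours, for illustration), \(B_{L}(K)\) and \(\gamma_{L}(K)\) follow Eqs. (59) and (67); the level shift \(\Delta_{L}(K)\) of Eq. (68) and the antikink norm \(N(L)\) are not reproduced in closed form here, so they enter as user-supplied placeholders, e.g., interpolated from the numerical data behind Fig. 5. Since \(\omega^{4}\ll 1\), the shift functions are evaluated at \(\mathrm{Re}\,K\).

```python
import numpy as np
from scipy.integrate import quad

L, V, Q = 5.0, 0.04, 0.15
w4 = 0.1128185**2                 # omega^4 = omega_c^4, as in the test above

def B_L(K):
    """Overlap integral of Eq. (59), evaluated by quadrature."""
    val, _ = quad(lambda x: 4.0 * np.arctan(np.exp(L - x)) * np.sin(K * x), 0.0, L)
    return val

def gamma_L(K, N):
    """Eq. (67); N = N(L) is the antikink norm (assumed supplied)."""
    return B_L(K)**2 / (K * N)

def resonance_energy(Delta_L, N, n_iter=50):
    """Fixed-point iteration of Eq. (86); returns E_res = K_res^2 - V.

    Delta_L : callable K -> Delta_L(K), the real level shift of Eq. (68)
              (placeholder; must be provided by the user).
    """
    K = complex(np.sqrt(V + Q))   # initial value K = sqrt(V + Q)
    for _ in range(n_iter):
        Kr = float(K.real)        # shift functions evaluated at Re K
        k = np.sqrt(K**2 - V + 0j)
        cot = 1.0 / np.tan(K * L)
        ratio = (k * cot - 1j * K) / (K * cot - 1j * k)
        K = np.sqrt(V + Q - w4 * (Delta_L(Kr) + 1j * gamma_L(Kr, N) * ratio))
    return K**2 - V
```

With \(\Delta_{L}\) and \(N(L)\) supplied, a few tens of iterations suffice to settle on a value such as Eq. (94), since the correction term is \(O(\omega^{4})\).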
### Analysis of DM Dimer Let us now convert the dimensionless characteristics into physical units. The main scale we need is the length scale \(r_{0}\) associated with the interaction radius. In principle, its refinement requires additional considerations that are beyond the scope of this study. In any case, we adopt \(r_{0}\) as given by Eq. (20), a measure of the size of the DM halo core. Taking into account the relation to the dimensionless variables in Sec. II, the nonrelativistic scales for energy and time are \[\varepsilon_{0}=\frac{\hbar^{2}}{2mr_{0}^{2}},\quad\tau_{0}=\frac{\hbar}{2\varepsilon_{0}}, \tag{95}\] where \(m\) is the mass of the axionlike particle, and \(\tau_{0}\) is determined using the (minimum) uncertainty principle. Substituting the typical values \(m=10^{-22}\,\mathrm{eV}/c^{2}\) and \(r_{0}=0.138\,\mathrm{kpc}\) from Sec. II, we arrive at the estimates \[\varepsilon_{0}\simeq 1.074\times 10^{-29}\,\mathrm{eV},\quad\tau_{0}\simeq 9.715\times 10^{5}\,\mathrm{yrs}. \tag{96}\] Upon analysis, the resonance energy \(\varepsilon_{0}\,\mathrm{Re}\,E_{\mathrm{res}}\simeq 1.882\times 10^{-30}\,\mathrm{eV}\) turns out to be tens of orders of magnitude lower than the critical temperature \(T_{c}^{(d=3)}\) of free-boson BEC in three dimensions. This results from substituting the axionlike particle concentration \(n^{(d=3)}=\rho_{0}/m\simeq 5.61\times 10^{38}\,\mathrm{m}^{-3}\) at typical parameters in Sec. II into the known expression written for \(d\) dimensions: \[T_{c}^{(d)}=\frac{2\pi\hbar^{2}}{m}\left(\frac{n^{(d)}}{\zeta(d/2)}\right)^{2/d}, \tag{97}\] where \(\zeta(s)\) is the Riemann zeta-function. In fact, we get the same result in the one-dimensional case, defining the concentration as \(n^{(d=1)}\sim(n^{(d=3)})^{1/3}\). Defining the resonance lifetime as \(t=2\tau_{0}/\Gamma_{0}\) and accounting for \(\Gamma_{0}\simeq 0.148\) as in the test above, we deduce that the compound system remains stable over a period \[t\simeq 1.313\times 10^{7}\,\mathrm{yrs}. \tag{98}\] We associate the resonance with a dimer (two-axion composite), whose one-dimensional wave function \(\chi_{\mathrm{D}}(\xi)\) and binding energy \(E_{\mathrm{bind}}\) in dimensionless units are expected to be [see Eqs. (82)-(83)] \[\chi_{\mathrm{D}}(\xi)\propto\exp\left(-\frac{\xi}{a}\right),\quad E_{\mathrm{bind}}=-\frac{1}{a^{2}}. \tag{99}\] Here \(a\) is the large scattering length given by Eq. (76); \(\xi\gg L\) is a dimensionless distance between particles; besides, the dimensionless binding energy has to be converted into physical units using the scale \(\varepsilon_{0}\). These formulas are valid for \(a\gg L\), where \(L\) is related to the dimensionless radius of axionlike interaction (see Sec. III). This indicates the regime of large scattering length \(a\), which is achieved for a coupling \(\omega^{2}\) near its critical value \(\omega_{c}^{2}\) (see Fig. 6). This thereby confirms the hypothesis about the need for large \(a\) made in Ref. [34]. Note that the three-dimensional wave function of a dimer in the spherically symmetric case behaves as \(\exp{(-\mu r)}/r\), where \(r\) is the distance between particles, and is often encountered in simplified descriptions of composites in nuclear physics, for example, the deuteron in its ground state [59; 62]. The zoo of diverse (two-particle) states in this field of physics also makes it possible to discover analogs of molecules, the formation of which is viewed within the Feshbach resonance concept.
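The unit conversion in Eqs. (95)-(98) is easy to verify; the following few lines of Python (ours, using standard values for the physical constants) reproduce the quoted estimates:

```python
# Physical constants in eV-based units.
hbar = 6.582119569e-16       # eV s
hbar_c = 1.973269804e-7      # eV m
kpc = 3.0857e19              # m
yr = 3.156e7                 # s

# Typical axionlike DM parameters from Sec. II.
mc2 = 1.0e-22                # particle rest energy m c^2 [eV]
r0 = 0.138 * kpc             # DM halo core scale [m]

# Energy and time scales, Eq. (95).
eps0 = hbar_c**2 / (2.0 * mc2 * r0**2)        # [eV]
tau0 = hbar / (2.0 * eps0)                    # [s]
print(f"eps0 = {eps0:.3e} eV")                # ~1.074e-29 eV, cf. Eq. (96)
print(f"tau0 = {tau0 / yr:.3e} yr")           # ~9.715e5 yr, cf. Eq. (96)

# Dimer lifetime t = 2 tau0 / Gamma0 with the numerically found width.
Gamma0 = 0.148
print(f"t = {2.0 * tau0 / Gamma0 / yr:.3e} yr")   # ~1.313e7 yr, cf. Eq. (98)
```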
An important guide for confirming the existence of axion dimers can be dipion molecules. This follows from the common nature of axions and pions [2], that may also lead to dark analogs of pions and their molecules. In any case, the long-term resonance in our scenario suggests consideration of DM as multicomponent environment due to the participation of composites. The presence of composites affects BEC DM properties and stimulates a detailed study of aspects of the composites formation. Moreover, the dependence of the pair scattering length on interactions gives us a theoretical possibility to explain a vanishingly small \(a\) observed in the BEC models with two-particle interaction [32; 25; 33], as well as its variation in the DM halo of dwarf galaxies. Indeed, this can be done on the basis of Eq. (80). For the sake of correctness, it requires specifying the interaction parameters \(V\) and \(\omega^{2}\). Perhaps, the gravity also plays a certain role, which we explicitly do not take into account here when studying the Feshbach resonance. ## V Conclusion In an attempt to describe the axionlike DM, we have involved a periodic chiral self-interaction that initially possesses \(U(1)\) symmetry, which is broken in the massive system due to including Newtonian gravity. While the expected two different phases in a self-gravitating BEC DM are mostly revealed by a jump in the particle density, the desired multispecies matter, according to the current view, can be formed by both axionlike particles and their composites created in the processes that are similar to those in high energy (particle) physics. These phenomena can be helpful for identifying DM particles. We develop the effects within the quantum mechanics, finding both the BEC wave function and the wave functions of both the composite (dimer) and its constituents. This means that we stay aside the quantized fields with a variable number of particles, appealing to the quantum mechanical formation and decay of DM dimers. Since the Gamow's tunneling seems unsuitable even for describing dimer decay, the Feshbach resonance theory appears to be relevant. Typically, the Feshbach resonance requires different scattering channels with their own interaction potentials, so one may be faced with assigning internal degrees of freedom to DM particles, the configuration of which determines each channel. Ignoring this issue as for now, we only focus on the problem with additional model potentials. This way, in particular, may be sufficient to describe the spontaneous creation and decay of compound particles in nuclear physics [45]. On the other hand, the detection of Feshbach dimers in atomic BECs in the laboratory [42; 62] made it possible to trace the resonance mechanism in detail, motivating us to apply it to the axionlike DM model and outline its implementation. An important property of the Feshbach resonance is the increase of the scattering length to infinity, which coincides with the condition for the creation of composites, disclosed by the use of effective models [34]. Thus, in our theoretical study we combine the models of axionlike particles, including the sine-Gordon equation with its soliton solutions, and the Feshbach resonance concept. We focused on the quantum-mechanical formation of axion dimers in scattering processes and discussed in this regard the multicomponent DM consisting of axions and their composites, which were previously predicted in the qualitative study in Ref. [34]. 
Besides, having obtained the crucially important scattering length dependence on the interaction parameters, we got an idea of how to ensure its very small but different values when describing the DM halo of various (dwarf) galaxies on the basis of the Gross-Pitaevskii-Poisson equations with pair interaction [34; 25; 17]. Using the cosinelike self-interaction derived for QCD axions [2; 27], we formulated in Sec. II the gravitating BEC DM model based on the modified Gross-Pitaevskii-Poisson equations (11)-(12) at zero temperature. This type of nonlinearity generalizes the polynomial interactions that were used in the preceding models and that appear as its truncations [32; 25; 17; 33]. On the other hand, the gained properties motivated us, first, to look for distinct phases of the axionlike BEC DM. Indeed, the existence of both rarefied or dilute (gaseous) and dense (liquidlike) phases immediately results from two independent solutions to the Gross-Pitaevskii equation even without developing a statistical description. However, the first-order phase transition in this model, the dynamics of which has yet to be detailed, would not be independently controlled by the strengths of pair and three-particle interactions as previously. This situation is in contrast with that considered in Ref. [33]. In any case, the phase transition between states must be stimulated by pressure/compression induced by long-wavelength quantum fluctuations [32]. The predominance of one or another phase in the DM halo of a certain galaxy can be inferred from the characteristics determined by fitting rotation curves and other observables. For instance, the DM gaseous (dilute) phase dominates in galaxy M81dwB, according to Ref. [33]. Regarding composites of DM, we note that the dense BEC phase is unfavored for composites because of their probable destruction caused by frequent collisions, as shown in [34]. On the other hand, for their appearance in a rarefied phase, a large scattering length is needed and has to be justified. Although an interaction potential with nonzero asymptotics often leads to an extremely large scattering length in the Born approximation [59], the scattering length also diverges due to a zero-energy resonance, when the scattered particle goes into a bound state. Choosing the latter option, the bound state associated with a composite of at least two particles must be characterized by an isolated (quasi-discrete) level of positive energy and a finite lifetime. This is dictated, in particular, by the Feshbach resonance concept [43; 44; 45]. Intending to get more analytical results, we turn to the one-dimensional case. Then, focusing on the problem of a few interacting axions in the ground state, the three-dimensional Gross-Pitaevskii equation reduces to the stationary sine-Gordon equation with its antikink solution as in Sec. III. Comparing Fig. 2 and Fig. 3, we see that the effective potential \(W\) in Fig. 3 basically inherits the behavior of the gravitation-modified potentials for the two phases in Fig. 2(b), while the antikink solution mimics the DM distribution in the DM halo core. The discrepancy between the particle energies on the right-hand sides of Eq. (17) and Eq. (23) means that the distribution profile in Fig. 3 is formed by axions in the state of zero energy. Therefore, we are faced with the need to explain the appearance of axions with zero energy in a one-dimensional trap \(W\), despite the presence of a domain wall.
To resolve this problem for at least two particles, we use two scattering channels: closed and open. A closed channel is represented by a bound state induced by the potential \(W\) with asymptotics \(W\to 1\). An open channel implies elastic and asymptotically free scattering with a tiny positive energy. Let a particle perform transit between the channels coupled by an external impact. Given both the scattering potential (34) in an open channel and the coupling (33), the two-channel quantum-mechanical problem is formulated in Sec. IV. Though the square-well potentials are used therein, another form of them is also allowed. Then, with a certain adjustment of the parameters of extra interactions, an intermediate level appears, called the "dressed" state, which enables it to overcome the initial energy gap \(Q>0\) between the two channels. In a sense, such a level appearance is similar to the result of splitting, within a degeneracy problem under the action of perturbation. The most significant processes take place in the resonance zone, which is a finite region of space bounded by a common radius of interaction \(L\) (in dimensionless units) for all potentials. To infer the information about processes far from the resonance zone, we resort to scattering theory and, thereby, extract data from the phase shift \(\delta\) of the wave function of outgoing particle in an open channel, after leaving the resonance zone. Although the basic scattering characteristics in one and three dimensions do differ, we relate the scattering length \(a\) with \(\delta\) by Eq. (41) as usual [63]. The analytical solution (69)-(72) of the two-channel problem is obtained in the first approximation, by taking into account potential scattering in an open channel with square well (34). This also comprises the characteristics of the dressed state that occurs when a particle hops between channels with close energies. Possessing positive energy, the dressed state has a finite lifetime and the resonance property to decay. If we imagined two interacting particles, one of which is pinned at the origin, then the dressed state would be seen as a compound system or an excited dimer. Besides, one justifies a nonzero decay width at zero energy due to the dependence of dressed state characteristics (57) on the magnitude \(V\) of the potential (34). At zero energy, we consider the Feshbach phenomenon to reveal a bound state by the divergence of the scattering length \(a\) at certain (critical) value of the external influence. Parametrizing the external interaction (33) by \(\omega^{2}\), the critical value \(\omega_{c}^{2}\) is determined by \(\sqrt{Q}\) in Eq. (77). That means that the scattering length \(a(\omega^{2})\) behaves as \(a(\omega^{2}_{c}\pm\epsilon^{2})\to\pm\infty\) at \(\epsilon\to 0\), that is shown in Fig. 6, and confirms the existence of a bound (dressed) state. This is valid in both one and three dimensions, although the divergent and zero scattering lengths have quite opposite effects on reflectance and transparency in one and three dimensions [60]. Note also the similarity of this phenomenon with that for alkali atoms in the laboratory, when \(\omega^{2}\) is replaced by a magnetic field \(B\). Thus, one can expect the formation of shallow dimers with binding energy \(E_{\rm bind}=-1/a^{2}\) in dimensionless units at large scattering length \(a\gg L\). 
On the other hand, having obtained the dependence of \(a\) on the interaction parameters, one could reproduce the vanishingly small values of \(a\) that take place in the BEC DM models with pair interaction [17; 25; 34]. Although we provide formulas for this, a detailed analysis is omitted. To complete the study, the resonance scattering at nonzero energy is considered, and we find the complex value of the resonant energy \(E_{\rm res}=E_{0}-{\rm i}\,\Gamma_{0}/2\) as a pole of the scattering matrix for some fixed values of the parameters: \(L=5\), \(\sqrt{V}=0.2\), and \(\omega^{2}=\omega_{c}^{2}\) at \(Q=0.15\). One gets \(E_{0}>0\) and \(\Gamma_{0}>0\), in contrast to the typical bound state with \(E_{0}<0\) and \(\Gamma_{0}=0\). We conclude that an incident particle with energy \(E=Q\) and momentum \(k=\sqrt{Q}\) participates in the resonance in Fig. 7, because \(E_{0}-\Gamma_{0}/2\leq Q\leq E_{0}+\Gamma_{0}/2\). To estimate the dimer lifetime \(t=2\tau_{0}/\Gamma_{0}\), we use the time scale \(\tau_{0}=mr_{0}^{2}/\hbar\) for nonrelativistic axions with mass \(m\simeq 10^{-22}\,{\rm eV}/c^{2}\). Substituting the scale for the DM halo core \(r_{0}\simeq 0.138\,{\rm kpc}\) found in Sec. II, jointly with the numerically obtained value \(\Gamma_{0}\simeq 0.148\), _we get the encouraging value of the lifetime, namely \(t\simeq 1.3\times 10^{7}\,{\rm yrs}\)_. This may be sufficient for dimers to participate in forming large DM structures. However, the fate of dimers depends on the potentials used, which can be of gravitational, stochastic, and even (dark) electromagnetic (due to the desired Primakoff effect [30]) nature. A reasonable justification for the extra scattering channel(s) requires the existence of internal degrees of freedom in DM particles, allowing them to interact differently. In this way, two channels in laboratory experiments with atoms result from energetically different spin configurations in an applied magnetic field. Meanwhile, in the case of DM particles, the internal degrees of freedom may be isospin or something else. Although in DM-dominated galaxies the scattering length takes on different values [25], caused in our model by the effect of additional interactions with situationally distinct characteristics, its extremely large value, leading to the formation of dimers, does require fine tuning of certain conditions. Such tuning probably does not occur in all galaxies and appears to be spontaneous. However, the emergence conditions for the (unitary) regime of infinite scattering length may be fulfilled during the formation of galaxies along with changes in the parameters of extra interactions (similar to a magnetic field change in the laboratory). In addition, we would like to note the need to take into account long-lived dimers in the formation of multicomponent DM halos, which ensures BEC stability according to the results of Ref. [64]. Clearly, these issues require further study, as does the production of dimers and other composites in the environment [65; 66]. ###### Acknowledgements. Both authors acknowledge support from the National Academy of Sciences of Ukraine by its priority project "Properties of the matter at high energies and in galaxies during the epoch of the reionization of the Universe" No. 0123U102248.
2309.13083
Graph-Based Permutation Patterns for the Analysis of Task-Related fMRI Signals on DTI Networks in Mild Cognitive Impairment
Permutation Entropy ($PE$) is a powerful nonlinear analysis technique for univariate time series. Recently, Permutation Entropy for Graph signals ($PEG$) has been proposed to extend PE to data residing on irregular domains. However, $PEG$ is limited as it provides a single value to characterise a whole graph signal. Here, we introduce a novel approach to evaluate graph signals \emph{at the vertex level}: graph-based permutation patterns. Synthetic datasets show the efficacy of our method. We reveal that dynamics in graph signals, undetectable with $PEG$, can be discerned using our graph-based patterns. These are then validated in DTI and fMRI data acquired during a working memory task in mild cognitive impairment, where we explore functional brain signals on structural white matter networks. Our findings suggest that graph-based permutation patterns in individual brain regions change as the disease progresses, demonstrating potential as a method of analyzing graph signals at a granular scale.
John Stewart Fabila-Carrasco, Avalon Campbell-Cousins, Mario A. Parra-Rodriguez, Javier Escudero
2023-09-21T19:53:53Z
http://arxiv.org/abs/2309.13083v2
Graph-based permutation patterns for the analysis of task-related fMRI signals on DTI networks in mild cognitive impairment ###### Abstract Permutation Entropy (\(\mathrm{PE}\)) is a powerful nonlinear analysis technique for univariate time series. Very recently, Permutation Entropy for Graph signals (\(\mathrm{PE}_{\mathrm{G}}\)) has been proposed to extend \(\mathrm{PE}\) to data residing on irregular domains. However, \(\mathrm{PE}_{\mathrm{G}}\) is limited as it provides a single value to characterise a whole graph signal. Here, we introduce a novel approach to evaluate graph signals at the vertex level: graph-based permutation patterns. Synthetic datasets show the efficacy of our method. We reveal that dynamics in graph signals, undetectable with \(\mathrm{PE}_{\mathrm{G}}\), can be discerned using our graph-based permutation patterns. These are then validated in the analysis of DTI and fMRI data acquired during a working memory task in mild cognitive impairment, where we explore functional brain signals on structural white matter networks. Our findings suggest that graph-based permutation patterns change in individual brain regions as the disease progresses. Thus, graph-based permutation patterns offer promise by enabling the granular scale analysis of graph signals. John S. Fabila-Carrasco\({}^{\ast\dagger}\), Avalon Campbell-Cousins\({}^{\ast\dagger}\), Mario A. Parra-Rodriguez\({}^{+}\), Javier Escudero\({}^{\dagger}\)\({}^{\dagger}\) School of Engineering, IDCOM, University of Edinburgh, UK \({}^{+}\) Department of Psychological Sciences and Health, University of Strathclyde, UK Graph signals, Permutation entropy, Graph topology, Permutation patterns, Neuroimaging. ## 1 Introduction Entropy-based nonlinear analysis techniques have become particularly valuable for analysing noisy or short time series related to complex systems [2, 20]. These methods offer insights into signal irregularity, revealing effects such as financial crisis in time series [25] and anomalies in mechanical and physiological systems [1]. Among them, permutation entropy (\(\mathrm{PE}\)) is noted for its robustness to noise, fast calculation, and sound statistical properties [7]. Building on Shannon's entropy, \(\mathrm{PE}\) quantifies the distribution of "permutation patterns" in time series [4]. Such patterns have broad applications, from biomedical to finance data [21]. Advanced quantifiers further refine time series analysis, like 'forbidden patterns' in finance [27] and weighted differences of pattern likelihoods for univariate brain signals [3]. While \(\mathrm{PE}\) is powerful, its univariate focus is a limitation. A multivariate \(\mathrm{PE}\) version exists but it dilutes individual channel characteristics [13]. More recently, a 2D version of \(\mathrm{PE}\) has been proposed for images [14]. Graph signals offer a novel avenue for data analysis on irregular domains [16]. The framework of graph signals is highly relevant for a wide variety of settings, such as weather patterns or vehicular traffic [15]. One particularly relevant example is neuroimaging, where brain activity can naturally be seen as a graph signal measured over a brain network [11]. In this context, we have recently introduced \(\mathrm{PE}_{\mathrm{G}}\) for graph signals [8], extending permutation entropy to irregularly sampled data. Permutation patterns have received considerable attention recently due to their useful properties in univariate time series, and their study has very recently been extended to 2D formulations [5]. 
However, they remain unexplored for graph signals. Our contributions are: * The first definition of permutation patterns for graph signals as a way to characterise them at granular level. * The extension to the setting of graph signals of recently introduced contrasts \(\alpha\) and \(\beta\) that account for relationships in permutation patterns. * The study of the behaviour of \(\alpha\) and \(\beta\) for synthetic benchmarks of graph signals. * The illustration of graph permutation patterns to characterise local changes in neuroimaging datasets in mild cognitive impairment, a prodromal phase of Alzheimer's disease. ## 2 Graph-based permutation patterns This section presents the notation, basics of graph-based permutation patterns, and the definitions of contrasts \(\alpha\) and \(\beta\). ### Notation Let \(G=(\mathcal{V},\mathcal{E},\mathbf{A})\) represent a _simple undirected graph_ with vertex set \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\) and edge set \(\mathcal{E}\) defined as \(\mathcal{E}\subset\{(v_{i},v_{j})|v_{i},v_{j}\in\mathcal{V}\}\). This graph excludes isolated vertices, multiple edges, and self-loops. The adjacency matrix \(\mathbf{A}\) is an \(N\times N\) symmetric matrix with \(\mathbf{A}_{ij}=1\) if an edge connects \(v_{i}\) and \(v_{j}\), and \(\mathbf{A}_{ij}=0\) otherwise. A _graph signal_\(\mathbf{X}\) maps \(\mathcal{V}\rightarrow\mathbb{R}\) and is depicted as a column vector \(\mathbf{X}=[x_{1},x_{2},\ldots,x_{N}]^{T}\), where the indices correspond to \(\mathcal{V}\). A _permutation_\(\pi\) is a bijection \(\pi:\mathbb{N}_{m}\rightarrow\mathbb{N}_{m}\) with \(\mathbb{N}_{m}=\{1,2,\ldots,m\}\). Using shorthand, \(\pi_{k}\) stands for \(\pi(k)\) for each \(k\in\mathbb{N}_{m}\), and the permutation is expressed as \(\pi=\pi_{1}\pi_{2}\ldots\pi_{m}\). The complete set of permutation patterns is denoted by \(\Pi\). For instance, the notation \(\pi=321\) implies \(\pi(1)=3\), \(\pi(2)=2\), and \(\pi(3)=1\). Given a vector \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{m})\in\mathbb{R}^{m}\), \(\mathbf{x}\) is said to exhibit the pattern \(\pi\) if \(\pi_{i}<\pi_{j}\) is true if and only if \(x_{i}<x_{j}\). Lastly, \(|.|\) denotes cardinality. ### Graph-based permutation patterns Let \(\mathbf{X}\) be a graph signal defined on \(G\), \(2\leq m\in\mathbb{N}\) be the _embedding dimension_. The graph-based permutation patterns are defined as follows: 1. The _embedding matrix_\(\mathbf{Y}\in\mathbb{R}^{N\times m}\) is given by \(\mathbf{Y}=[\mathbf{y}_{0},\mathbf{y}_{1},\cdots,\mathbf{y}_{m-1}]\), defined by \[\mathbf{y}_{k}=D^{k}\mathbf{A}^{k}\mathbf{X}\in\mathbb{R}^{N\times 1}\;,\quad k =0,1,\ldots,m-1\;,\] where \(D^{k}\) is the diagonal matrix \(D^{k}_{ii}=1/\sum_{j=1}^{N}(\mathbf{A}^{k})_{ij}\). 2. _Graph-based permutation patterns._ Each vertex of the graph is assigned an _embedding vector_ and mapped to a unique permutation pattern. Formally, the _embedding vectors_ consist of \(m\) numbers corresponding to each row of the matrix \(\mathbf{Y}\), i.e., \(\operatorname{row}_{i}(\mathbf{Y})=(y_{ij})_{j=1}^{m}\) for \(i=1,2,\ldots,N\). Each embedding vector (one for each vertex of the graph) is uniquely mapped to a permutation pattern, i.e., \(v_{i}\rightarrow\operatorname{row}_{i}(\mathbf{Y})\rightarrow\pi\in\Pi\). 3. 
_Relative frequencies._ For each permutation pattern \(\pi\in\Pi\), its relative frequency, \(\rho\left(\pi\right)\in[0,1]\), is obtained as: \[\rho\left(\pi\right)=\frac{|\{v_{i}\mid v_{i}\in\mathcal{V}\text{ and }v_{i}\text{ has type }\pi\}|}{N}\;.\] ### Permutation patterns and contrast for length \(3\) Here, we focus on the case of \(m=3\) (illustrated in Fig. 1), which is well-studied in univariate time series [3]. Increased pattern lengths make statistical estimates of pattern frequencies less accurate and their interpretation increasingly challenging [3]. The turning rate, denoted as \(\alpha\), quantifies the prevalence of turning points relative to monotonically increasing or decreasing segments within a time series. The up-down balance, denoted as \(\beta\), distinguishes upward and downward patterns [3]. These are traditionally defined as: \[\alpha = \rho(132)+\rho(213)+\rho(231)+\rho(312)\;;\] \[\beta = \rho(123)-\rho(321)\;.\] As detailed in Sec. 2.2, we can expand the traditional definitions of \(\alpha\) and \(\beta\) beyond their original scope as presented in [3]. Notably, when the graph is a directed path, the signal contrast on this graph matches that of a univariate time series. However, our graph-based approach allows us to craft more encompassing contrasts, integrating both the graph's topology and its embedded data. A critical nuance of our methodology is its ability to assign a distinct pattern to each sample. This contrasts with time series permutation patterns. This granularity affords deeper insights into graph signals, enabling precise characterization of each data point. ## 3 Benchmarking on synthetic data **\(\operatorname{MIX}\) Processing.** In a _Random Geometric Graph_ \((\operatorname{RGG})\), every vertex \(v_{i}\in\mathcal{V}\) is assigned a random 2D coordinate \(\mathbf{z}_{i}\in[0,1]^{2}\). Two vertices, \(v_{i}\) and \(v_{j}\), are connected if the distance between their coordinates is \(\leq r\). The graph signal \(\operatorname{MIX}_{G}\) transforms \(\operatorname{MIX}_{\mathbb{R}^{2}}\) to the graph domain as: \[\operatorname{MIX}_{G}(v_{i})=\left((1-R)S+RW\right)(\mathbf{z}_{i})\quad \text{for}\quad 1\leq i\leq N\;.\] Here, \(R\) is a random variable with a probability \(p\) of being 1 and \(1-p\) of being 0, \(W\) represents uniformly distributed white noise, and \(S(\mathbf{z}_{i})=\sin(2\pi fz_{i}^{1})+\sin(2\pi fz_{i}^{2})\). For further analysis, readers are referred to [9]. **Permutation Entropy Analysis.** Our investigation centered on discerning the irregularities of the \(\operatorname{MIX}_{G}\) graph signal, especially those affected by the variations in parameters \(f\) and \(p\). Throughout this process, the \(\operatorname{RGG}\) parameters remained constant at \(N=1500\) and \(r=0.06\), and the study spanned 20 realizations. The initial step involved computing the Shannon entropy, termed \(\operatorname{PE}_{\mathrm{G}}\), for the signal distributions. The resulting entropy mean and standard deviation (std), evaluated across different \(\operatorname{MIX}\) process frequencies, are presented in Fig. 2. Our observations indicate that relying purely on the permutation entropy distribution falls short in delivering clear insights. Specifically, this method does not effectively track the signal dynamics amid rising noise or shifting frequency.
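To make the construction of Sec. 2.2 and the \(\operatorname{MIX}_{G}\) benchmark concrete before turning to the pattern-based results, the following self-contained Python sketch (our illustration) builds a small random geometric graph, generates a \(\operatorname{MIX}_{G}\) signal, assigns a length-3 permutation pattern to every vertex, and computes the contrasts \(\alpha\) and \(\beta\). The graph size and the noise amplitude are illustrative choices and differ from the \(N=1500\) experiments reported here; ties in the embedding vectors (a measure-zero event for continuous signals) are broken by index.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# --- Random geometric graph on [0, 1]^2 (illustrative size) ---
N, r = 300, 0.1
Z = rng.random((N, 2))
dist = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
A = ((dist <= r) & (dist > 0)).astype(float)            # adjacency matrix

# --- MIX_G signal: sinusoidal field replaced by noise with probability p ---
f, p = 3.0, 0.1
S = np.sin(2 * np.pi * f * Z[:, 0]) + np.sin(2 * np.pi * f * Z[:, 1])
W = rng.uniform(-2.0, 2.0, N)                           # assumed noise amplitude
X = np.where(rng.random(N) < p, W, S)

# --- Embedding matrix Y = [X, D^1 A^1 X, D^2 A^2 X] for m = 3 (Sec. 2.2) ---
A2 = A @ A
d1 = np.maximum(A.sum(axis=1), 1.0)                     # guard isolated vertices
d2 = np.maximum(A2.sum(axis=1), 1.0)
Y = np.column_stack([X, (A @ X) / d1, (A2 @ X) / d2])

# --- Map each vertex's embedding vector to one of the 3! patterns ---
perms = list(itertools.permutations((1, 2, 3)))          # 123, 132, 213, ...
ranks = Y.argsort(axis=1).argsort(axis=1) + 1            # per-vertex rank vector
labels = np.array([perms.index(tuple(int(v) for v in row)) for row in ranks])
rho = np.bincount(labels, minlength=6) / N               # relative frequencies

# --- Contrasts of Sec. 2.3: turning rate alpha and up-down balance beta ---
freq = dict(zip(perms, rho))
alpha = freq[(1, 3, 2)] + freq[(2, 1, 3)] + freq[(2, 3, 1)] + freq[(3, 1, 2)]
beta = freq[(1, 2, 3)] - freq[(3, 2, 1)]
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```

Re-running the sketch with \(p=0\) and with larger \(p\) should mirror the trends of Fig. 3(a).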
**Permutation Pattern Analysis.** Using our graph-based permutation pattern analysis, we present an exploration of the \(\mathrm{MIX}\) process.

Figure 1: The six permutation patterns for \(m=3\).

**Baseline Behavior at \(p=0\):** At this level, the \(\mathrm{MIX}\) signal naturally shows periodic tendencies. When the frequency decreases, \(\alpha\) drops due to fewer local extrema. Conversely, \(\beta\) increases, highlighting the presence of dominant global peaks. **Effects of Noise and \(p\):** An increase in noise or in the \(p\) parameter leads to a rise in \(\alpha\). Notably, a slight change in \(p\) from \(0\) to \(0.1\) causes a significant increase in \(\alpha\). This underscores the \(\mathrm{MIX}\) process's sensitivity to small changes, with the trend stabilizing for higher \(p\) values, as illustrated in Fig. 3(a). **Frequency Relationship:** For a constant \(p\), \(\alpha\) varies inversely with frequency: a decrease in frequency leads to a heightened \(\alpha\) due to an increase in local extrema, while higher frequencies result in fewer local extrema, leading to a reduced \(\alpha\). **The \(\beta\)-\(\alpha\) Contrast:** \(\alpha\) and \(\beta\) display inverse behaviors. An elevated \(\alpha\) signifies signals characterized by slower oscillations with frequent local extrema. Conversely, a higher \(\beta\) suggests fewer local extrema, implying a signal with more rapid oscillations, dominated by global peaks and troughs, as depicted in Fig. 3(b). ## 4 Real-World Illustration in MCI Dementia currently affects over 50 million people worldwide, a figure expected to triple by 2050 [22]. Alzheimer's disease (AD) is the main cause of dementia and places immense emotional and financial strain on families and healthcare services. Its early stages are often categorized as Mild Cognitive Impairment (MCI), which often progresses (within 4 years) to the dementia stage of AD [22]. To understand this progression, we explore a novel MRI model of AD and its potential use in characterizing the stages of the disease. ### Participants and task Participants from the longitudinal study [18] were assessed with a battery of neuropsychological tests commonly used to assess dementia, grouping subjects into early Mild Cognitive Impairment (eMCI), MCI, and Alzheimer's disease converters after a 2-year follow-up (MCIc) [18]. From these, 8 healthy controls (Age: \(76.50\pm 5.21\), Sex: 2M; 6F), 7 eMCI (Age: \(76.86\pm 6.41\), Sex: 4M; 3F), 10 MCI (Age: \(72.30\pm 5.64\), Sex: 5M; 5F), and 6 MCIc subjects (Age: \(76.33\pm 5.09\), Sex: 4M; 2F) were selected to undergo DTI and fMRI acquisition, during which they performed a Visual Short-Term Memory Binding Task (VSTMBT). The VSTMBT [17] is a task sensitive to memory-related changes in early-stage AD. Participants were presented non-nameable coloured shapes on a screen for 2s (encoding). They must hold this information in memory while a blank screen is shown for a variable duration of 2, 4, 6, or 8s (maintenance). Then, they are presented the same or a different set of associations of shapes and colours for 4s. The participants must determine whether they are the same or different (probe), followed by an inter-trial interval before repetition. In this study, we focus on the encoding phase of the task to assess the formation of memories in healthy and diseased groups. ### Graph and signal construction fMRI data was collected with a GE Signa Horizon HDxt 1.5T clinical scanner.
During the VSTMBT, contiguous interleaved axial gradient EPI were collected alongside the intercommissural plane throughout two continuous runs (TR/TE \(=2000/40\)ms; matrix \(=64\times 64\); \(\text{fov}=24\)cm; thickness \(=5\)mm; \(\text{gap}=0\)mm). Outlier detection, realignment, slice-timing correction, co-registration of the structural and functional (\(T_{1}\)) images to the MNI space, segmentation, and normalization were performed with SPM 12. ROIs for each subject are defined using an 85-region atlas, detailed below. For each ROI, the mean signal is acquired across the voxels in that region and highpass filtered (0.06Hz) to avoid fMRI signal drift. For Diffusion MRI, \(3\) \(T_{2}\)-weighted (\(b=0\)s mm\({}^{-2}\)) and sets of diffusion-weighted (\(b=1000\)s mm\({}^{-2}\)) single-shot spin-echo-planar (EP) volumes were acquired with diffusion gradients applied in 32 non-collinear directions. Subsequent volumes were in the axial plane (\(\text{fov}=240\times 240\); matrix \(=128\times 128\); thickness \(=2.5\)mm), giving voxel dimensions of \(1.875\times 1.875\times 2.5\)mm.

Figure 3: Graph-based contrasts for the \(\mathrm{MIX}\) process.

Figure 2: Mean and std of values of \(\mathrm{PE_{G}}\) for a consistent graph across increasing noise levels and varying frequencies.

A \(T_{1}\)-weighted volume was also acquired with 1.3 mm\({}^{3}\) voxel dimensions. This volume was parcellated into 85 ROIs with the Desikan-Killiany atlas combined with additional regions acquired via sub-cortical segmentation detailed in [6], and the brain-stem using FreeSurfer. Standard pre-processing was applied following [6]. Of note, the DTI network weights were determined by the streamline density between regions, corrected for ROI size. ### Results We calculate the graph-based patterns as per Section 2.2 with \(m=3\). The graph is the subject's SD-weighted DTI network and the signal _at each node_ is the mean signal across the encoding phases of the task, yielding a pattern at each node. In the healthy brain networks, we observe the existence of dominant patterns in some clusters, such as patterns 5&6 in ROIs 1-18 and patterns 1&2 in ROIs 75-81 (see Fig. 4(a)), suggesting that there may be some identifying patterns associated with the encoding phase of the VSTMBT. (Here we refer to patterns #1 to #6 following the order in Fig. 1.) To determine whether patterns change with disease, we perform a chi-squared analysis comparing the per-node patterns between each pair of control and disease groups. Furthermore, we assess the stability of the resulting \(p\)-value by permuting the control and disease groups 1000 times to calculate how often our original \(p\)-value (\(p\)) is smaller than that of the randomly permuted groups (\(p^{\prime}\)). Due to the limited sample size, we took a conservative approach and report only regions where both \(p,p^{\prime}\leq 0.05\). We find that, as the disease progresses (Table 1), the number of regions exhibiting significant changes in pattern increases, as expected. Most interestingly, we see an increasing change in the orbitofrontal cortex as the disease progresses, both through the decreasing \(p\)-value of the Right lateral orbitofrontal region and through the appearance of the medial orbitofrontal region at the later stage of the disease. The orbitofrontal cortex is a region vulnerable to early deposition of amyloid plaques, a key bio-marker in AD progression [23, 19].
Similarly, damage to the entorhinal, paracentral, frontal, and hippocampal structures is another early indicator of AD in studies of amyloid deposition and structural and functional MRI [10, 12, 26]. Additionally, we look at pattern frequency changes between groups at granular scale. Namely, for each subject group we identify the most dominant pattern per node that appears in at least half of the subjects. This is visualized in Fig. 5. Here, nodes in orange are those that have changed pattern, blue indicates no change, black marks nodes with no definitive pattern within the control group, and labelled nodes are those from Table 1. ## 5 Conclusion and Future Work We have extended the analysis of permutation patterns to graph signals, providing a novel lens to view and analyze such data at granular scale. Our findings indicate that the turning rate \(\alpha\) serves as an effective summary quantity for graph-based pattern analysis. Furthermore, we identify the potential use of graph-based permutation patterns for multi-modal MRI data of MCI. Though limited by sample size, our results motivate larger studies of graph-based permutation patterns on other real-world data such as MRI-based brain graph signals.

\begin{table}
\begin{tabular}{l l l l}
**Control vs.** & **ROIs** & \(p\)**-value** & \(p<p^{\prime}\) \\ \hline
**eMCI** & Right-lateralorbitofrontal & 0.019 & 0.009 \\ \hline
**MCI** & Right-entorhinal & 0.015 & 0.002 \\
 & Right-lateralorbitofrontal & 0.020 & 0.027 \\
 & Right-parahippocampal & 0.010 & 0 \\ \hline
**MCIc** & Left-hippocampus & 0.049 & 0.050 \\
 & Left-caudalmiddlefrontal & 0.036 & 0.033 \\
 & Left-medialorbitofrontal & 0.031 & 0.008 \\
 & Right-lateralorbitofrontal & 0.005 & 0 \\
 & Right-paracentral & 0.049 & 0.021 \\
\end{tabular}
\end{table}

Table 1: Statistical tests to find regions with significant differences in the distribution of graph-based permutation patterns between controls and different stages of MCI.

Figure 4: (a) visualizes the distribution of patterns (rows) across subjects (columns). In (b), patterns for a node were based on the mode of the distribution of patterns for the healthy group, but only when that pattern appeared in at least half of the subjects (black otherwise). (b) was generated with the BrainNet viewer tool [24].

Figure 5: Changes in pattern between the healthy and disease groups. Note that only \(2\%\) of DTI edges are drawn for clarity. Generated with the BrainNet viewer tool [24].
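For reproducibility, here is a minimal sketch (ours) of the per-node statistical procedure used above: a chi-squared comparison of pattern counts between two groups, followed by a label-permutation stability check. The inputs are synthetic stand-ins; only the group sizes follow the study, and the permutation step follows the standard convention of counting how often a permuted-label \(p\)-value falls at or below the observed one. With samples this small, the chi-squared approximation is rough, which is why the permutation check matters.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

def node_pvalue(g1, g2):
    """Chi-squared p-value comparing the 6-pattern counts of two groups.

    g1, g2 : integer arrays of pattern labels (0..5), one entry per subject;
    both groups are assumed non-empty.
    """
    counts = np.vstack([np.bincount(g1, minlength=6),
                        np.bincount(g2, minlength=6)])
    counts = counts[:, counts.sum(axis=0) > 0]     # drop unobserved patterns
    return chi2_contingency(counts)[1]

def stability(g1, g2, n_perm=1000):
    """Observed p and fraction p' of label permutations with p-value <= p."""
    p_obs = node_pvalue(g1, g2)
    pooled, n1 = np.concatenate([g1, g2]), len(g1)
    hits = sum(node_pvalue(perm[:n1], perm[n1:]) <= p_obs
               for perm in (rng.permutation(pooled) for _ in range(n_perm)))
    return p_obs, hits / n_perm

# Illustrative use: pattern labels at one node, 8 controls vs 10 MCI subjects.
ctrl = rng.integers(0, 6, size=8)
mci = rng.integers(0, 6, size=10)
print(stability(ctrl, mci, n_perm=200))
```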
2309.07790
Mass transfer and boson cloud depletion in a binary black hole system
The ultralight boson is one of the potential candidates for dark matter. If it exists, it can be generated by a rapidly rotating black hole via superradiance, extracting the energy and angular momentum of the black hole and forming a boson cloud. The boson cloud can be affected by the presence of a companion star, generating fruitful dynamical effects and producing characteristic gravitational wave signals. We study the dynamics of the boson cloud in a binary black hole system; in particular, we develop a framework to study the mass transfer between two black holes. It is found that bosons occupying the growing modes of the central black hole can jump to the decaying modes of the companion black hole, resulting in cloud depletion. This mechanism of cloud depletion is different from that induced by the resonant perturbation from the companion.
Yao Guo, Wenjie Zhong, Yiqiu Ma, Daiqin Su
2023-09-14T15:25:43Z
http://arxiv.org/abs/2309.07790v1
# Mass transfer and boson cloud depletion in a binary black hole system ###### Abstract The ultralight boson is one of the potential candidates for dark matter. If it exists, it can be generated by a rapidly rotating black hole via superradiance, extracting the energy and angular momentum of the black hole and forming a boson cloud. The boson cloud can be affected by the presence of a companion star, generating fruitful dynamical effects and producing characteristic gravitational wave signals. We study the dynamics of the boson cloud in a binary black hole system; in particular, we develop a framework to study the mass transfer between two black holes. It is found that bosons occupying the growing modes of the central black hole can jump to the decaying modes of the companion black hole, resulting in cloud depletion. This mechanism of cloud depletion is different from that induced by the resonant perturbation from the companion. ## I Introduction The detection of gravitational waves by current ground-based gravitational wave detectors [1] (LIGO, Virgo, etc.) opens up a new avenue to explore astrophysical processes that involve strong gravitational fields. Future space-borne gravitational wave detectors (LISA, TQ [2], etc.) extend the detection frequency range and allow for exploration of more fruitful physics, e.g., supermassive black holes. One of the interesting target sources for gravitational wave detectors is the ultralight boson, including the axion and pseudoscalar axion-like particles. The axion provides a solution to the strong CP problem [3; 4; 5] and has the potential to demystify the baryon asymmetry of the universe, while more general axion-like particles are predicted by symmetry breaking in string theory [6]. These ultralight bosons are also potential candidates for dark matter [7; 8; 9; 10; 11]. The ultralight bosons, if they exist, can be produced by a rapidly rotating black hole (BH) through superradiance instabilities [12; 13; 14; 15; 16; 17; 18; 19; 20], carrying away the mass and angular momentum of the black hole and forming a boson cloud/condensate [21]. Since the presence of an ultralight boson reduces the mass and spin of the black hole, the precision measurement of black hole mass and spin via gravitational wave detectors provides a powerful constraint on the properties of bosons [22; 23; 24]. In addition, the boson cloud can radiate continuous gravitational waves due to its asymmetric distribution and self-annihilation [21], and thus can be detected by future gravitational wave detectors. If a companion star is present, the boson cloud around a black hole is distorted by a time-dependent tidal field. Under certain conditions, the time-dependent tidal field can induce transitions between the boson's various energy levels, in particular, between the growing modes and decaying modes [25]. This would generate fruitful dynamical effects. The transition of bosons can transfer energy and angular momentum from the boson cloud to the companion star, modifying its orbital evolution. In the extreme case, the energy loss of the companion star due to the emission of gravitational waves is balanced by the energy gain from the boson cloud, thus forming the so-called "floating orbit" [26]. When the boson occupies the decaying mode, it may decay into the black hole, resulting in the depletion of the boson cloud [25; 27]. In some cases, the boson cloud could be completely cleaned up by the companion star.
The bosons absorbed by the black hole increase its mass while reducing its spin. The reduction of the black hole spin can turn some of the growing modes into decaying modes and thereby accelerate the cloud depletion [28]. The companion can also induce transitions between bound and unbound orbits of the boson, thus ionizing it [29]. These fruitful dynamical processes of the boson cloud produce characteristic gravitational wave signals and consequently can be detected by current and future gravitational wave detectors [30]. This provides a way to infer the presence of the ultralight boson and constrain its various properties. When the black hole and its companion are sufficiently close, the boson can escape to the companion, resulting in a redistribution of the mass of the boson cloud. A gravitational "molecule", an analog of the hydrogen molecule, can be formed under certain conditions [31; 32]. If the companion is also a rotating black hole, then the escaped boson may occupy the decaying modes of the companion black hole and may decay into it. If this happens, it provides a new channel for boson cloud depletion. In this work, we develop a framework to study the mass transfer between two black holes in a binary system, assuming that the two black holes have the same mass and spin, and that their spin orientation is parallel. We make an analogy with the hydrogen molecule ion and calculate the wave functions of the molecular orbits and the corresponding energy eigenvalues using the variational method. We then obtain the probability for the boson jumping to the decaying mode of the companion black hole, and show that this leads to a strong cloud depletion which almost completely cleans up the boson cloud. This paper is organized as follows. In Sec. II, we briefly review the boson cloud around a black hole and its depletion into the black hole due to the time-dependent tidal field produced by the companion. In Sec. III, by making an analogy with the hydrogen molecule ion, we use the variational method to derive the wave functions and energy eigenvalues of the boson in the binary black hole system. In Sec. IV, we use the adiabatic approximation to study the time evolution of the wave function of the boson, evaluate the probability of jumping to the decaying mode of the companion black hole, and calculate the time evolution of the cloud mass due to the decay into the companion black hole. We finally conclude in Sec. V. ## II Boson cloud around a black hole ### Gravitational atom A rapidly rotating black hole can radiate ultralight bosons via superradiance instabilities. These bosons condense in some of their orbits, forming a boson cloud around the black hole. When the mass of the boson is small, the size of the cloud can be much larger than the gravitational radius of the rotating black hole. In this limit, the Newtonian approximation is sufficient to describe the dynamics of the boson cloud. The orbits of the boson are determined by a Schrodinger-like equation, similar to that of an electron in a hydrogen atom. The eigenstates of the boson are denoted as \(|\varphi_{n\ell m}\rangle\) or \(\varphi_{n\ell m}\), and the eigenfrequencies are given by [25] \[\omega_{n\ell m}\approx\mu\bigg{(}1-\frac{\alpha^{2}}{2n^{2}}\bigg{)}, \tag{1}\] where \(\mu\) is the mass of the boson and \(\alpha\equiv GM\mu/\hbar c\) is the dimensionless "fine-structure constant".
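As a quick numerical illustration (ours) of the hydrogen-like spectrum in Eq. (1), written in units \(G=c=\hbar=1\) where \(\alpha=M\mu\):

```python
# Minimal sketch of Eq. (1) in units G = c = hbar = 1, where alpha = M * mu.
M, alpha = 1.0, 0.2          # illustrative black hole mass and coupling
mu = alpha / M               # boson mass fixed by the chosen alpha

def omega(n):
    """Real part of the eigenfrequency, Eq. (1)."""
    return mu * (1.0 - alpha**2 / (2.0 * n**2))

for n in (1, 2, 3):
    print(f"n = {n}:  omega/mu = {omega(n) / mu:.5f}")
# The level spacings scale as alpha^2 * mu, the 'Bohr' scale of the
# gravitational atom; the corresponding spatial scales follow next.
```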
The radial profile of the wave function peaks at \[r_{c,n}\approx\bigg{(}\frac{n^{2}}{\alpha^{2}}\bigg{)}r_{g}=n^{2}r_{b}, \tag{2}\] where \(r_{g}\) is the gravitational radius of the black hole, \(r_{g}\equiv GM/c^{2}\), and \(r_{b}=r_{g}/\alpha^{2}\) is the Bohr radius. However, there is a crucial difference between the electron in the hydrogen atom and the boson around a black hole: the orbits of the electron are stable, while the orbits of the boson are not, due to the presence of the black hole horizon. This is characterized by the imaginary part of the eigenfrequency, \(\omega_{n\ell m}\rightarrow\omega_{n\ell m}+i\Gamma_{n\ell m}\). In the limit \(\alpha\ll 1\), \(\Gamma_{n\ell m}\) can be approximated as [33] \[\Gamma_{n\ell m}=\frac{2r_{+}}{M}C_{n\ell m}(\alpha)(m\Omega_{\rm H}-\omega_{n\ell m})\alpha^{4\ell+5}, \tag{3}\] where \(C_{n\ell m}(\alpha)\) is positive and given by \[C_{n\ell m}(\alpha) = \frac{2^{4\ell+1}(n+\ell)!}{n^{2\ell+4}(n-\ell-1)!}\bigg{[}\frac{\ell!}{(2\ell)!(2\ell+1)!}\bigg{]}^{2} \tag{4}\] \[\times\prod_{j=1}^{\ell}\bigg{[}j^{2}(1-\tilde{a}^{2})+(\tilde{a}m-2\tilde{r}_{+}\alpha)^{2}\bigg{]},\] with \(\tilde{a}=a/M\) and \(\tilde{r}_{+}=r_{+}/M\). Here \(r_{+}\) is the size of the event horizon, \(a\) is the spin, and \(\Omega_{\rm H}\) is the angular velocity of the rotating black hole. The orbits with positive \(\Gamma_{n\ell m}\) are growing modes, for which the number of bosons grows exponentially, while the orbits with negative \(\Gamma_{n\ell m}\) are decaying modes, for which the number of bosons decays exponentially. Starting from a rapidly rotating black hole, bosons are radiated due to the superradiance instabilities and then occupy the growing modes. The radiated bosons carry away angular momentum, slowing down the rotation of the black hole. At an equilibrium point the black hole rotates slowly enough that no bosons can be further radiated, and a quasi-stationary boson cloud forms around the black hole. ### Hyperfine and Bohr resonance When the black hole with a boson cloud is part of a binary system, the gravitational field of the companion star distorts the cloud, resulting in transitions of bosons between growing modes and decaying modes. The bosons that jump to decaying modes can return to the black hole, reducing the total mass of the cloud and transferring angular momentum to the companion star. Therefore, the existence of the boson cloud would affect the orbital evolution of the companion star and the gravitational waveforms from the binary system. The companion star induces a time-dependent perturbation to the Kerr metric, which then introduces a time-dependent shift of the gravitational potential in the Schrodinger equation that governs the dynamics of the bosons. Under certain conditions, the time-dependent perturbation can induce resonant transitions between growing modes and decaying modes [25]. There are two types of resonances, the hyperfine (or Rabi) resonance and the Bohr resonance. Each resonance occurs at a specific orbital separation. For the hyperfine resonance, the orbital separation is given by \[R_{\rm res}^{(h)}=144^{1/3}\alpha^{-4}(1+q)^{1/3}\tilde{a}^{-2/3}r_{g}, \tag{5}\] while for the Bohr resonance, the orbital separation is given by \[R_{\rm res}^{(b)}=\bigg{(}\frac{144}{5}\bigg{)}^{2/3}\alpha^{-2}(1+q)^{1/3}r_{g}. \tag{6}\] In general, the orbital separation of the hyperfine resonance is much larger than that of the Bohr resonance.
## III Boson orbits in a binary black hole system

We are concerned with the dynamics of the boson cloud when a black hole and its companion are sufficiently close that mass transfer between them cannot be ignored. The process of mass transfer is generally very complicated, and numerical simulation is needed to fully uncover the cloud dynamics. It has been shown using effective field theory techniques [31] and numerical calculation [32] that a gravitational molecule can form in a binary black hole system. Here, we use a simple model to describe the mass transfer and the time evolution of a BH-cloud-companion system. The model is based on the analogy between the hydrogen molecule ion \(H_{2}^{+}\) and the BH-cloud-companion system, and it captures the main characteristics of mass transfer and cloud depletion.

For simplicity, we assume that (1) the companion is also a black hole, thus forming a binary black hole system; (2) the two black holes have the same mass and spin; (3) their spins are parallel to each other and perpendicular to the orbital plane. A schematic of the configuration is shown in Fig. 1. These assumptions allow us to focus on the effect of mass transfer, and they can be relaxed to incorporate more complicated effects. Under these assumptions, the orbits of the boson in the BH-cloud-BH system are analogous to those of the electron in the hydrogen molecule ion \(H_{2}^{+}\), except that the two black holes orbit around each other and that the existence of a horizon results in boson absorption.

### Orbits of hydrogen molecule ion

The hydrogen molecule ion \(H_{2}^{+}\) consists of two protons and a single electron. The electron moves in the potential produced by the two protons, which are separated by a fixed distance. The potential is time independent, so the electron wave functions can be derived by solving the stationary Schrodinger equation. Approximate energy eigenvalues and eigenfunctions can be obtained using the variational method. The energy eigenvalues are the stationary points of the expectation value of the Hamiltonian: to approximate an energy eigenvalue, one starts from a trial wave function and then varies the parameters in the trial wave function to find the stationary point of the expectation value. For the hydrogen molecule ion, the initial trial wave functions can be selected by using the symmetry properties of the system. For example, one can choose a linear superposition of the two ground-state wave functions as a trial function to find the ground state of the hydrogen molecule ion. The excited states can be found in a similar way. In this paper, we are mainly interested in the excited states with \(n=2\), because these are the states that dominate the evolution of the boson cloud.
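As a self-contained illustration of the variational principle just described (a textbook toy problem, not the molecular-orbit calculation of this paper), the sketch below minimizes the expectation value of the Hamiltonian for a hydrogen-like atom over a one-parameter Gaussian trial state, in atomic units:

```python
import math

def trial_energy(beta):
    """<H> for a Gaussian trial state psi ~ exp(-beta r^2) in a -1/r potential
    (atomic units): E(beta) = 3*beta/2 - 2*sqrt(2*beta/pi)."""
    return 1.5 * beta - 2.0 * math.sqrt(2.0 * beta / math.pi)

# Scan the variational parameter and locate the stationary (minimum) point.
betas = [0.005 * k for k in range(1, 400)]
best = min(betas, key=trial_energy)
print(f"stationary point beta* ~ {best:.3f}  (analytic: 8/(9*pi) = {8/(9*math.pi):.3f})")
print(f"variational E_min ~ {trial_energy(best):.4f}  (exact ground state: -0.5)")
```

The variational estimate lies above the exact ground-state energy, as guaranteed by the variational principle; a richer trial family (such as the LCAO superpositions used below) tightens the bound.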
It can be shown that there exist two molecular \(\sigma\) orbits obtained from the atomic orbit \(2p_{x}\) [34] (see Appendix B for details),
\[|\sigma\rangle=\frac{1}{\sqrt{2}N_{1}}\big(|\varphi^{1}_{2p_{x}}\rangle-|\varphi^{2}_{2p_{x}}\rangle\big)=\frac{1}{2N_{1}}\big(|\varphi^{1}_{2,1,1}\rangle+|\varphi^{1}_{2,1,-1}\rangle-|\varphi^{2}_{2,1,1}\rangle-|\varphi^{2}_{2,1,-1}\rangle\big),\]
\[|\sigma^{*}\rangle=\frac{1}{\sqrt{2}N_{2}}\big(|\varphi^{1}_{2p_{x}}\rangle+|\varphi^{2}_{2p_{x}}\rangle\big)=\frac{1}{2N_{2}}\big(|\varphi^{1}_{2,1,1}\rangle+|\varphi^{1}_{2,1,-1}\rangle+|\varphi^{2}_{2,1,1}\rangle+|\varphi^{2}_{2,1,-1}\rangle\big); \tag{7}\]
and another two molecular \(\pi\) orbits obtained from the atomic orbit \(2p_{y}\),
\[|\pi\rangle=\frac{1}{\sqrt{2}N_{3}}\big(|\varphi^{1}_{2p_{y}}\rangle+|\varphi^{2}_{2p_{y}}\rangle\big)=\frac{1}{2iN_{3}}\big(|\varphi^{1}_{2,1,1}\rangle-|\varphi^{1}_{2,1,-1}\rangle+|\varphi^{2}_{2,1,1}\rangle-|\varphi^{2}_{2,1,-1}\rangle\big),\]
\[|\pi^{*}\rangle=\frac{1}{\sqrt{2}N_{4}}\big(|\varphi^{1}_{2p_{y}}\rangle-|\varphi^{2}_{2p_{y}}\rangle\big)=\frac{1}{2iN_{4}}\big(|\varphi^{1}_{2,1,1}\rangle-|\varphi^{1}_{2,1,-1}\rangle-|\varphi^{2}_{2,1,1}\rangle+|\varphi^{2}_{2,1,-1}\rangle\big), \tag{8}\]
where \(N_{1},N_{2},N_{3}\) and \(N_{4}\) are introduced to normalize these states,
\[N_{1}=\sqrt{1-\langle\varphi_{2p_{x}}^{1}|\varphi_{2p_{x}}^{2}\rangle},\quad N_{2}=\sqrt{1+\langle\varphi_{2p_{x}}^{1}|\varphi_{2p_{x}}^{2}\rangle},\quad N_{3}=\sqrt{1+\langle\varphi_{2p_{y}}^{1}|\varphi_{2p_{y}}^{2}\rangle},\quad N_{4}=\sqrt{1-\langle\varphi_{2p_{y}}^{1}|\varphi_{2p_{y}}^{2}\rangle}. \tag{9}\]
It is evident that the \(N_{i}\) depend on the distance between the two protons. In the following, we will use the indices \(i=1,2,3,4\) to denote the four molecular orbits \(\sigma,\sigma^{*},\pi\) and \(\pi^{*}\), respectively.

Figure 1: Configuration of the BH-cloud-BH system and the coordinate system used to describe the boson. The origin of the coordinate system is located at the central black hole, the \(z\)-axis is parallel to the spin of the central black hole, and the \(x\)-axis points towards the companion black hole. Spherical coordinates \((r_{1},\theta_{1},\varphi_{1})\) are used to represent the position of the boson relative to the central black hole, and \((r_{2},\theta_{2},\varphi_{2})\) are used to denote the position of the boson relative to the companion black hole. Here \(a_{1}\) and \(a_{2}\) represent the spins of the central and companion black holes, respectively, and \(R\) denotes the orbital separation.
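In the basis \((|\varphi^{1}_{2,1,1}\rangle,|\varphi^{1}_{2,1,-1}\rangle,|\varphi^{2}_{2,1,1}\rangle,|\varphi^{2}_{2,1,-1}\rangle)\), Eqs. (7)-(9) amount to four coefficient vectors whose normalization depends only on the \(2p_{x}\) and \(2p_{y}\) overlaps. A minimal sketch (Python; the overlap values fed in at the bottom are placeholders — in practice they come from the quadratures of Appendix B — and the Gram-matrix check additionally assumes orthonormal same-centre states and mirror symmetry between the two centres):

```python
import numpy as np

def molecular_orbits(S_x, S_y):
    """Coefficient vectors of (sigma, sigma*, pi, pi*) from Eqs. (7)-(8), in the
    basis (phi1_{2,1,1}, phi1_{2,1,-1}, phi2_{2,1,1}, phi2_{2,1,-1}).
    S_x = <phi1_2px|phi2_2px> and S_y = <phi1_2py|phi2_2py> as in Eq. (9)."""
    N = np.sqrt(np.array([1 - S_x, 1 + S_x, 1 + S_y, 1 - S_y]))   # N_1..N_4
    C = np.array([[1,  1, -1, -1],     # sigma,  prefactor 1/(2 N_1)
                  [1,  1,  1,  1],     # sigma*, prefactor 1/(2 N_2)
                  [1, -1,  1, -1],     # pi,     prefactor 1/(2i N_3)
                  [1, -1, -1,  1]],    # pi*,    prefactor 1/(2i N_4)
                 dtype=complex)
    C[2:] /= 1j                        # the pi orbits carry the 1/i phase
    return C / (2.0 * N[:, None])

if __name__ == "__main__":
    S_x, S_y = 0.2, 0.1                # placeholder overlaps; see Appendix B
    C = molecular_orbits(S_x, S_y)
    # Gram matrix of the non-orthogonal atomic basis; the norms below come out
    # equal to one, confirming the N_i of Eq. (9).
    O11, O1m = 0.5 * (S_x + S_y), 0.5 * (S_x - S_y)
    G = np.eye(4, dtype=complex)
    G[0, 2] = G[2, 0] = G[1, 3] = G[3, 1] = O11
    G[0, 3] = G[3, 0] = G[1, 2] = G[2, 1] = O1m
    print(np.real(np.einsum("ik,kl,il->i", C.conj(), G, C)))   # -> [1. 1. 1. 1.]
```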
### Born-Oppenheimer approximation

We now turn to the problem of solving for the orbits of the boson in a binary black hole system in which the two black holes have the same mass and spin. This system is analogous to the hydrogen molecule ion \(H_{2}^{+}\), except that the two black holes orbit each other and their separation shrinks due to the emission of gravitational waves. The bosons therefore experience a time-dependent field rather than a static one. An important question is how to take into account the effects of rotation and orbital shrinking. In Ref. [25], the rotation of the companion star relative to the central black hole produces a time-varying tidal field that perturbs the central black hole, which then induces hyperfine and Bohr mixing of growing and decaying modes, resulting in boson cloud depletion. Since we are concerned here with the transfer of bosons between the two black holes, the framework developed in Ref. [25] does not apply. As a first approximation, we neglect the effects of rotation and orbital shrinking, and treat the two black holes as stationary with a fixed orbital separation. This is known as the Born-Oppenheimer approximation, in which the binary black hole system with a boson cloud is analogous to the hydrogen molecule ion at any given time.

Before performing the detailed calculation, we compare various timescales to validate the use of the Born-Oppenheimer approximation. The velocity of a boson can be estimated using the virial theorem and the uncertainty principle. The boson's velocity \(v_{a}\) satisfies
\[\frac{1}{2}\mu v_{a}^{2}\sim\frac{GM\mu}{r_{c}},\qquad\mu v_{a}\sim\frac{\hbar}{r_{c}}, \tag{10}\]
where \(r_{c}\) is the typical length scale of the radial profile of the boson cloud. This implies
\[v_{a}\sim\frac{GM\mu}{\hbar c}c\sim\alpha c\sim\alpha. \tag{11}\]
The relaxation timescale for the boson can be approximated by
\[\tau_{r}\sim\frac{R}{v_{a}}\sim\frac{R}{\alpha}. \tag{12}\]
The time \(\tau_{r}\) characterizes the timescale for the boson to move from one black hole to the other. The period of the binary black holes is simply
\[T=\frac{2\pi}{\Omega}=2\pi\sqrt{\frac{R^{3}}{M(1+q)}}. \tag{13}\]
Therefore the ratio between \(\tau_{r}\) and \(T\) is
\[\frac{\tau_{r}}{T}\sim\frac{\sqrt{1+q}}{2\pi}\sqrt{\frac{M}{\alpha^{2}R}}\sim\frac{\sqrt{1+q}}{2\pi}\sqrt{\frac{r_{b}}{R}}. \tag{14}\]
For \(q=1\) and \(R=32\,r_{b}\), \(\tau_{r}/T\sim 1/8\pi\sim 0.04\). As we will see in the following discussion, the mass transfer occurs when \(R\gtrsim 50\,r_{b}\). This shows that for an intermediate orbital separation, the relaxation time of the boson is much shorter than the period of the binary. Therefore, the two black holes can be treated as quasi-static when considering the transfer of bosons, and the Born-Oppenheimer approximation applies.

The orbit of the binary black holes shrinks due to the emission of gravitational waves [35],
\[R(t)=\bigg[\frac{M(1+q)}{\Omega_{0}^{2}}\bigg]^{1/3}\bigg(-\frac{t}{\tau_{0}}\bigg)^{1/4}, \tag{15}\]
where we set \(t=0\) as the moment of merger, and \(\tau_{0}\) is the time to merger for an initial orbital frequency \(\Omega_{0}\). The two are related via
\[\frac{\tau_{0}}{M(1+q)}=\frac{5}{256}\frac{(1+q)^{2}}{q}\bigg[\frac{1}{M(1+q)\Omega_{0}}\bigg]^{8/3}. \tag{16}\]
The characteristic timescale for coalescence can be estimated as
\[\tau_{p}=\bigg|\bigg(\frac{dR}{dt}\bigg)^{-1}\bigg|R\approx\frac{5}{64}\frac{R^{4}}{M^{3}}\frac{1}{q(1+q)}. \tag{17}\]
This timescale is evidently much longer than the period of the binary \(T\), and thus also much longer than the boson relaxation timescale \(\tau_{r}\):
\[\frac{\tau_{r}}{\tau_{p}}\sim\frac{32}{5}\alpha^{5}\bigg(\frac{r_{b}}{R}\bigg)^{3}q(1+q). \tag{18}\]
When the orbital separation of the two black holes is sufficiently large and \(q\) is not too large, the ratios \(\tau_{r}/T\) and \(\tau_{r}/\tau_{p}\) are both much smaller than one, so the Born-Oppenheimer approximation is sound.
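The two ratios (14) and (18) can be tabulated directly; a quick sketch in Python (\(G=c=1\), with \(R\) in units of the Bohr radius \(r_{b}=M/\alpha^{2}\), and the parameter values as in the text's example):

```python
import math

def timescale_ratios(alpha, q, R_over_rb):
    """tau_r/T (Eq. 14) and tau_r/tau_p (Eq. 18), with R in units of r_b."""
    tau_r_over_T = math.sqrt(1.0 + q) / (2.0 * math.pi) / math.sqrt(R_over_rb)
    tau_r_over_tau_p = (32.0 / 5.0) * alpha**5 * R_over_rb**-3 * q * (1.0 + q)
    return tau_r_over_T, tau_r_over_tau_p

for R in (32.0, 50.0, 80.0):
    r1, r2 = timescale_ratios(alpha=0.1, q=1.0, R_over_rb=R)
    print(f"R = {R:5.1f} r_b : tau_r/T = {r1:.3f}, tau_r/tau_p = {r2:.2e}")
```

For \(q=1\) and \(R=32\,r_{b}\) this reproduces \(\tau_{r}/T\approx 0.04\) quoted above.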
### Eigenfunctions and energy eigenvalues

To find the exact eigenfunctions and eigenfrequencies, one in principle needs to solve the Klein-Gordon equation of the boson in the background spacetime of the binary black hole system, which is quite a challenging task. When the mass of the boson is small, the boson cloud is far away from the black hole, and the Newtonian approximation can be used to derive the eigenfunctions [25]. To order \(1/r\), the wave function of the boson around a single rotating black hole satisfies
\[i\hbar\frac{\partial}{\partial t}\psi(t,\mathbf{r})=\bigg(-\frac{1}{2\mu}\nabla^{2}-\frac{\alpha}{r}\bigg)\psi(t,\mathbf{r}), \tag{19}\]
which has exactly the same form as the Schrodinger equation for the electron in a hydrogen atom. When the two black holes are not too close to each other and the boson resides in a region far away from both black holes, the Newtonian approximation again applies. To order \(1/r\), the wave function of the boson in the binary black hole system satisfies
\[i\hbar\frac{\partial}{\partial t}\psi(t,\mathbf{r})=\bigg(-\frac{1}{2\mu}\nabla^{2}-\frac{\alpha}{r_{1}}-\frac{\alpha}{r_{2}}\bigg)\psi(t,\mathbf{r}), \tag{20}\]
where \(r_{1}\) is the distance between the boson and the central black hole, and \(r_{2}\) is the distance between the boson and the companion black hole. Equation (20) has exactly the same form as the Schrodinger equation for the electron in the hydrogen molecule ion, without the interaction between the two protons.

In the Born-Oppenheimer approximation, the eigenfunctions of the boson at any given time can be derived straightforwardly. They take exactly the same form as those given by Eqs. (7) and (8), with \(|\varphi_{n\ell m}\rangle\) the eigenfunctions of the boson around a single isolated rotating black hole. The normalization constants \(N_{i}\) depend on the configuration of the BH-cloud-BH system; for the case considered here they have no analytic expressions and need to be evaluated numerically.

Once the eigenfunctions are known, the energy eigenvalues can be calculated straightforwardly: they are simply the expectation values of the Hamiltonian. From Eq. (20) it is evident that the Hamiltonian of the boson in the Newtonian limit is
\[\hat{H}=-\frac{1}{2\mu}\nabla^{2}-\frac{\alpha}{r_{1}}-\frac{\alpha}{r_{2}}. \tag{21}\]
To simplify the calculation, we divide the Hamiltonian \(\hat{H}\) into two parts, the Hamiltonian of the boson around the central black hole and the potential produced by the companion black hole, namely \(\hat{H}=\hat{H}_{1}-\frac{\alpha}{r_{2}}\) with \(\hat{H}_{1}=-\frac{1}{2\mu}\nabla^{2}-\frac{\alpha}{r_{1}}\).
Then the eigenfrequencies are given by
\[\omega_{1}=\langle\sigma|\hat{H}|\sigma\rangle=\frac{1}{N_{1}^{2}}\bigg[-\frac{1}{8}\mu\alpha^{2}\big(1-\langle\varphi_{2p_{x}}^{1}|\varphi_{2p_{x}}^{2}\rangle\big)-\langle\varphi_{2p_{x}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{x}}^{1}\rangle+\langle\varphi_{2p_{x}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{x}}^{2}\rangle\bigg],\]
\[\omega_{2}=\langle\sigma^{*}|\hat{H}|\sigma^{*}\rangle=\frac{1}{N_{2}^{2}}\bigg[-\frac{1}{8}\mu\alpha^{2}\big(1+\langle\varphi_{2p_{x}}^{1}|\varphi_{2p_{x}}^{2}\rangle\big)-\langle\varphi_{2p_{x}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{x}}^{1}\rangle-\langle\varphi_{2p_{x}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{x}}^{2}\rangle\bigg],\]
\[\omega_{3}=\langle\pi|\hat{H}|\pi\rangle=\frac{1}{N_{3}^{2}}\bigg[-\frac{1}{8}\mu\alpha^{2}\big(1+\langle\varphi_{2p_{y}}^{1}|\varphi_{2p_{y}}^{2}\rangle\big)-\langle\varphi_{2p_{y}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{y}}^{1}\rangle-\langle\varphi_{2p_{y}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{y}}^{2}\rangle\bigg],\]
\[\omega_{4}=\langle\pi^{*}|\hat{H}|\pi^{*}\rangle=\frac{1}{N_{4}^{2}}\bigg[-\frac{1}{8}\mu\alpha^{2}\big(1-\langle\varphi_{2p_{y}}^{1}|\varphi_{2p_{y}}^{2}\rangle\big)-\langle\varphi_{2p_{y}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{y}}^{1}\rangle+\langle\varphi_{2p_{y}}^{1}|\frac{\alpha}{r_{2}}|\varphi_{2p_{y}}^{2}\rangle\bigg]. \tag{22}\]
For the case considered here, there exist no analytic expressions for the \(\omega_{i}\), so they have to be evaluated numerically. Note that in deriving the eigenfunctions and eigenfrequencies we have ignored higher order corrections to the Hamiltonian. This results in a degeneracy of all four energy levels when the two black holes are infinitely far apart.
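One possible numerical route to Eq. (22) is direct quadrature of the overlap, direct and exchange integrals in the coordinates of Appendix A. The sketch below (Python with NumPy; equal masses so \(r_{b1}=r_{b2}=r_{b}\), lengths in units of \(r_{b}=1/(\mu\alpha)\) and energies in units of \(\mu\), in which the companion potential \(\alpha/r_{2}\) becomes \(\alpha^{2}/\tilde{r}_{2}\)) is an illustration of how such an evaluation could be organized, not the production code behind the figures:

```python
import numpy as np

def omega_levels(R, alpha, n_r=160, n_th=80, n_ph=80, r_max=40.0):
    """Evaluate Eq. (22) on a spherical grid; R is the separation in units of r_b."""
    r = np.linspace(1e-3, r_max, n_r)
    th = np.linspace(1e-3, np.pi - 1e-3, n_th)
    ph = np.linspace(0.0, 2.0 * np.pi, n_ph, endpoint=False)
    r, th, ph = np.meshgrid(r, th, ph, indexing="ij")
    dV = r**2 * np.sin(th) * (r_max / n_r) * (np.pi / n_th) * (2.0 * np.pi / n_ph)

    x, y = r * np.sin(th) * np.cos(ph), r * np.sin(th) * np.sin(ph)
    r2 = np.sqrt(r**2 + R**2 - 2.0 * R * x)          # distance to the companion

    # 2p_x / 2p_y wave functions of Appendix B (r_b = 1); sin(th)cos(ph) = x/r, etc.
    amp = lambda rr: rr * np.exp(-rr / 2.0) * np.sqrt(2.0 / np.pi) / 8.0
    px1, py1 = amp(r) * x / r, amp(r) * y / r
    px2, py2 = amp(r2) * (x - R) / r2, amp(r2) * y / r2

    pot = alpha**2 / r2                               # companion potential in mu units

    def pair(p1, p2):
        S = np.sum(p1 * p2 * dV)                      # overlap integral
        A = np.sum(p1**2 * pot * dV)                  # direct integral
        B = np.sum(p1 * p2 * pot * dV)                # exchange integral
        w_minus = (-(alpha**2 / 8.0) * (1 - S) - A + B) / (1 - S)
        w_plus = (-(alpha**2 / 8.0) * (1 + S) - A - B) / (1 + S)
        return w_minus, w_plus

    w1, w2 = pair(px1, px2)      # sigma is the minus combination for 2p_x
    w4, w3 = pair(py1, py2)      # pi* is the minus combination for 2p_y
    return w1, w2, w3, w4

print(omega_levels(R=30.0, alpha=0.1))
```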
## IV Cloud Depletion

Assume that the central black hole has a boson cloud surrounding it while the companion black hole does not. When the two black holes are very far away from each other, the bosons move around the central black hole and cannot escape to the companion. When the two black holes come closer, the boson orbits belonging to the central black hole and those belonging to the companion have more overlap, forming boson orbits analogous to the molecular orbits of the hydrogen molecule ion. The bosons around the central black hole may therefore jump to the companion. This has two consequences: first, the bosons redistribute in the binary black hole system, changing its quadrupole moment and thus modifying the gravitational waveform; second, the bosons may jump to the decaying modes of the companion and could therefore be absorbed by the companion black hole. To estimate the importance of these two consequences, one needs to model the process of boson transfer from the central black hole to its companion.

The binary black hole system emits gravitational waves, so its orbit shrinks: the orbital separation between the two black holes, denoted \(R(t)\), decreases due to the radiation of energy and is thus time dependent. This implies that the eigenfunctions and eigenfrequencies are also time dependent, because they are solved at a given time assuming a fixed orbital separation. These solutions are meaningful only if the timescale of the orbital shrinking is much longer than that of the boson relaxation, which is exactly the case as discussed in Sec. III.2.

In order to follow the evolution of the boson cloud in the binary black hole system, one needs to solve a time-dependent Schrodinger equation in which the Hamiltonian varies slowly. This sort of problem can be solved using the adiabatic approximation if the energy levels are not degenerate and the energy gaps are sufficiently large.

### Adiabatic approximation

We numerically calculate the eigenfrequencies given by Eq. (22) and plot them in Figs. 2 and 3. It can be seen from Fig. 2 that the four energy levels split, and the energy gaps are larger when the two black holes are closer. The gaps decrease as the orbital separation increases, consistent with the fact that the four energy levels are degenerate when \(R\rightarrow\infty\). Figure 3 shows that the energy gaps between orbits \(\sigma\) and \(\sigma^{*}\), and between orbits \(\pi\) and \(\pi^{*}\), decrease much faster than the gaps between the \(\sigma\) and \(\pi\) orbits.

Because the energy levels of orbits \(\sigma\) and \(\sigma^{*}\), and of orbits \(\pi\) and \(\pi^{*}\), are very close, a question arises as to whether the adiabatic approximation is still valid. In the adiabatic approximation, the eigenfunctions and eigenfrequencies vary slowly with time, but the boson remains in its initial energy level during the subsequent evolution; that is, there is no transition from one energy level to the others. To see whether the adiabatic approximation is valid for the problem considered here, we need to check whether the transitions between orbits \(\sigma\) and \(\sigma^{*}\), and between orbits \(\pi\) and \(\pi^{*}\), are negligible.

Consider an arbitrary quantum state
\[|\psi(t)\rangle=\sum_{n}c_{n}(t)|\psi_{n}(t)\rangle, \tag{23}\]
where \(|\psi_{n}(t)\rangle\) and \(c_{n}(t)\) are the time-dependent energy eigenstates and their corresponding coefficients, respectively. In the adiabatic approximation we have \(|c_{n}(t+dt)|=|c_{n}(t)|\), i.e., there is no transition between different energy levels. To estimate the accuracy of the adiabatic approximation, one can check the amplitude of the time derivative of the coefficient \(c_{n}(t)\). It can be shown that (see Appendix C for details)
\[\frac{dc_{k}}{dt}=-\sum_{n\neq k}c_{n}\frac{1}{E_{n}-E_{k}}\langle\psi_{k}|\frac{\partial\hat{H}}{\partial t}|\psi_{n}\rangle-ic_{k}E_{k}, \tag{24}\]
where the summation characterizes the transitions from the initial energy level to other energy levels, and the last term represents the free evolution in the initial energy level. Now take the set of states \(\{|\psi_{k}\rangle\}\) to be the four orbits \(|\sigma\rangle,|\sigma^{*}\rangle,|\pi\rangle\) and \(|\pi^{*}\rangle\). Due to the symmetry of the system, it can be shown (see Appendix C for details) that
\[\langle\psi_{k}|\frac{\partial\hat{H}}{\partial t}|\psi_{n}\rangle=0 \tag{25}\]
for \(n\neq k\). This implies that the adiabatic approximation is a good approximation to the time evolution of the boson.

Taking into account the emission of gravitational waves and the orbital shrinking, the eigenfrequencies and eigenstates are all time dependent, denoted \(\omega_{i}(t)\), \(|\sigma(t)\rangle,|\sigma^{*}(t)\rangle,|\pi(t)\rangle\) and \(|\pi^{*}(t)\rangle\), respectively. When the two black holes are infinitely far away, namely \(t\rightarrow-\infty\) and \(R\rightarrow\infty\), the normalization constants defined in Eq. (9) are all equal to one.

Figure 2: Energy eigenvalues of the boson molecular orbits for small orbital separation. Here we choose \(q=1\) and \(\alpha=0.1\) as an example.
Figure 3: Energy eigenvalues of the boson molecular orbits for large orbital separation. Here we choose \(q=1\) and \(\alpha=0.1\) as an example.

This implies that the initial state of the boson can be written as
\[|\Phi(-\infty)\rangle\equiv|\varphi^{1}_{2,1,1}\rangle=\frac{1}{2}\big[|\sigma(-\infty)\rangle+|\sigma^{*}(-\infty)\rangle+i|\pi(-\infty)\rangle+i|\pi^{*}(-\infty)\rangle\big]. \tag{26}\]
According to the adiabatic approximation, the subsequent evolution of the state is
\[|\Phi(t)\rangle=\frac{1}{2}\bigg[e^{-i\int_{-\infty}^{t}\omega_{1}(\tau)\mathrm{d}\tau}|\sigma(t)\rangle+e^{-i\int_{-\infty}^{t}\omega_{2}(\tau)\mathrm{d}\tau}|\sigma^{*}(t)\rangle+i\,e^{-i\int_{-\infty}^{t}\omega_{3}(\tau)\mathrm{d}\tau}|\pi(t)\rangle+i\,e^{-i\int_{-\infty}^{t}\omega_{4}(\tau)\mathrm{d}\tau}|\pi^{*}(t)\rangle\bigg]. \tag{27}\]
We now substitute Eqs. (7) and (8) into Eq. (27) and obtain an expansion of \(|\Phi(t)\rangle\) in terms of the isolated orbits \(|\varphi^{1}_{2,1,\pm 1}\rangle\) and \(|\varphi^{2}_{2,1,\pm 1}\rangle\). The coefficient of the state \(|\varphi^{2}_{2,1,-1}\rangle\) is given by
\[C_{-}(t)=-\frac{1}{4N_{1}}e^{-i\int_{-\infty}^{t}\omega_{1}(\tau)\mathrm{d}\tau}+\frac{1}{4N_{2}}e^{-i\int_{-\infty}^{t}\omega_{2}(\tau)\mathrm{d}\tau}-\frac{1}{4N_{3}}e^{-i\int_{-\infty}^{t}\omega_{3}(\tau)\mathrm{d}\tau}+\frac{1}{4N_{4}}e^{-i\int_{-\infty}^{t}\omega_{4}(\tau)\mathrm{d}\tau}. \tag{28}\]
The nonzero overlap between the isolated orbit \(|\varphi^{2}_{2,1,-1}\rangle\) and \(|\Phi(t)\rangle\) implies that bosons can jump to the orbits of the companion black hole during their evolution. The modulus squared of \(C_{-}(t)\),
\[|C_{-}(t)|^{2}=\sum_{i}\frac{1}{16N_{i}^{2}}+\sum_{i<j}(-1)^{i+j}\frac{1}{8N_{i}N_{j}}\cos\bigg[\int_{-\infty}^{t}\big(\omega_{i}(\tau)-\omega_{j}(\tau)\big)\mathrm{d}\tau\bigg], \tag{29}\]
represents the probability of jumping to the isolated orbit \(|\varphi^{2}_{2,1,-1}\rangle\).

If we only consider the effect of gravitational wave emission and neglect other effects, such as the back reaction of the boson cloud on the orbital evolution, then we can use the relation in Eq. (15) to express the coefficient as a function of \(R\), namely \(C_{-}(R)\), and the time integration in Eq. (29) can be replaced by an integration over the orbital separation \(R\). When \(R\) is sufficiently large, the energy levels of orbits \(\sigma\) and \(\sigma^{*}\) are almost degenerate (\(\omega_{1}\approx\omega_{2}\)), as are those of orbits \(\pi\) and \(\pi^{*}\) (\(\omega_{3}\approx\omega_{4}\)), as can be seen from Fig. 3. In addition, the normalization factors satisfy \(N_{1}(R)\approx N_{2}(R)\) and \(N_{3}(R)\approx N_{4}(R)\). This implies that the coefficient \(C_{-}(R)\) approaches zero when the two black holes are sufficiently far away from each other, consistent with the fact that the boson cannot escape to a companion that is very far away. When the two black holes come closer, the energy levels of orbits \(\sigma\) and \(\sigma^{*}\) become nondegenerate (\(\omega_{1}\neq\omega_{2}\)), as do those of orbits \(\pi\) and \(\pi^{*}\). The coefficient \(C_{-}(R)\) gradually becomes nonzero, indicating that the boson can transfer to the companion black hole. From Eq. (29) we can see that the probability \(|C_{-}(R)|^{2}\) is the sum of six oscillating terms with frequencies \(|\omega_{i}-\omega_{j}|\), \(i,j\in\{\sigma,\sigma^{*},\pi,\pi^{*}\}\).
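Equation (29) can be accumulated along the inspiral by trading the time integral of the frequency gaps for an integral over \(R\), with \(dt=dR/(dR/dt)\) and the quadrupole-formula rate \(dR/dt=-(64/5)\,q(1+q)M^{3}/R^{3}\) implied by Eqs. (15)-(17). A schematic sketch (Python; `omega_of_R` and `N_of_R` are stand-ins for the numerically evaluated eigenfrequencies and normalization constants, and the toy inputs in the demo are illustrative only):

```python
import numpy as np

def c_minus_sq(R_grid, omega_of_R, N_of_R, q=1.0, M=1.0):
    """|C_-(R)|^2 of Eq. (29) accumulated from R_grid[0] down to each R.
    R_grid must be decreasing; omega_of_R / N_of_R return length-4 arrays
    ordered (sigma, sigma*, pi, pi*)."""
    dt_dR = -5.0 * R_grid**3 / (64.0 * q * (1.0 + q) * M**3)
    w = np.array([omega_of_R(R) for R in R_grid])      # shape (len(R), 4)
    Ns = np.array([N_of_R(R) for R in R_grid])
    dR = np.gradient(R_grid)                           # negative steps
    theta = np.cumsum(w * (dt_dR * dR)[:, None], axis=0)

    prob = np.sum(1.0 / (16.0 * Ns**2), axis=1)
    for i in range(4):
        for j in range(i + 1, 4):                      # (-1)^(i+j) matches Eq. (29)
            prob += ((-1.0) ** (i + j) / (8.0 * Ns[:, i] * Ns[:, j])
                     * np.cos(theta[:, i] - theta[:, j]))
    return prob

if __name__ == "__main__":
    # Toy stand-ins with the right qualitative behaviour: levels degenerate and
    # N_i -> 1 at large separation. Real inputs come from Eq. (22).
    w0 = -0.1**2 / 8.0
    omega = lambda R: w0 * (1.0 + np.array([0.02, 0.01, -0.01, -0.02]) * np.exp(-R / 2e3))
    N = lambda R: np.sqrt(1.0 + np.array([-1.0, 1.0, 1.0, -1.0]) * 0.3 * np.exp(-R / 2e3))
    R = np.linspace(8000.0, 4000.0, 20000)   # in units of M; r_b = 100 M for alpha = 0.1
    print(c_minus_sq(R, omega, N)[-1])
```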
The frequencies \(|\omega_{i}-\omega_{j}|\) with \(i\in\{\sigma,\sigma^{*}\}\) and \(j\in\{\pi,\pi^{*}\}\) (or vice versa) are much higher than those with \(i,j\in\{\sigma,\sigma^{*}\}\) or \(i,j\in\{\pi,\pi^{*}\}\).

Figure 4 shows the occupation probability \(|C_{-}(R)|^{2}\) of the orbit \(|\varphi^{2}_{2,1,-1}\rangle\), which is one of the decaying modes of the companion black hole. In the numerical calculation, we use a sufficiently large distance \(R=80\,r_{b}\) as a replacement for the infinitely large distance (\(t=-\infty\)) in the lower limit of the integration in Eq. (29), which introduces a negligible error. The blue curve takes into account the full expression of \(|C_{-}(R)|^{2}\), including the highly oscillating terms. If we remove the highly oscillating terms and keep only the two terms with oscillation frequencies \(|\omega_{\pi}-\omega_{\pi^{*}}|\) and \(|\omega_{\sigma}-\omega_{\sigma^{*}}|\), we obtain the profile of the probability \(|C_{-}(R)|^{2}\), shown as the orange curve in Fig. 4. The profile captures the change of the probability that the boson occupies the decaying mode of the companion black hole during the evolution of the binary. When \(R>63\,r_{b}\), the probability is almost zero. It gradually increases as \(R\) decreases and reaches its first maximum around \(R=55.5\,r_{b}\). The first maximum of the probability is about \(0.3\), showing that a significant fraction of the bosons transfers to the companion black hole and occupies the decaying mode \(|\varphi^{2}_{2,1,-1}\rangle\). As the orbital separation \(R\) further decreases, the profile of the probability oscillates with a higher and higher frequency. Note that the orbital separation corresponding to the first maximum is larger than the Roche limit, which is about \(10\,r_{b}\) for \(q=1\).

A boson occupying a decaying mode may decay into the corresponding black hole, resulting in the depletion of the boson cloud. The bosons that transfer to the decaying mode of the companion may therefore decay into the companion black hole. This is a new channel, in addition to the hyperfine mixing and the Bohr mixing, that can result in cloud depletion.

Figure 4: Probability for a boson jumping to the decaying mode \(|\varphi^{2}_{2,1,-1}\rangle\) of the companion black hole. Here we choose \(q=1\) and \(\alpha=0.1\). The blue curve includes both the rapidly oscillating and slowly oscillating terms, while the orange curve includes only the slowly oscillating terms.

Assuming that the back reaction is small and that the decay rate is not affected by the presence of another black hole, the time evolution of the cloud mass can be described as
\[\frac{dM_{c}}{dt}=2\sum_{i=1}^{2}\Gamma_{2,1,-1}|C_{2,1,-1}^{i}|^{2}M_{c}, \tag{30}\]
where \(M_{c}\) is the mass of the boson cloud and \(\Gamma_{2,1,-1}\), defined by Eq. (3), is the decay rate of the decaying mode \(|\varphi_{2,1,-1}\rangle\). Here \(|C_{2,1,-1}^{1}|^{2}\) is the occupation probability of the decaying mode \(|\varphi_{2,1,-1}^{1}\rangle\) of the central black hole and \(|C_{2,1,-1}^{2}|^{2}\) is the occupation probability of the decaying mode \(|\varphi_{2,1,-1}^{2}\rangle\) of the companion black hole, namely \(|C_{2,1,-1}^{2}|^{2}=|C_{-}(R)|^{2}\). The summation takes into account the decay of the bosons into both the central and companion black holes.
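Equation (30) is a linear ODE for \(M_{c}\) with a time-dependent rate, so it can be integrated in closed form once the rate is tabulated. A sketch (Python; `gamma_decay`, `occ1` and `occ2` are stand-in callables for \(\Gamma_{2,1,-1}\) and the occupation probabilities computed above, and back reaction is neglected as in the text):

```python
import numpy as np

def cloud_mass(t_grid, gamma_decay, occ1, occ2, Mc0):
    """Integrate Eq. (30): dM_c/dt = 2 * Gamma_{2,1,-1} * (|C1|^2 + |C2|^2) * M_c.
    gamma_decay(t) < 0 for a decaying mode; occ1/occ2 give |C^i_{2,1,-1}|^2(t)."""
    rate = np.array([2.0 * gamma_decay(t) * (occ1(t) + occ2(t)) for t in t_grid])
    # Exact solution of the linear ODE: M_c(t) = Mc0 * exp(\int rate dt).
    log_mc = np.concatenate(([0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1])
                                              * np.diff(t_grid))))
    return Mc0 * np.exp(log_mc)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0e7, 20000)                  # illustrative time grid [M]
    Mc = cloud_mass(t,
                    gamma_decay=lambda t: -1.0e-7,      # toy constant decay rate
                    occ1=lambda t: 0.0,                 # central-hole channel off
                    occ2=lambda t: 0.3 * np.exp(-((t - 5e6) / 1e6) ** 2),
                    Mc0=1.0)
    print(f"final cloud mass fraction: {Mc[-1]:.3e}")
```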
To see the effect of mass transfer on the cloud depletion, we calculate the time evolution of the cloud mass under the assumption that the bosons only decay into the companion black hole. Suppose the total mass of the cloud right before bosons start to escape to the companion, e.g., at \(R=80\,r_{b}\), is \(M_{c,0}\). We further assume that the spins of the two black holes are both \(\chi=\frac{4\alpha}{1+4\alpha^{2}}\), which is the critical spin for \(|\varphi_{2,1,1}\rangle\) to be saturated [25], and that the companion black hole has no bosons around it (otherwise we would have to modify the initial condition of the state, which in principle can also be handled in our framework). Back reaction of the boson cloud on the black holes is neglected. The result is shown as the blue dashed curve in Fig. 5. We can see that the boson cloud decays quickly around \(R=60\,r_{b}\), and finally almost all bosons are absorbed by the companion black hole. To be more specific, the cloud remains unchanged when \(R\gtrsim 67\,r_{b}\) and almost completely disappears when \(R\lesssim 57\,r_{b}\). This means all bosons decay into the companion black hole while the probability \(|C_{-}(R)|^{2}\) is approaching its first maximum. At first glance this seems impossible, because the maximal probability is about \(0.3\), which is smaller than one. From the physical perspective, however, it is reasonable. Each boson moves between the central and companion black holes. It may be absorbed by the companion black hole when it occupies the orbit \(|\varphi_{2,1,-1}\rangle\) of the companion. Though the probability of being absorbed within one round trip is quite small, the boson travels many round trips while the orbital separation decreases from \(67\,r_{b}\) to \(57\,r_{b}\). Therefore, with very high probability, the boson has been absorbed by the companion black hole by the time \(R<57\,r_{b}\). This is also the key difference from the depletion mechanism due to the hyperfine resonance: at the resonance point nearly all bosons jump to the decaying mode, but they stay in the decaying mode only for a short time, so the total mass absorbed by the black hole may not be large, as demonstrated in Ref. [25].

We can also include the cloud depletion due to the hyperfine mixing. Instead of directly calculating the coefficients of the orbits \(|\varphi_{2,1,\pm 1}^{1}\rangle\) from Eq. (27), we use the method developed in Ref. [25] to calculate the probability that the boson jumps to the decaying mode of the central black hole due to the perturbation of the companion black hole. Firstly, the effect of rotation is not included in Eq. (27). Secondly, according to the adiabatic theorem, the slow orbital shrinking cannot induce a direct transition from the growing mode \(|\varphi_{2,1,1}^{1}\rangle\) to the decaying mode \(|\varphi_{2,1,-1}^{1}\rangle\) of the central black hole. A boson may jump to the decaying mode \(|\varphi_{2,1,-1}^{1}\rangle\) by first moving to the companion black hole and then back to the central black hole; however, this is a second order effect with probability of the order of \(|C_{-}(R)|^{4}\), and can therefore be neglected. For the parameters considered in this paper, the hyperfine resonance occurs at the orbital separation \(R\approx 660\,r_{b}\), which is much larger than the orbital separation where the mass transfer occurs. We therefore expect the effect of hyperfine mixing to be subdominant in the regime \(R<80\,r_{b}\), which is confirmed by Fig. 5.
The orange curve represents the cloud depletion due only to the hyperfine mixing, which decreases rather slowly compared to the depletion due to the mass transfer to the companion black hole. The reason is that the probability for a boson to jump to the decaying mode of the central black hole is very small in this regime. Combining the effects of mass transfer and hyperfine mixing, we obtain the total cloud depletion, also shown in Fig. 5. When \(R\gtrsim 67\,r_{b}\), the depletion is dominated by the hyperfine mixing; when \(R\lesssim 67\,r_{b}\), the depletion is dominated by the mass transfer to the companion black hole and the decay into it. At about \(R\sim 57\,r_{b}\), the cloud has completely decayed into the two black holes, much earlier than the prediction of hyperfine mixing alone.

In the above discussion we assumed that a boson cloud exists when the orbital separation is about \(R=80\,r_{b}\). This could happen in several cases.

Figure 5: Evolution of the boson cloud mass for \(\alpha=0.1\). The blue dashed curve denotes the cloud depletion due to the mass transfer to the decaying mode of the companion black hole, the orange curve denotes the cloud depletion due to the hyperfine mixing of the boson in the central black hole, and the purple dashed-dotted curve denotes the total cloud depletion.

The boson cloud may form when the two black holes are already close, in particular when the orbital separation is much smaller than the separation at which the hyperfine resonance occurs. In this case the cloud depletes very slowly due to the hyperfine mixing, and a significant amount of bosons may still remain when the mass transfer starts to dominate the depletion. The boson cloud may also form when the two black holes are very far away, in particular when the orbital separation is larger than the separation at which the hyperfine resonance occurs. There are two possibilities in this case. If the companion black hole and the cloud are counter-rotating, the hyperfine resonance never occurs; the hyperfine and Bohr mixing are both weak, so the cloud depletes very slowly, as shown in Fig. 6, and a substantial amount of cloud may still remain. If the companion black hole and the cloud are co-rotating, the hyperfine resonance occurs and may cause a strong depletion of the cloud, so that only a small amount of cloud is left when the mass transfer starts to dominate the depletion. In that case the effect of mass transfer does not play a dominant role in the cloud depletion.

As an example, we consider the case where the companion black hole and the cloud are co-rotating, and the cloud forms after the hyperfine resonance has occurred. The cloud has a lifetime \(\tau_{c}\) determined only by its own gravitational radiation, given by [22; 25; 36]
\[\tau_{c}\sim 10^{7}\bigg(\frac{M}{3M_{\odot}}\bigg)\bigg(\frac{0.07}{\alpha}\bigg)^{15}\ \text{years}, \tag{31}\]
where \(M\) is the mass of the central black hole.
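The steep \(\alpha^{-15}\) scaling in Eq. (31) is easy to tabulate; a short sketch in Python (scaling relation only):

```python
def cloud_lifetime_years(M_over_3Msun, alpha):
    """Cloud lifetime of Eq. (31) in years."""
    return 1.0e7 * M_over_3Msun * (0.07 / alpha) ** 15

for a in (0.05, 0.07, 0.1):
    print(f"alpha = {a}: tau_c ~ {cloud_lifetime_years(1.0, a):.2e} years")
```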
We set the initial time of the cloud evolution at half the lifetime of the cloud. The result is shown in Fig. 6, where we have included the contributions from the hyperfine mixing, the Bohr mixing, the mass transfer, and the gravitational radiation of the cloud itself. For a fixed value of \(\alpha\), the cloud depletion due to the hyperfine mixing, the Bohr mixing and the gravitational radiation dominates at large orbital separation and proceeds very slowly. When the two black holes are close enough and the mass transfer occurs, the cloud quickly depletes and almost all remaining bosons are absorbed by the companion black hole. For a smaller value of \(\alpha\), the cloud depletes more slowly during the stage of hyperfine mixing, and the mass transfer occurs earlier, i.e., at a larger orbital separation. This implies that a larger fraction of the bosons depletes due to the mass transfer and is absorbed by the companion black hole.

Figure 6: Evolution of the boson cloud mass for different \(\alpha\) and initial cloud mass \(M_{c,0}=\alpha M\) [22].

## V Conclusions

We have developed a framework to study the transfer of bosons between two black holes in a binary black hole system. The framework is formulated through the analogy between the BH-cloud-BH system and the hydrogen molecule ion, in which an electron moves in the potential generated by two protons. When the two black holes are sufficiently close, the bosons initially confined around the central black hole can escape to the companion. In the language of quantum mechanics, molecular orbits of the boson form and the boson moves back and forth between the two black holes. This results in cloud mass redistribution in the binary black hole system. Furthermore, the bosons that escape to the companion may occupy its decaying mode and therefore may decay into the companion black hole. We find that the boson cloud that exists right before the mass transfer occurs almost completely disappears through this mechanism of cloud depletion. The cloud depletion into the companion black hole may have important consequences for the orbital evolution of the binary black hole system and the gravitational waveform emitted from it.

Our framework is applied here to the simplest model, in which the two black holes have the same mass and spin and their spin orientations are parallel. It can be straightforwardly generalized to explore more realistic models; for example, the two black holes may have unequal masses and spins, or their spin orientations may differ. This would require a modification of the variational method used to calculate the molecular orbits and the energy eigenvalues. Another interesting case is that the companion may not be a black hole but a compact star. Then there is only cloud mass redistribution but no cloud depletion due to decay into the companion. However, this could also have important consequences for the evolution of the binary system and its gravitational waveforms.

**Acknowledgements:** D. S. is supported by the Fundamental Research Funds for the Central Universities, HUST (Grant No. 5003012068). Y. M. is supported by the University start-up funding provided by Huazhong University of Science and Technology.

## Appendix A Choice of coordinates

To describe the orbits of the boson in the binary black hole system, we need to set up an appropriate coordinate system, which is schematically shown in Fig. 1. The origin of the coordinate system is located at the central black hole; the \(z\)-axis is parallel to the spin of the central black hole, and the \(x\)-axis points towards the companion black hole. We use spherical coordinates \((r_{1},\theta_{1},\varphi_{1})\) to represent the position of the boson relative to the central black hole, and \((r_{2},\theta_{2},\varphi_{2})\) to denote the position of the boson relative to the companion black hole.
Although one set of coordinates would be sufficient, we introduce the coordinates \((r_{2},\theta_{2},\varphi_{2})\) for convenience, since the isolated wave functions of the boson belonging to the companion black hole are most conveniently expressed in these coordinates. The coordinates \((r_{2},\theta_{2},\varphi_{2})\) can be written in terms of \((r_{1},\theta_{1},\varphi_{1})\) as
\[r_{2}=\sqrt{r_{1}^{2}+R^{2}-2r_{1}R\sin\theta_{1}\cos\varphi_{1}},\qquad\cos\theta_{2}=\frac{r_{1}\cos\theta_{1}}{r_{2}},\qquad\sin\theta_{2}=\sqrt{1-(\cos\theta_{2})^{2}},\]
\[\cos\varphi_{2}=\frac{r_{1}\sin\theta_{1}\cos\varphi_{1}-R}{r_{2}\sin\theta_{2}},\qquad\sin\varphi_{2}=\frac{r_{1}\sin\theta_{1}\sin\varphi_{1}}{r_{2}\sin\theta_{2}}. \tag{10}\]
By using these relations we can carry out all the calculations using only one set of coordinates, namely \((r_{1},\theta_{1},\varphi_{1})\).

## Appendix B Wave functions and overlap integrals

The wave functions for the \(n=2\) boson around an isolated black hole are given by
\[\varphi_{2,1,1}(\mathbf{r})=\frac{1}{8}\sqrt{\frac{1}{\pi}}r_{b}^{-5/2}re^{-r/2r_{b}}\sin\theta e^{i\varphi},\qquad\varphi_{2,1,-1}(\mathbf{r})=\frac{1}{8}\sqrt{\frac{1}{\pi}}r_{b}^{-5/2}re^{-r/2r_{b}}\sin\theta e^{-i\varphi},\]
\[\varphi_{2p_{x}}(\mathbf{r})=\frac{1}{8}\sqrt{\frac{2}{\pi}}r_{b}^{-5/2}re^{-r/2r_{b}}\sin\theta\cos\varphi,\qquad\varphi_{2p_{y}}(\mathbf{r})=\frac{1}{8}\sqrt{\frac{2}{\pi}}r_{b}^{-5/2}re^{-r/2r_{b}}\sin\theta\sin\varphi, \tag{11}\]
where \(r_{b}\) is the Bohr radius of the boson. The wave functions for the molecular orbits of the boson in the binary black hole system are given by
\[\varphi_{\sigma}(\mathbf{r})=\frac{1}{8N_{1}}\sqrt{\frac{1}{\pi}}\left(r_{b1}^{-5/2}r_{1}e^{-r_{1}/2r_{b1}}\sin\theta_{1}\cos\varphi_{1}-r_{b2}^{-5/2}r_{2}e^{-r_{2}/2r_{b2}}\sin\theta_{2}\cos\varphi_{2}\right),\]
\[\varphi_{\sigma^{*}}(\mathbf{r})=\frac{1}{8N_{2}}\sqrt{\frac{1}{\pi}}\left(r_{b1}^{-5/2}r_{1}e^{-r_{1}/2r_{b1}}\sin\theta_{1}\cos\varphi_{1}+r_{b2}^{-5/2}r_{2}e^{-r_{2}/2r_{b2}}\sin\theta_{2}\cos\varphi_{2}\right),\]
\[\varphi_{\pi}(\mathbf{r})=\frac{1}{8N_{3}}\sqrt{\frac{1}{\pi}}\left(r_{b1}^{-5/2}r_{1}e^{-r_{1}/2r_{b1}}\sin\theta_{1}\sin\varphi_{1}+r_{b2}^{-5/2}r_{2}e^{-r_{2}/2r_{b2}}\sin\theta_{2}\sin\varphi_{2}\right),\]
\[\varphi_{\pi^{*}}(\mathbf{r})=\frac{1}{8N_{4}}\sqrt{\frac{1}{\pi}}\left(r_{b1}^{-5/2}r_{1}e^{-r_{1}/2r_{b1}}\sin\theta_{1}\sin\varphi_{1}-r_{b2}^{-5/2}r_{2}e^{-r_{2}/2r_{b2}}\sin\theta_{2}\sin\varphi_{2}\right), \tag{12}\]
where \(r_{b1}\) and \(r_{b2}\) are the Bohr radii for the boson around the central and companion black holes, respectively. The overlap integrals are given by
\[\langle\varphi_{2p_{x}}^{1}|\varphi_{2p_{x}}^{2}\rangle=\frac{1}{32\pi}r_{b1}^{-\frac{5}{2}}r_{b2}^{-\frac{5}{2}}\int r_{1}^{3}r_{2}e^{-\frac{r_{1}}{2r_{b1}}-\frac{r_{2}}{2r_{b2}}}\sin^{2}\theta_{1}\sin\theta_{2}\cos\varphi_{1}\cos\varphi_{2}\,\mathrm{d}r_{1}\mathrm{d}\theta_{1}\mathrm{d}\varphi_{1},\]
\[\langle\varphi_{2p_{x}}^{1}|\tfrac{\alpha}{r_{2}}|\varphi_{2p_{x}}^{2}\rangle=\frac{\alpha}{32\pi}r_{b1}^{-\frac{5}{2}}r_{b2}^{-\frac{5}{2}}\int r_{1}^{3}e^{-\frac{r_{1}}{2r_{b1}}-\frac{r_{2}}{2r_{b2}}}\sin^{2}\theta_{1}\sin\theta_{2}\cos\varphi_{1}\cos\varphi_{2}\,\mathrm{d}r_{1}\mathrm{d}\theta_{1}\mathrm{d}\varphi_{1},\]
\[\langle\varphi_{2p_{x}}^{1}|\tfrac{\alpha}{r_{2}}|\varphi_{2p_{x}}^{1}\rangle=\frac{\alpha}{32\pi}r_{b1}^{-5}\int\frac{r_{1}^{4}}{r_{2}}e^{-\frac{r_{1}}{r_{b1}}}\sin^{3}\theta_{1}\cos^{2}\varphi_{1}\,\mathrm{d}r_{1}\mathrm{d}\theta_{1}\mathrm{d}\varphi_{1},\]
\[\langle\varphi_{2p_{y}}^{1}|\varphi_{2p_{y}}^{2}\rangle=\frac{1}{32\pi}r_{b1}^{-\frac{5}{2}}r_{b2}^{-\frac{5}{2}}\int r_{1}^{3}r_{2}e^{-\frac{r_{1}}{2r_{b1}}-\frac{r_{2}}{2r_{b2}}}\sin^{2}\theta_{1}\sin\theta_{2}\sin\varphi_{1}\sin\varphi_{2}\,\mathrm{d}r_{1}\mathrm{d}\theta_{1}\mathrm{d}\varphi_{1},\]
\[\langle\varphi_{2p_{y}}^{1}|\tfrac{\alpha}{r_{2}}|\varphi_{2p_{y}}^{2}\rangle=\frac{\alpha}{32\pi}r_{b1}^{-\frac{5}{2}}r_{b2}^{-\frac{5}{2}}\int r_{1}^{3}e^{-\frac{r_{1}}{2r_{b1}}-\frac{r_{2}}{2r_{b2}}}\sin^{2}\theta_{1}\sin\theta_{2}\sin\varphi_{1}\sin\varphi_{2}\,\mathrm{d}r_{1}\mathrm{d}\theta_{1}\mathrm{d}\varphi_{1},\]
\[\langle\varphi_{2p_{y}}^{1}|\tfrac{\alpha}{r_{2}}|\varphi_{2p_{y}}^{1}\rangle=\frac{\alpha}{32\pi}r_{b1}^{-5}\int\frac{r_{1}^{4}}{r_{2}}e^{-\frac{r_{1}}{r_{b1}}}\sin^{3}\theta_{1}\sin^{2}\varphi_{1}\,\mathrm{d}r_{1}\mathrm{d}\theta_{1}\mathrm{d}\varphi_{1}. \tag{13}\]

## Appendix C Quantum adiabatic theorem

In this appendix we show that the adiabatic approximation can be applied in our case. Since the potential generated by the two black holes changes very slowly, we assume that the energy eigenstates and eigenvalues also evolve slowly and continuously with time, and that they satisfy
\[\hat{H}(t)|\psi_{n}(t)\rangle=E_{n}(t)|\psi_{n}(t)\rangle. \tag{120}\]
We assume that the energy levels are not degenerate. An arbitrary state at a given time can be written as
\[|\psi(t)\rangle=\sum_{n}c_{n}(t)|\psi_{n}(t)\rangle, \tag{121}\]
and satisfies the Schrodinger equation
\[i\frac{\partial}{\partial t}|\psi(t)\rangle=\hat{H}(t)|\psi(t)\rangle. \tag{122}\]
By substituting Eq. (121) into Eq. (122) and multiplying both sides from the left by \(\langle\psi_{k}|\), we have
\[i\frac{\partial c_{k}}{\partial t}+i\sum_{n}c_{n}\langle\psi_{k}|\frac{\partial\psi_{n}}{\partial t}\rangle=c_{k}E_{k}. \tag{123}\]
We now derive the expression for \(\langle\psi_{k}|\frac{\partial\psi_{n}}{\partial t}\rangle\). We start by taking the time derivative of Eq. (120),
\[\frac{\partial\hat{H}}{\partial t}|\psi_{n}\rangle+\hat{H}\bigg|\frac{\partial\psi_{n}}{\partial t}\bigg\rangle=\frac{\partial E_{n}}{\partial t}|\psi_{n}\rangle+E_{n}\bigg|\frac{\partial\psi_{n}}{\partial t}\bigg\rangle. \tag{124}\]
For \(k\neq n\), we multiply both sides from the left by \(\langle\psi_{k}|\) and obtain
\[\langle\psi_{k}|\frac{\partial\psi_{n}}{\partial t}\rangle=\frac{1}{E_{n}-E_{k}}\langle\psi_{k}|\frac{\partial\hat{H}}{\partial t}|\psi_{n}\rangle. \tag{125}\]
We also need to consider the case \(k=n\), namely the expression for \(\langle\psi_{k}|\frac{\partial\psi_{k}}{\partial t}\rangle\). Taking the time derivative of the normalization condition
\[\langle\psi_{k}|\psi_{k}\rangle=1, \tag{126}\]
we have
\[\langle\psi_{k}|\frac{\partial\psi_{k}}{\partial t}\rangle+\langle\frac{\partial\psi_{k}}{\partial t}|\psi_{k}\rangle=0. \tag{127}\]
It is evident that \(\langle\psi_{k}|\frac{\partial\psi_{k}}{\partial t}\rangle\) is purely imaginary, and it can in principle be canceled by an appropriate choice of phase. Therefore, the time derivative of the coefficient \(c_{k}\) is given by
\[\frac{dc_{k}}{dt}=-\sum_{n\neq k}c_{n}\frac{1}{E_{n}-E_{k}}\langle\psi_{k}|\frac{\partial\hat{H}}{\partial t}|\psi_{n}\rangle-ic_{k}E_{k}. \tag{128}\]
We now show that \(\langle\psi_{k}|\frac{\partial\hat{H}}{\partial t}|\psi_{n}\rangle=0\) when \(n\neq k\), which is the equality given by Eq. (25). Since \(n,k\in\{\sigma,\sigma^{*},\pi,\pi^{*}\}\), there are six off-diagonal elements of \(\frac{\partial\hat{H}}{\partial t}\). Let us first consider \(\langle\sigma|\frac{\partial\hat{H}}{\partial t}|\sigma^{*}\rangle\) and \(\langle\pi|\frac{\partial\hat{H}}{\partial t}|\pi^{*}\rangle\). Up to an overall normalization factor,
\[\langle\sigma|\frac{\partial\hat{H}}{\partial t}|\sigma^{*}\rangle=\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{1}\rangle-\langle\varphi_{2p_{x}}^{2}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{2}\rangle+\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{2}\rangle-\langle\varphi_{2p_{x}}^{2}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{1}\rangle. \tag{129}\]
In our simple model the two black holes have exactly the same parameters, so the isolated orbits of the boson are the same for the central and companion black holes. Due to the symmetry of the configuration of the BH-cloud-BH system, we have \(\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{1}\rangle=\langle\varphi_{2p_{x}}^{2}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{2}\rangle\). The wave functions \(\varphi_{2p_{x}}^{1,2}\) are real and the time derivative of the Hamiltonian is Hermitian, so we have \(\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{2}\rangle=\langle\varphi_{2p_{x}}^{2}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{x}}^{1}\rangle\). Therefore, we find that \(\langle\sigma|\frac{\partial\hat{H}}{\partial t}|\sigma^{*}\rangle=0\). By similar arguments we also have \(\langle\pi|\frac{\partial\hat{H}}{\partial t}|\pi^{*}\rangle=0\).

To calculate the other elements of \(\frac{\partial\hat{H}}{\partial t}\), we need its explicit expression, which is given by
\[\frac{\partial\hat{H}}{\partial t}=\frac{\partial\hat{H}}{\partial r_{2}}\frac{\partial r_{2}}{\partial R}\frac{\partial R}{\partial t}=\frac{\alpha}{r_{2}^{2}}\frac{\partial r_{2}}{\partial R}\frac{\partial R}{\partial t}=-\frac{64}{5}\alpha\,q(1+q)\frac{M^{3}}{R^{3}}\frac{R-r_{1}\sin\theta_{1}\cos\varphi_{1}}{r_{2}^{3}}. \tag{130}\]
By using the explicit expressions of the orbits given by Eqs. (7) and (8), we find that the other elements of \(\frac{\partial\hat{H}}{\partial t}\) can be expanded in terms of \(\langle\varphi_{2p_{x}}^{i}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{j}\rangle\), with \(i,j\in\{1,2\}\). We now show that \(\langle\varphi_{2p_{x}}^{i}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{j}\rangle=0\) for all choices of \(i,j\).
Neglecting the constant factor that is independent of \((r_{1},\theta_{1},\varphi_{1})\), we find
\[\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{1}\rangle\sim\int\mathrm{d}V\,\frac{R-r_{1}\sin\theta_{1}\cos\varphi_{1}}{r_{2}^{3}}\,r_{1}^{2}e^{-r_{1}/r_{b}}\sin^{2}\theta_{1}\sin\varphi_{1}\cos\varphi_{1}\sim\int\mathrm{d}x\mathrm{d}y\mathrm{d}z\,\frac{x(R-x)y}{(r_{1}^{2}+R^{2}-2Rx)^{3/2}}\,e^{-r_{1}/r_{b}}, \tag{131}\]
where we have used the Cartesian coordinates \((x,y,z)\),
\[x=r_{1}\sin\theta_{1}\cos\varphi_{1},\quad y=r_{1}\sin\theta_{1}\sin\varphi_{1},\quad z=r_{1}\cos\theta_{1}.\]
The integrand is an odd function of \(y\), so the integral is zero. Similarly,
\[\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{2}\rangle\sim\int\mathrm{d}V\,\frac{R-r_{1}\sin\theta_{1}\cos\varphi_{1}}{r_{2}^{3}}\,e^{-(r_{1}+r_{2})/2r_{b}}\,r_{1}\sin\theta_{1}\cos\varphi_{1}\,r_{2}\sin\theta_{2}\sin\varphi_{2}\sim\int\mathrm{d}x\mathrm{d}y\mathrm{d}z\,\frac{x(R-x)y}{(r_{1}^{2}+R^{2}-2Rx)^{3/2}}\,e^{-(r_{1}+r_{2})/2r_{b}}, \tag{132}\]
where \(r_{2}=\sqrt{r_{1}^{2}+R^{2}-2Rx}\), and we have used the equality \(y=r_{1}\sin\theta_{1}\sin\varphi_{1}=r_{2}\sin\theta_{2}\sin\varphi_{2}\). The integrand is again an odd function of \(y\), so the integral is zero. By using the symmetry of the BH-cloud-BH system, we also have
\[\langle\varphi_{2p_{x}}^{2}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{2}\rangle=-\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{1}\rangle=0,\qquad\langle\varphi_{2p_{x}}^{2}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{1}\rangle=-\langle\varphi_{2p_{x}}^{1}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{2}\rangle=0.\]
Therefore, we have \(\langle\varphi_{2p_{x}}^{i}|\frac{\partial\hat{H}}{\partial t}|\varphi_{2p_{y}}^{j}\rangle=0\) for all choices of \(i,j\). As a result, all other off-diagonal elements of \(\frac{\partial\hat{H}}{\partial t}\) vanish. In summary,
\[\langle\psi_{k}|\frac{\partial\hat{H}}{\partial t}|\psi_{n}\rangle=0\quad\text{for }n\neq k. \tag{133}\]
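The vanishing of these matrix elements by odd symmetry in \(y\) is easy to verify numerically. A sketch (Python; \(r_{b}=1\), and the overall constant prefactor is dropped exactly as in Eq. (131)):

```python
import numpy as np

rng = np.random.default_rng(0)

def integrand(x, y, z, R):
    """Unnormalized integrand of Eq. (131); odd in y, so the integral vanishes."""
    r1 = np.sqrt(x**2 + y**2 + z**2)
    return x * (R - x) * y / (r1**2 + R**2 - 2.0 * R * x)**1.5 * np.exp(-r1)

# Crude Monte Carlo over a box that contains essentially all of the support
# (R is chosen large enough that the denominator never vanishes in the box).
L, R, n = 25.0, 50.0, 2_000_000
pts = rng.uniform(-L, L, size=(n, 3))
vals = integrand(pts[:, 0], pts[:, 1], pts[:, 2], R)
print(f"integral estimate : {vals.mean() * (2 * L)**3:.2e}  (consistent with zero)")
print(f"integral of |f|   : {np.abs(vals).mean() * (2 * L)**3:.2e}")
```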
2309.08448
Advancing the Evaluation of Traditional Chinese Language Models: Towards a Comprehensive Benchmark Suite
The evaluation of large language models is an essential task in the field of language understanding and generation. As language models continue to advance, the need for effective benchmarks to assess their performance has become imperative. In the context of Traditional Chinese, there is a scarcity of comprehensive and diverse benchmarks to evaluate the capabilities of language models, despite the existence of certain benchmarks such as DRCD, TTQA, CMDQA, and FGC dataset. To address this gap, we propose a novel set of benchmarks that leverage existing English datasets and are tailored to evaluate language models in Traditional Chinese. These benchmarks encompass a wide range of tasks, including contextual question-answering, summarization, classification, and table understanding. The proposed benchmarks offer a comprehensive evaluation framework, enabling the assessment of language models' capabilities across different tasks. In this paper, we evaluate the performance of GPT-3.5, Taiwan-LLaMa-v1.0, and Model 7-C, our proprietary model, on these benchmarks. The evaluation results highlight that our model, Model 7-C, achieves performance comparable to GPT-3.5 with respect to a part of the evaluated capabilities. In an effort to advance the evaluation of language models in Traditional Chinese and stimulate further research in this field, we have open-sourced our benchmark and opened the model for trial.
Chan-Jan Hsu, Chang-Le Liu, Feng-Ting Liao, Po-Chun Hsu, Yi-Chang Chen, Da-shan Shiu
2023-09-15T14:52:23Z
http://arxiv.org/abs/2309.08448v2
# Advancing the Evaluation of Traditional Chinese Language Models: Towards a Comprehensive Benchmark Suite

###### Abstract

The evaluation of large language models is an essential task in the field of language understanding and generation. As language models continue to advance, the need for effective benchmarks to assess their performance has become imperative. In the context of Traditional Chinese, there is a scarcity of comprehensive and diverse benchmarks to evaluate the capabilities of language models, despite the existence of certain benchmarks such as DRCD, TTQA, CMDQA, and the FGC dataset. To address this gap, we propose a novel set of benchmarks that leverage existing English datasets and are tailored to evaluate language models in Traditional Chinese. These benchmarks encompass a wide range of tasks, including contextual question-answering, summarization, classification, and table understanding. The proposed benchmarks offer a comprehensive evaluation framework, enabling the assessment of language models' capabilities across different tasks. In this paper, we evaluate the performance of GPT-3.5, Taiwan-LLaMa-v1.0, and Model 7-C, our proprietary model series, on these benchmarks. The evaluation results highlight that Model 7-C achieves performance comparable to GPT-3.5 with respect to a part of the evaluated capabilities. In an effort to advance the evaluation of language models in Traditional Chinese and stimulate further research in this field, we have open-sourced our benchmark and opened the model for trial.

## 1 Introduction

The evaluation of large language models has long been a crucial task. With the advancement of technology, language models have become more sophisticated, providing higher-quality responses akin to human responses to open-ended questions. However, evaluating these models is challenging, and there is a need for well-designed benchmarks to assess their performance comprehensively and consistently. Existing English benchmarks such as MMLU [1], IMDB [11], and XSum [21] cover measurements of models' capabilities in question answering, sentiment classification, and summarization, respectively. In Traditional Chinese, while there exist some benchmarks such as the Delta Reading Comprehension Dataset (DRCD) [22], Taiwanese Trivia Question Answering (TTQA) [14], and the Formosa Grand Challenge (FGC) dataset [23], there is limited availability of comprehensive and diverse benchmarks for evaluating language models' capabilities.

In this paper, to address the need for a comprehensive suite of evaluations in Traditional Chinese, we propose a set of new benchmarks. The benchmarks are built upon available Traditional Chinese and English datasets to test the capabilities of language models in Traditional Chinese. Our proposed benchmarks assess capabilities on tasks related to contextual question answering, world knowledge, summarization, classification, and table understanding. For evaluating world knowledge, we further propose a new dataset - Taiwan Massive Multitask Language Understanding (TMMLU) - encompassing exams ranging from high school entrance exams to vocational exams, across 55 subjects in total.

We evaluate the performance of proprietary and open-source models, namely GPT-3.5, Taiwan-LLaMa-v1.0 [13], Model 7-C (ours) and Model 7-C-Chat (the version of Model 7-C fine-tuned for chat capability), using our proposed Traditional Chinese benchmarks. Notably, our proposed benchmarks provide a comprehensive set of evaluation tasks for language models, allowing us to assess their performance on various tasks.
For some of the evaluated capabilities, the evaluation outcomes demonstrate that Model 7-C matches the performance of the state-of-the-art GPT-3.5 model in Traditional Chinese. To promote more research on advancing state-of-the-art language models in Traditional Chinese, we have open-sourced our benchmark code and relevant datasets and opened our proprietary model, Model 7-C, for trial and comparison3.

Footnote 3: [https://github.com/mtxresearch/MR-Models](https://github.com/mtxresearch/MR-Models)

## 2 Related work

There exists a wealth of English benchmarks for evaluating different capabilities of language models. EleutherAI's Language Model Evaluation Harness [1] is a unified framework to test generative language models on a large number of different evaluation tasks. Holistic Evaluation of Language Models (HELM) [Liang et al., 2022] is an evaluation framework that consists of evaluations in 42 scenarios. BIG-bench [BIG-bench authors, 2023] is a collaborative benchmark designed to examine large language models across diverse task topics ranging from linguistics and childhood development to software development and social bias. AGIEval [Zhong et al., 2023] is a benchmark tailored to assess models on human cognition and problem-solving, derived from 20 prominent admission and qualification exams including the Gaokao, SAT, law school admission tests, and civil service exams. These English benchmarks and the evaluations therein are commonly reported at the release of models such as BLOOM [Scao et al., 2022], Pythia [Biderman et al., 2023], Falcon [Penedo et al., 2023], Llama (1 [Touvron et al., 2023b] and 2 [Touvron et al., 2023a]), and their fine-tuned variants.

As for the notable open benchmarks in Traditional Chinese, at the time of this writing (mid-August 2023), we summarize them below. DRCD, a peer-reviewed reading comprehension dataset, contains 30k question-answer pairs based on Wikipedia articles. TTQA [Ennen et al., 2023], a trivia question-answering dataset (not peer-reviewed), consists of 64 expert-selected paragraphs from Wikipedia for testing a model's knowledge of Taiwanese-specific topics. Chinese Movie Dialogue Question Answering (CMDQA) [Luo et al., 2022], a dialogue-based information-seeking question-answering dataset, contains 10k QA dialogues (40k turns in total) about movie information parsed from Wikipedia. The Formosa Grand Challenge (FGC) dataset is a passage question answering dataset of 750 samples created from Taiwanese news articles and government announcements.

Language models have been shown to provide responses akin to human responses to open-ended questions. Open-ended questions, however, cannot easily be mapped one-to-one to a single answer. At the time of this writing, evaluation benchmarks using GPT-4 as the judge have been widely adopted by the community, despite GPT-4's tendency to favour longer texts and texts generated by LLMs [Lin and Chen, 2023a, Liu et al., 2023]. The Vicuna benchmark [Chiang et al., 2023] consists of 80 questions spanning 8 tasks. Similar to Vicuna, WizardLM [Xu et al., 2023] constructed a test set of 218 open-ended questions covering 29 areas such as writing, role-play, and philosophy. As for Traditional Chinese open-ended questions, a translated version of the Vicuna benchmark has been used to test Taiwan-LLaMa [Lin and Chen, 2023a,b]. In this study, we focus on benchmarks where ground truths are readily available.

## 3 Benchmark

Here we give a succinct introduction to each benchmark we use in this study.
We categorize the proposed set of benchmarks into capabilities. Table 1 lists the evaluation benchmarks used in this study and Appendix A shows some examples. As source datasets in Traditional Chinese are limited, we translated the listed English datasets to Traditional Chinese for the evaluation. \begin{table} \begin{tabular}{|c|c|c|} \hline **Capabilities** & **Evaluation Dataset** & **Source Language** \\ \hline Contextual QA & \begin{tabular}{c} DRCD [Shao et al., 2019] \\ FGC [STPT, 2020] \\ \end{tabular} & Traditional Chinese \\ \hline \multirow{2}{*}{World Knowledge} & TTQA [Ennen et al., 2023] & Traditional Chinese \\ & TMMLU (ours) & Traditional Chinese \\ \hline Summarization & XSum-TC [Narayan et al., 2018] & English \\ \hline Classification & IMDB-TC [Maas et al., 2011] & English \\ \hline Table Understanding & Penguins-in-a-Table-TC [BIG-bench authors, 2023] & English \\ \hline \end{tabular} \end{table} Table 1: The datasets and their respective nature for benchmarking capabilities in this study. We translated English datasets to Traditional Chinese for the evaluation, which is indicated by the “-TC” suffix. ### Capabilities Below are summaries of the benchmarked capabilities as listed in Table 1 and the corresponding datasets used in evaluating the respective capabilities in this study. **Contextual Question Answering** is the task in which a model is given a contextual input and is asked to respond to a given question related to the input. This task is most similar to standard benchmarks in closed QA or common sense reasoning. DRCD is a Traditional Chinese machine reading comprehension dataset containing 10,014 paragraphs from 2,108 Wikipedia articles and over 30,000 questions. The FGC dataset is a passage question answering dataset of 750 samples created from Taiwanese news articles and government announcements. **World Knowledge** tasks require a model to have a certain level of knowledge about the real world. TTQA assesses language models' common-sense abilities on Taiwanese terms, comprising 64 passages from Wikipedia about diverse Taiwanese cultural topics, necessitating model comprehension and reasoning. Taiwan Massive Multitask Language Understanding (TMMLU) is curated from examinations in Taiwan, consisting of 55 subjects spanning multiple disciplines, from vocational to academic fields, and covering elementary to professional proficiency levels. It is designed to identify a model's knowledge and problem-solving blind spots in a manner similar to human examinations. See Appendix B for the list of subjects. **Summarization** tasks require a model to summarize a given passage in an abstract manner. The Extreme Summarization (XSum) dataset evaluates abstractive summarization with 226,711 BBC news articles across diverse domains, aiming for one-sentence summaries. **Classification** tasks request a model to determine the category of a given input text, as in sentiment analysis and natural language inference. The IMDB dataset offers binary sentiment classification with 25,000 polar movie reviews each for training and testing sentiment classifiers. **Table Understanding** tasks evaluate a model's capacity to construct an accurate depiction of the data presented to it in both tabular and natural language formats, and its ability to identify and retrieve the pertinent details required to address a straightforward query. 
The "penguins in a table" task contained in BIG-bench asks a language model to answer questions about the animals contained in a table, or multiple tables, described in the context. To assess the capability of the models, we adopt metrics from academic benchmarks like HELM. For evaluations in Contextual QA, World Knowledge, Classification, and Table Understanding, we report prefix exact match (EM) scores. For Summarization, ROUGE-2 is reported. ### Helpfulness To assess language models' ability to provide helpful answers to open-ended questions, we use TAIDE-14 [TAIDE, 2023]. TAIDE-14 consists of 14 different text generation tasks covering 50 topics and includes a total of 140 prompts specifically designed to evaluate Traditional Chinese LLMs. These prompts were created by GPT-4 using the provided task, domain, and keywords, and were further validated by human experts. The 14 task types are the following: open-ended generation, classification, question answering, summarization, writing, translation, text analysis, commonsense reasoning, letter writing, extraction, recommendation, sentiment analysis, providing suggestions, and dialogue generation. ## 4 Results ### Models Compared In this study, we analyze the performance of four models: GPT-3.5, Taiwan-LLaMa-v1.0, Model 7-C, and Model 7-C-Chat. The version of GPT-3.5 utilized for this comparison is a snapshot titled GPT-3.5-Turbo-0613, dated June 13, 2023. Taiwan-LLaMa-v1.0, on the other hand, is a refined version of the Llama 2 model, configured for Traditional Chinese. It has been pre-trained on a dataset encompassing over 5 billion tokens and further fine-tuned using a rich set of more than 490,000 instruction-response samples. ### Capabilities Benchmark Results Table 2 illustrates the comparative performance of the models on the designated datasets. We carry out all evaluations zero-shot and use greedy decoding for a fair comparison. It is evident that GPT-3.5 predominantly surpasses the other models in benchmark tests spanning all assessed capabilities. Both Taiwan-LLaMa-v1.0 and Model 7-C manage to approximate GPT-3.5's performance in limited instances, exhibiting less than a 5% discrepancy in certain benchmarks. Specifically, Taiwan-LLaMa-v1.0 shows comparable performance on the IMDB-TC benchmark, whereas Model 7-C is comparable on the DRCD and XSum-TC benchmarks. However, the table understanding task reveals a discernible deficiency in both Taiwan-LLaMa-v1.0 and Model 7-C, with frequent hallucinations evident in numerous samples. Moreover, the summarization task yielded suboptimal results: even though the models were instructed to condense the context into a single concise sentence, they universally obtained low ROUGE-2 scores. This underperformance manifested as over-long summaries from GPT-3.5 and Model 7-C, and occasionally missing summaries from Taiwan-LLaMa-v1.0. We also inspected the XSum dataset itself and found reference summaries incorporating elements not present in the original documents, a potential cause of the low scores observed on this task. 
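For reference, the prefix exact-match scores reported in Table 2 can be computed with a very small routine. A minimal sketch follows; the exact string normalization is our assumption and the benchmark code may differ:

```python
def prefix_exact_match(prediction: str, reference: str) -> float:
    """Score 1.0 if the stripped model output starts with the reference.

    Generative models often append explanations after the answer, so a
    prefix match avoids penalizing otherwise correct responses.
    """
    return 1.0 if prediction.strip().startswith(reference.strip()) else 0.0


def corpus_prefix_em(predictions: list[str], references: list[str]) -> float:
    """Average prefix EM over a dataset."""
    scores = [prefix_exact_match(p, r) for p, r in zip(predictions, references)]
    return sum(scores) / len(scores)
```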
\begin{table} \begin{tabular}{|c|c|c c c c|} \hline **Capability tested** & **Dataset (metric)** & \multicolumn{4}{c|}{**Models**} \\ \cline{3-6} & & GPT-3.5 & Taiwan-LLaMa-v1.0 & Model 7-C & Model 7-C-Chat \\ \hline Contextual QA & DRCD (EM) & 0.771 & 0.719 & 0.761 & – \\ & FGC (EM) & 0.48 & 0.33 & 0.41 & – \\ \hline World Knowledge & TTQA (EM) & 1.00 & 0.59 & 0.86 & 0.56 \\ & TMMLU (EM) & 0.515 & 0.307 & 0.391 & – \\ \hline Summarization & XSum-TC (ROUGE-2) & 0.032 & 0.001 & 0.035 & – \\ \hline Classification & IMDB-TC (EM) & 0.941 & 0.929 & 0.842 & 0.916 \\ \hline Table Understanding & Penguins-in-a-Table-TC (EM) & 0.32 & 0.00 & 0.01 & 0.08 \\ \hline \end{tabular} \end{table} Table 2: The benchmark results of the models. Dashes denote entries that were not reported. ### Helpfulness Benchmark Results We present the win-rate chart to demonstrate the helpfulness of the language models on TAIDE-14 tasks, judged by GPT-4 on a scale of 1 to 6. Our proprietary model fine-tuned for chatting capability, Model 7-C-Chat, outperforms GPT-3.5 on 19, and matches GPT-3.5 on 53, of the 140 test samples. However, on TAIDE-14, Taiwan-LLaMa-v1.0 shows better capability than our Model 7-C series. See Figure 1 for reference. ## 5 Conclusion In conclusion, the evaluation of large language models, particularly in the context of Traditional Chinese, is a critical and challenging task. This study proposes a comprehensive set of benchmarks, built upon existing Traditional Chinese and English datasets, to assess the capabilities of these models across various tasks. The evaluation of models such as GPT-3.5, Taiwan-LLaMa-v1.0, and our proprietary model, Model 7-C, demonstrated the effectiveness of these benchmarks. Notably, Model 7-C showed performance comparable to the state-of-the-art GPT-3.5 model on certain of the evaluated Traditional Chinese tasks. The introduction of these benchmarks is a significant step towards advancing the evaluation of language models in Traditional Chinese. By making our benchmark code and relevant datasets open-source, and releasing our base model, Model 7-C, for trial, we aim to stimulate further research in this field. We believe that these resources will provide a valuable foundation for future studies aiming to improve the capabilities of language models in Traditional Chinese.
2309.09756
Privileged to Predicted: Towards Sensorimotor Reinforcement Learning for Urban Driving
Reinforcement Learning (RL) has the potential to surpass human performance in driving without needing any expert supervision. Despite its promise, the state-of-the-art in sensorimotor self-driving is dominated by imitation learning methods due to the inherent shortcomings of RL algorithms. Nonetheless, RL agents are able to discover highly successful policies when provided with privileged ground truth representations of the environment. In this work, we investigate what separates privileged RL agents from sensorimotor agents for urban driving in order to bridge the gap between the two. We propose vision-based deep learning models to approximate the privileged representations from sensor data. In particular, we identify aspects of state representation that are crucial for the success of the RL agent such as desired route generation and stop zone prediction, and propose solutions to gradually develop less privileged RL agents. We also observe that bird's-eye-view models trained on offline datasets do not generalize to online RL training due to distribution mismatch. Through rigorous evaluation on the CARLA simulation environment, we shed light on the significance of the state representations in RL for autonomous driving and point to unresolved challenges for future research.
Ege Onat Özsüer, Barış Akgün, Fatma Güney
2023-09-18T13:34:41Z
http://arxiv.org/abs/2309.09756v1
# Privileged to Predicted: Towards Sensorimotor Reinforcement Learning for Urban Driving ###### Abstract Reinforcement Learning (RL) has the potential to surpass human performance in driving without needing any expert supervision. Despite its promise, the state-of-the-art in sensorimotor self-driving is dominated by imitation learning methods due to the inherent shortcomings of RL algorithms. Nonetheless, RL agents are able to discover highly successful policies when provided with privileged ground truth representations of the environment. In this work, we investigate what separates privileged RL agents from sensorimotor agents for urban driving in order to bridge the gap between the two. We propose vision-based deep learning models to approximate the privileged representations from sensor data. In particular, we identify aspects of state representation that are crucial for the success of the RL agent such as desired route generation and stop zone prediction, and propose solutions to gradually develop less privileged RL agents. We also observe that bird's-eye-view models trained on offline datasets do not generalize to online RL training due to distribution mismatch. Through rigorous evaluation on the CARLA simulation environment, we shed light on the significance of the state representations in RL for autonomous driving and point to unresolved challenges for future research. ## I Introduction The effort involved in engineering autonomous driving (AD) systems is immense, prone to failure, and has not yielded a full AD agent yet. As a result, the popularity of learning-based methods is on the rise. Learning from experts, also known as Behavior Cloning (BC), has shown success in simulated environments with carefully designed components. However, these methods are limited by the need for expert-quality supervision and suffer from the distribution shift problem, making real-world deployment problematic. Reinforcement Learning (RL) offers a promising alternative by utilizing existing data and/or environment interaction to correct errors, improve sub-par behaviors, and reinforce good ones, and potentially surpass human expert performance. However, RL has fallen short of its promises in self-driving, consistently trailing BC approaches in benchmark tests like the CARLA AD Challenge. In this paper, we investigate the reasons for this disparity and propose potential solutions, with the aim of unlocking the potential of RL for self-driving. There are RL agents that achieve impressive driving performance in simulation but with the significant caveat of using _privileged_ information. This includes any ground truth information relevant to driving. State representations obtained from such information significantly simplify the learning task. Chen et al. [1], in their seminal work, use these "expert" BC agents to supervise a "student" agent which uses only sensorimotor input. This paradigm has led to the development of the RL agent ROACH [2]. ROACH attains impressive driving performance on CARLA, surpassing the performance of its contemporary BC agents, even those that benefit from privileged information (see Table I). This underscores the potential of RL, albeit with privileged information. Given the success of these privileged RL agents, a natural follow-up question arises: can we construct high-quality state representations without relying on such privileged information? In this paper, we delve into this question by analyzing the factors contributing to the success of privileged RL agents, using ROACH as a case study. 
ROACH adopts a bird's eye view (BEV) state representation, where critical elements like roads, lane lines, the desired route, stop-zones, other vehicles, and pedestrians are encoded as binary images in separate channels. Using BEV prediction methods, such as LSS [3] or SimpleBEV [4], to eliminate the dependence on privileged information fails due to poor BEV prediction performance. The primary culprits of this failure are class imbalance between BEV entities and distribution discrepancies between BEV training data and RL training data. More involved BEV predictors, such as BEVFormer [5], prohibit RL training due to space and computational requirements. In this paper, we perform a fine-grained analysis of ROACH's privileged state representation, dissecting its individual components to understand their contributions. Our goal is to discover methods to mitigate the reliance on privileged information, ultimately paving the way for a fully unprivileged RL AD agent. An interesting finding of our analysis is the significance of the desired route component. The desired route component has not found application beyond ROACH, and its prediction remains unexplored in the realm of learning-based AD. We introduce a middle-ground approach to predict the desired route from the BEV input. Furthermore, we demonstrate that a smaller BEV predictor, focusing only on the roads and the lane lines, can be trained to sufficient performance. Lastly, we integrate an unprivileged traffic light predictor, completely replacing the privileged stop-zone input. Our overall idea is depicted in Fig. 1. Our evaluations, conducted on the CARLA simulator, highlight that harnessing purpose-built predictors is a viable path forward for constructing a fully unprivileged state representation. Fig. 1: **Privileged vs. Sensorimotor Agents.** State representations can make or break RL agents, which is all too real for autonomous driving. Our aim is to investigate the BEV representations of a successful privileged agent, propose ways to reduce privileged information with learned vision models, and discuss potential ways toward sensorimotor RL agents. ## II Related Work BC methods have made significant strides in AD since the inception of the CARLA simulator [8], and Chen et al. [9] present a detailed overview of the field. However, BC methods suffer from the distribution shift problem, which causes the learned policy to make mistakes as it diverges from the states present in the expert demonstrations due to compounding errors. In order to circumvent this, Chen et al. [1] use ground truth BEV semantic segmentation maps as input to train an expert agent, which can provide supervision to a sensorimotor agent as it is deployed in the simulation. The availability of expert supervision for on-policy data allows the usage of data aggregation techniques like DAGGER [10] to mitigate the distribution shift. Another approach, called Learning from All Vehicles (LAV) [11], achieves higher performance by using every agent in the scene as a source of supervision. An important aspect of autonomous driving is input representation. AD agents utilize multiple sensor inputs such as RGB images, LIDAR point clouds, GNSS coordinates, and IMU readings. Which inputs to use and how to combine them are important engineering decisions. One possibility for processing model inputs is using intermediate representations. 
Intermediate representations are inputs for an autonomous driving system, usually generated by processing the sensor inputs with a deep learning model or by accessing simulator variables. The expert model of Learning by Cheating (LbC) [1] is an example where a BEV semantic segmentation map is used as an intermediate representation. Behl et al. [12] investigate intermediate representations for autonomous driving, focusing on semantic segmentation and analyzing the task-relevant object classes, showing that a good intermediate representation can play a crucial role in driving performance. Bird's eye view (BEV) as an intermediate representation is closely related to our work. Among the state-of-the-art models, Lift-Splat-Shoot (LSS) [3] uses a depth prediction module and projects encoded features from the camera space to the BEV space based on this depth estimation. The success of this method depends on successfully predicting the depth values of the RGB image. The more recent BEVFormer [5] instead utilizes deformable attention to learn a mapping between image features and the BEV grid. BEVFormer achieves great performance in both semantic segmentation and object detection tasks. However, its heavy transformer-based architecture combined with the learned projection method results in a very large model that is difficult to use with RL. Finally, SimpleBEV [4] presents a more efficient approach with similar performance by using a parameter-free bilinear interpolation between RGB and BEV space. We utilize SimpleBEV for RL training, due to its favorable trade-off between performance and computational efficiency. The first deep RL method on CARLA, proposed as a baseline in the CARLA challenge [8], uses the A3C [13] algorithm with discrete actions [14] but falls short of BC methods. Liang et al. [15] present one of the first RL approaches that achieved impressive performance by using DDPG [16] with continuous actions, where the policy network is initialized from an imitation learning agent. Toromanoff et al. [17] first train a network to predict affordances related to the environment along with semantic segmentation maps. They then freeze this network and use its bottleneck features to train an RL agent using the Rainbow-IQN algorithm [18]. Chen et al. [7] follow a model-based approach by factorizing the driving state and the world model. The world is assumed to be independent of the agent's actions, which allows training on pre-recorded driving logs. Despite this limiting assumption, which almost never holds, it outperforms model-free RL methods. However, all of these are outperformed by BC methods. Privileged agents outperforming sensorimotor agents is expected, but surprisingly the gap is larger in the case of RL, leading to the conclusion that BC agents cannot utilize the same privileged data as effectively. We examine this by comparing the performance of four contemporary methods1 on CARLA. Despite being the best-performing sensorimotor RL method at the time of writing, World on Rails (WoR) [7] is the least performant of the four. On the other hand, the vision-based NEAT [6] performs on par with the privileged Learning by Cheating (LbC) [1], both BC methods. Finally, the privileged RL method ROACH [2] outperforms the rest. This implies that at least one major problem with sensorimotor RL agents lies in their noisy, high-dimensional state representations. 
As such, we investigate the privileged state space components of ROACH and how they could be replaced with sensor-based approaches to improve sensorimotor RL agent performance. ## III Toward Unprivileged RL for Self-Driving The privileged information is critical to the success of the RL agent proposed in ROACH [2]. Our goal is to understand the reasons behind the success of the privileged RL agent and approximate its performance with an unprivileged sensorimotor agent. In particular, we separately focus on various components of the state representation. While some channels can be predicted from RGB only, others like the desired route require additional information such as GNSS and IMU readings. Predicting the location of stop zones without accessing the simulator's internal representation presents additional challenges. This section presents our method to address the challenges associated with replacing privileged information in ROACH's BEV representation toward developing an unprivileged RL agent for driving. ### _State Space of ROACH_ The state representation of ROACH consists of two components: the BEV representation and the measurement vector. The BEV representation consists of binary BEV segmentation maps of road topology, desired route, objects, and stop zones, as shown in Fig. 2. The measurement vector is a vector of scalar measurements including the steering, throttle, brake, vehicle gear, and lateral and horizontal speed values observed in the last time step. These measurements are relatively easy to obtain and are not considered privileged. The BEV masks cover a square area with sides of 38.4 meters, where the ego-vehicle is horizontally centered and positioned 8 meters above the bottom of the map area. The BEV masks are rendered at \(192\times 192\) resolution and contain 15 channels. There is one channel each for roads, the "desired route", and lane lines. The desired route is the fine-grained path that the agent should follow to reach the next target waypoint. There are 4 temporally stacked channels each for vehicles, pedestrians, and "stop zones". The stop zones are rectangular regions where a vehicle should not move due to a traffic light or stop sign that controls the corresponding region. When a traffic light for a stop zone turns red, the corresponding area is rendered white in BEV. The information in the BEV masks is calculated by accessing the simulation internals and is therefore considered privileged. ### _BEV Perception_ For BEV segmentation, we take SimpleBEV [4], which is designed and optimized for the real-world NuScenes [19] dataset, and adapt it to CARLA. The dominant paradigm in BEV segmentation, which is also followed by SimpleBEV, is to train a separate model for each class to segment. While training and using separate models for each class improves performance, it is infeasible to perform multiple forward passes for a single state representation in the RL loop. Therefore, we modify the SimpleBEV architecture to simultaneously predict multiple binary segmentation masks corresponding to each class. To that end, we simply increase the output dimension of the final 1D convolution layer of the segmentation head to match the number of classes. Instead of multiple classes competing under a cross-entropy loss, we use a binary cross-entropy loss for training. This is necessary as a cell on the BEV grid does not necessarily belong to a single class; instead, multiple channels can be active together, for example, a vehicle positioned on the road. 
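A minimal sketch of this modification — widening the final convolution to emit one binary logit map per class and training with a binary cross-entropy loss — might look as follows (module and channel names are illustrative, not SimpleBEV's actual identifiers):

```python
import torch
import torch.nn as nn

class MultiMaskHead(nn.Module):
    """Final layer widened to emit one binary logit map per class.

    Each output channel is an independent binary mask, so a BEV cell can
    be active in several channels at once (e.g. a vehicle on the road),
    which a softmax over competing classes could not express.
    """

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, bev_features: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, num_classes, H, W) logits
        return self.conv(bev_features)

# Training uses an element-wise binary cross-entropy over all channels:
# loss = nn.BCEWithLogitsLoss()(logits, binary_target_masks)
```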
We also apply positive weighting when predicting classes that cover a very small portion of the grid and are therefore dominated by negative samples, such as pedestrians or lane lines. We found this weighting to be critical in our experiments. While the desired route and the traffic light could, in principle, be predicted as additional channels in the output of SimpleBEV, we found this to be infeasible in our experiments. We suspect that there are two reasons for this. First, the desired route prediction needs to consider the relative positions of future waypoint coordinates. Second, the stop zones are spatially distant from the traffic lights, both in the image and the BEV spaces. This is especially true for US-style far-side traffic lights. Given the limited receptive field sizes, it is challenging to relate a small red light in the RGB image to a distant rectangular area on the BEV grid. Therefore, we propose specialized solutions for these two. Fig. 3: **Potential Representations of Desired Route Information.** ROACH uses privileged information to create the desired route. We consider an unprivileged alternative target heatmap representation and also propose a model to predict the desired route from road topology and target waypoints. Fig. 2: **State Representation of ROACH.** ROACH uses a state representation where driving information regarding road topology, desired route, objects, and stop zones is stored in separate binary channels. ### _Desired Route Generation_ The desired route is the shortest path towards the future GNSS coordinates of target waypoints, calculated with access to the internal road topology representation of CARLA. If access to the simulator internals is out of the question, as in the case of sensorimotor agents, generating this path becomes a novel problem. To the best of our knowledge, the critical importance of the desired route component has not been studied before. In this work, we show its importance for driving and propose an approach to generate the desired route from BEV. To predict the desired route, we propose a model that takes information regarding the surrounding road topology and the target waypoints as input. For the first input, we assume that we have access to the road and lane line masks of the surrounding area in BEV. For a completely unprivileged agent, the road topology would also need to be predicted from raw RGB images, but we leave this as future work. For the second input, each agent on CARLA is provided with target waypoints to follow in the form of a list of GNSS coordinates, where each coordinate is a vector containing longitude, latitude, and altitude. Following convention, we first convert these GNSS coordinates into relative coordinates with respect to the ego vehicle's frame of reference using the GNSS and IMU sensors on the vehicle. After converting the coordinates to relative target vectors, we use the next five waypoints as input to the model. We encode these two sources of information with separate encoders and fuse the encoded features with a cross-attention layer, as shown in Fig. 4. We encode the road and lane line masks using convolutional layers, tokenize the resulting feature map, and apply self-attention, following vision transformers [20]. We encode the target waypoints using a simple MLP and use the encoded waypoints as the query in the cross-attention layer to generate a BEV representation of the desired route conditioned on the target waypoints to follow. 
Finally, we reshape the tokenized BEV representation to create a two-dimensional feature map and process it with a segmentation head to predict the desired route mask. We train the model using binary cross-entropy loss. Fig. 4: **Desired Route Generation.** We propose a model to predict the desired route in BEV given road topology in the form of road and line masks and five target waypoints to follow. ### _Stop Zone Prediction_ The final component we replace in the privileged BEV representation is the stop zones due to traffic lights. While object detection methods can be used to detect the presence and status of traffic lights in the scene, what matters for driving is the road region that is affected by the traffic light, the so-called _stop zone_, rather than the position of the traffic light in the image. Moreover, not every detected traffic light affects the ego-vehicle, as there might be traffic lights signaling other vehicles that are visible from the viewpoint of the ego-vehicle. We propose to predict whether the ego vehicle is currently inside an active stop zone or not. Rather than learning to segment the stop zones in BEV, we predict a binary variable to inform the agent when it must stop for a red light. To that end, we use a simple off-the-shelf EfficientNet-B0 [21] with a binary classification head. The model takes a single RGB input from a frontal camera. For training, we use a binary cross-entropy loss with a positive weighting to address the scarcity of red light examples in the collected data. During rollouts in the simulator, we apply a threshold of 0.4 to the model's predictions. We observe that this simple approach can effectively predict whether the vehicle is currently affected by a red light for both the European and US-style traffic lights. ## IV Evaluation ### _Experimental Setup_ **Camera Setup:** Following state-of-the-art BEV methods on CARLA such as Transfuser [22] and NEAT [6], we use 3 cameras and restrict the BEV prediction to the area ahead of the vehicle. The cameras are positioned on the front of the vehicle, one of them facing the front and two of them facing 60 degrees to the left and right. Each camera has a field of view of 100 degrees and generates images of resolution \(320\times 160\). Our predicted BEV maps cover the same area as ROACH [2]. **Data Collection:** We collect 20 hours of driving data at 10 Hertz using the CARLA autopilot in the ROACH RL environment. We collect data from towns 1 through 6 of the CARLA simulator, reserving 10% of the data for validation. We apply triangular perturbations to the expert agent's actions in order to reduce the effects of distribution shift, following the conventional approach to augmenting expert trajectories first proposed by Codevilla et al. [23]. The weather, time of day, and lighting conditions are randomly selected and change dynamically during episodes. We use the same dataset for BEV perception, desired route generation, and stop zone prediction. **Training Details:** We train BEV perception for 50,000 steps using the 1-cycle learning rate scheduler proposed by Smith et al. [24]. We apply a positive weighting factor of 15 for pedestrians, 8 for lane lines, and 5 for vehicles. We train the desired route generation model for 40 epochs, using binary cross-entropy loss with the Adam [25] optimizer and a learning rate of 0.001. We train the stop zone prediction model similarly using binary cross-entropy loss and Adam, but for 10 epochs with a learning rate of 0.0001. 
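To make the stop-zone predictor concrete, a sketch of the setup described above — EfficientNet-B0 with a single-logit head, a positively weighted binary cross-entropy loss, and the 0.4 rollout threshold — could read as follows (the positive weight value is a placeholder; the paper does not report it):

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

# EfficientNet-B0 backbone with a single-logit binary head.
model = efficientnet_b0(weights=None)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)

# Positive weighting counters the scarcity of red-light frames; the
# value 10.0 is a placeholder, not the weight used in the paper.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([10.0]))

@torch.no_grad()
def in_active_stop_zone(rgb: torch.Tensor, threshold: float = 0.4) -> torch.Tensor:
    """rgb: (B, 3, H, W) frontal-camera batch; returns (B,) booleans."""
    return torch.sigmoid(model(rgb)).squeeze(1) > threshold
```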
**Details of RL Agents:** For all our experiments where an RL agent is trained, we use the same environment and the same training scheme as the RL expert of ROACH. We leave the deep learning architecture of the RL agent untouched except for the initial layers, which accommodate different input sizes. As RL agents can have considerable variations in performance depending on the random seed, we report the mean and standard deviation over 3 runs. ### _Experimental Results_ #### IV-B1 BEV Perception Results With the modified SimpleBEV, our initial goal was to segment the road, lane line, vehicle, and pedestrian classes. We report the results for each of these 4 classes in terms of intersection-over-union (IoU) in the first row (All) of Table II. These results show that while the model can learn to segment the frequent classes such as roads and vehicles, it cannot segment less frequent classes such as pedestrians and lane lines. This reveals a shortcoming of state-of-the-art BEV perception models in the face of severe data imbalance that cannot be fixed with a simple positive weighting strategy. Considering the vast literature on object detection in computer vision, we conjecture that specialized architectures could be deployed for object detection. Omitting the vehicles and pedestrians, we train SimpleBEV to segment only the static parts of the scene, i.e. roads and lane lines. As can be seen in the second row (Static) of Table II, this significantly improves the performance on both classes, especially the lane lines. #### IV-B2 Driving with Predicted BEV We test the performance of a privileged RL agent by replacing the ground truth road and lane line segmentations with predicted ones from BEV perception. For everything else, we use ground truth information, including objects, the desired route, and stop zones. We report the mean and the standard deviation over three runs in Table III. Without requiring any fine-tuning, the results are surprisingly close to those of the privileged agent. **Discussion:** We qualitatively inspected the driving behavior along with the predicted BEV visualizations and acquired two critical insights. First, while BEV perception models can reach very high IoU scores on static datasets, such as our generated dataset or NuScenes [19], that performance does not always translate to successful predictions during online RL training. As the RL agent explores, it ends up visiting previously unseen state-action pairs, especially in the initial phases of training, prior to achieving successful driving. Unseen states cause our BEV prediction model to fail much more severely than it did on our static validation set. This problem persists even when we apply data augmentation techniques that are commonly used in BEV segmentation. We believe that future work on BEV perception for autonomous driving should consider driving performance with online metrics in addition to segmentation metrics, as the two are rarely correlated [26]. The second critical insight that we acquired is related to our motivation to focus on the desired route. We realized that the agent is able to drive even when the predictions for the road and lane lines are of critically low quality. We suspect that this is caused by the agent depending on the desired route channel while ignoring the road and lane line channels. Next, we investigate this claim and its repercussions. 
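Before turning to the desired-route experiments, we note for reference how IoU numbers like those in Table II can be computed; a minimal sketch (the 0.5 threshold is our choice):

```python
import torch

def binary_iou(pred_logits: torch.Tensor, target: torch.Tensor,
               threshold: float = 0.5, eps: float = 1e-7) -> float:
    """IoU for a single class over a batch of BEV grids.

    pred_logits, target: tensors of shape (B, H, W); target holds 0/1 masks.
    """
    pred = torch.sigmoid(pred_logits) > threshold
    target = target.bool()
    intersection = (pred & target).sum().item()
    union = (pred | target).sum().item()
    return intersection / (union + eps)
```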
#### IV-B3 Importance of Desired Route We perform an experiment to confirm the dependency of the ROACH expert on the desired route component while it ignores road and lane line information. We first train an RL agent by removing the road and lane line channels from the input. This agent needs to learn to navigate by using only the desired route channel for road information. Second, we replace the desired route channel with a less informative but unprivileged alternative. We project the GNSS coordinates of the next target waypoint of the current route trajectory and render it as a heatmap of the target location. We train another RL agent with this heatmap representation instead of the privileged route. We compare the driving performance of these two agents to the privileged agent in Table IV. These results confirm our suspicion regarding the importance of the desired route, as the agent learns faster and achieves higher driving performance with less variance when it only sees the desired route (w/o Static). Moreover, we see that a less privileged route representation (w/ Target Heatmap) causes the agent to fail despite ground truth information for everything else. #### IV-B4 Route Prediction Results We train an RL agent by replacing the desired route channel with the predicted route. We found that the model requires longer training (20M vs. 10M steps in ROACH) with the predicted route due to the prediction errors. The driving results are shown in the last row of Table IV. Due to the longer training, we cannot report the results over 3 runs. While there is a drop in performance compared to the privileged agent, the agent can learn a meaningful driving behavior with the predicted route. This is an important step towards developing RL agents that can learn to drive with less privileged information. #### IV-B5 Stop Zone Prediction Results We explore various ways of incorporating the binary stop zone prediction into the input. We first remove the stop zone channel from the BEV input and add a binary variable to the measurements. With this straightforward approach, the agent performs poorly and fails to achieve a driving score over 0.05. We suspect that the low-dimensional measurement vector is ignored by the agent, which relies on the BEV representation for the most part. To test this, we replace the ground truth stop zone channel with a binary channel, which is set to all ones when the agent is affected by a red light. The experimental results are shown in Table V. We first test this new representation by using ground truth information from the simulator to determine if the agent is affected by a red light (GT Binary). We see that replacing the stop zone channel with a ground truth binary channel results in only a modest drop in performance, showing that the new representation is safe. The agent that uses predictions to fill the binary traffic light channel (Predicted Binary) still manages to achieve a good driving performance, albeit with a drop compared to the expert. ## V Discussion and Future Work In this paper, we investigated the reasons behind the success of the privileged RL agent ROACH and addressed the challenges of replicating that success with a sensorimotor agent to bridge the gap between the two. We first adapted a state-of-the-art BEV perception model, SimpleBEV, to efficiently output multiple classes on CARLA. Our evaluation showed impressive results for the static parts of the scene but failures for small objects like pedestrians. 
However, we observed that the successful validation performance on the static parts did not generalize to the out-of-distribution observations encountered in online RL training, yet the agent was successful. Further investigation revealed a more important factor in the success of the RL agent: the desired route component. We then proposed two alternatives to replace the privileged desired route information. The heatmap representation failed, but predicting the desired route from road topology and waypoints showed promising results. We hope that our initial investigation in this paper, related to desired route prediction, will lead to future research on better predictions of the desired route, ideally from raw images. This way, we can provide the agent with a better understanding of the path to follow. Without the need to plan a short-term path, the agent can focus on solving other aspects of driving such as avoiding infractions. We also investigated whether a privileged representation of the "stop zone" areas affected by traffic lights is necessary for the success of RL agents. We were able to replace the privileged stop zone region channel with a binary traffic light detector. Incorporating the output of this detector directly as another measurement did not work. However, providing it as an entire channel in the BEV state, either all zeros or all ones, resulted in close-to-expert driving performance. Our investigation led to an unprivileged representation of the stop zone that is still acceptable for driving. We did not investigate vehicle- and pedestrian-related information and kept them privileged throughout our work. Predicting pedestrians in the BEV space remains an open challenge. As an alternative, object detection methods can be used to detect these objects, which can then be projected into the BEV space. Another direction is to improve the computational efficiency of larger BEV methods like BEVFormer and incorporate them in RL. On the other hand, even when these problems are solved on static datasets, offline task metrics do not directly translate to good driving behavior for the agent. There is a significant mismatch in the observed states between a driving dataset like NuScenes, where accidents or wild maneuvers are rare, and an online RL agent exploring different state-action pairs in the environment. This results in BEV predictions failing when the RL agent strays from paths available in the training dataset during policy rollouts. A challenging benchmark to test the robustness of BEV perception methods under unusual configurations could encourage the community to work on this misalignment issue. Although RL agents fall behind behavior cloning agents on CARLA, we argue that there are still discoveries to be made. Even though our work does not present a state-of-the-art sensorimotor RL agent, it investigates why such an agent has not been discovered yet. We put forth the importance of desired route prediction for RL. We also pinpoint the needs of autonomous driving agents from BEV perception and highlight areas that need improvement. Computational efficiency constitutes a bottleneck in benefiting from the latest developments in computer vision. The distribution differences between offline BEV datasets and online RL training present another challenge. Overall, our results highlight the critical role of efficient, informative, and accurate state representations in handling complex driving environments. 
We argue that such representations hold the key to the discovery of successful sensorimotor RL agents for autonomous driving. ## Acknowledgements Ege Onat Ozsuer is supported by the KUIS AI Center Fellowship.
2310.00177
A Neural-preconditioned Poisson Solver for Mixed Dirichlet and Neumann Boundary Conditions
We introduce a neural-preconditioned iterative solver for Poisson equations with mixed boundary conditions. Typical Poisson discretizations yield large, ill-conditioned linear systems. Iterative solvers can be effective for these problems, but only when equipped with powerful preconditioners. Unfortunately, effective preconditioners like multigrid require costly setup phases that must be re-executed every time domain shapes or boundary conditions change, forming a severe bottleneck for problems with evolving boundaries. In contrast, we present a neural preconditioner trained to efficiently approximate the inverse of the discrete Laplacian in the presence of such changes. Our approach generalizes to domain shapes, boundary conditions, and grid sizes outside the training set. The key to our preconditioner's success is a novel, lightweight neural network architecture featuring spatially varying convolution kernels and supporting fast inference. We demonstrate that our solver outperforms state-of-the-art methods like algebraic multigrid as well as recently proposed neural preconditioners on challenging test cases arising from incompressible fluid simulations.
Kai Weixian Lan, Elias Gueidon, Ayano Kaneda, Julian Panetta, Joseph Teran
2023-09-29T22:49:47Z
http://arxiv.org/abs/2310.00177v5
# A Neural-preconditioned Poisson Solver for Mixed Dirichlet and Neumann Boundary Conditions ###### Abstract We introduce a neural-preconditioned iterative solver for Poisson equations with mixed boundary conditions. The Poisson equation is ubiquitous in scientific computing: it governs a wide array of physical phenomena, arises as a subproblem in many numerical algorithms, and serves as a model problem for the broader class of elliptic PDEs. The most popular Poisson discretizations yield large sparse linear systems. At high resolution, and for performance-critical applications, iterative solvers can be advantageous for these--but only when paired with powerful preconditioners. The core of our solver is a neural network trained to approximate the inverse of a discrete structured-grid Laplace operator for a domain of arbitrary shape and with mixed boundary conditions. The structure of this problem motivates a novel network architecture that we demonstrate is highly effective as a preconditioner even for boundary conditions outside the training set. We show that on challenging test cases arising from an incompressible fluid simulation, our method outperforms state-of-the-art solvers like algebraic multigrid as well as some recent neural preconditioners. ## 1 Introduction The solution of linear systems of equations involving discrete Laplace operators is the bottleneck in many engineering and scientific applications. These large, symmetric positive definite and sparse systems of equations are notoriously ill-conditioned. Fast Fourier Transforms (Cooley & Tukey, 1965) are optimal for these problems when discretized over trivial geometric domains, however they are not applicable to practical domain shapes. Direct methods like Cholesky factorization (Golub & Van Loan, 2012) resolve conditioning issues, but suffer from loss of sparsity/fill-in and are prohibitively costly in practice when per-time-step refactoring is necessary (_e.g._, with changing domain shape). Iterative methods like preconditioned conjugate gradient (PCG) (Saad, 2003) and multigrid (Brandt, 1977) can achieve good performance, however an optimal preconditioning strategy is not generally available, and though multigrid can guarantee modest iteration counts, the computational overhead associated with solver creation and other per-iteration costs can dominate runtimes in practice. Unfortunately, there is no clear algorithmic solution. Recently, machine learning techniques have shown promise for these problems. Tompson et al. (2017) showed that a network (FluidNet) can be used to generate an approximate inverse across domain shapes, albeit only with Neumann boundary conditions. Kaneda et al. (2023) developed DCDM, which improves on this approach by using a similar network structure and an iterative technique where gradient descent in the matrix-norm of the error is preconditioned with a neural network. While their approach is similar to PCG, the nonlinearity of their approximate inverse required a generalization of the PCG method, which proved effective. We build on this approach and generalize it to domains with mixed Dirichlet and Neumann boundary conditions. Notably, these problems arise in simulating free-surface liquid flows. The DCDM approach cannot handle these cases; however, we show that a novel, more lightweight network structure can be used in DCDM's iterative formalism that is both linear and capable of handling mixed boundary conditions over time-varying fluid domains. 
Furthermore, we show that this structure drastically improves performance over that in DCDM. We design our network structure to represent the dense nature of the inverse of a discrete Laplacian matrix. That is, the inverse matrix for a discrete Laplace operator has the property that local perturbations anywhere in the domain have non-negligible effects at all other points in the domain. Our network structure uses a hierarchy of grid scales to improve the resolution of this behavior over what is possible with the DCDM structure. In effect, the process of transferring information across the hierarchy from fine grid to increasingly coarse grids and back again facilitates rapid propagation of information across the domain. This structure has similarities with multigrid, however there are some important differences. We incorporate the effects of the Dirichlet and Neumann conditions at irregular boundaries with a novel convolution design. Specifically, we use stencils that learn spatially varying weights based on a voxel's proximity to the boundary and the boundary condition types encoded there. Although our approximate inverses are linear (unlike the DCDM preconditioner), we still adopt the DCDM iterative formalism. We do this because we cannot guarantee that our neural network produces a symmetric and positive definite approximate inverse as required for standard PCG. It is possible to use a flexible PCG technique (Golub & Ye, 1999) in this case though (as in (Bouwmeester et al., 2015)), however we show that the matrix-orthogonal gradient descent iteration in DCDM provides superior results. We show that our network outperforms state-of-the-art preconditioning strategies, including DCDM, FluidNet, algebraic multigrid and incomplete Cholesky. We perform our comparison across a number of representative free-surface liquid and fluid flow problems. To promote reproducibility, we have released our full code and a link to our pretrained model at [https://anonymous.4open.science/r/MLPCG-2102](https://anonymous.4open.science/r/MLPCG-2102). ## 2 Related Work Many recent approaches leverage machine learning techniques to accelerate numerical linear algebra computations. Ackmann et al. (2020) use supervised learning to compute preconditioners from fully-connected feed-forward networks in semi-implicit time stepping for weather and climate models. Sappl et al. (2019) use convolutional neural networks (CNNs) to learn banded approximate inverses for discrete Poisson equations arising in incompressible flows discretized over voxelized spatial domains. However, their loss function is the condition number of the preconditioned operator, which is prohibitively costly to evaluate at high resolution. Ozbay et al. (2021) also use CNNs to approximate solutions to Poisson problems arising in incompressible flow discretized over voxelized domains, however they do not learn a preconditioner and their approach only supports two-dimensional square domains. Our approach is most similar to those of Tompson et al. (2017) and Kaneda et al. (2023) who also consider discrete Poisson equations over voxelized fluid domains, however our lighter-weight network outperforms them and generalizes to a wider class of boundary conditions. Li et al. (2023) build on the approach of Sappl et al. (2019), but use a more practical loss function based on the supervised difference between the inverse of their preconditioner times a vector and its image under the matrix under consideration. 
Their preconditioner is the product of easily invertible, sparse lower triangular matrices. Notably, their approach works on discretizations over unstructured meshes. Gotz & Anzt (2018) learn Block-Jacobi preconditioners using deep CNNs. The choice of optimal blocking is unclear for unstructured discretizations, and they use machine learning techniques to improve upon the selection. Various works use hybrid deep learning/multigrid techniques. For example, the UNet (Ronneberger et al., 2015) and MSNet architectures (Mathieu et al., 2016) are similar to a multigrid V-cycle in terms of data flow, as noted by Cheng et al. (2021) and Azulay & Treister (2023). Cheng et al. (2021) use the multi-scale network architecture MSNet to approximate the solution of Poisson equations arising in plasma flow problems. However, they only consider flows over a square domain in 2D. Azulay & Treister (2023) note the similarity between the multi-scale UNet architecture and a multigrid V-cycle. They use this structure to learn preconditioners for the solution of heterogeneous Helmholtz equations. Eliasof et al. (2023) also use a multigrid-like architecture for a general class of problems. Huang et al. (2023) use deep learning to generate multigrid smoothers at each grid resolution that effectively smooth high frequencies: CNNs generate the smoothing stencils from matrix entries at each level in the multigrid hierarchy. This is similar to our boundary-condition-dependent stencils, however we note that our network is lighter-weight and allowed to vary at a larger scale during learning. Furthermore, optimal stencils are known for the problems considered in this work, and we provide evidence that our solver outperforms them. ## 3 Motivation: Incompressible Fluids With Mixed B.C.s While our solver architecture can be applied to any Poisson equation discretized on a structured grid, our original motivation was to accelerate a popular method for incompressible inviscid fluid simulation based on the splitting scheme introduced by Chorin (1967). The fluid's velocity \(\mathbf{u}(\mathbf{x},t)\) is governed by the incompressible Euler equations: \[\rho\left(\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla) \mathbf{u}\right)+\nabla p=\mathbf{f}^{\text{ext}}\quad\text{s.t.}\quad\nabla \cdot\mathbf{u}=0\quad\text{in }\Omega, \tag{1}\] where \(\Omega\) is the domain occupied by fluid, pressure \(p\) is the Lagrange multiplier for the incompressibility constraint \(\nabla\cdot\mathbf{u}=0\), \(\rho\) is the mass density, and \(\mathbf{f}^{\text{ext}}\) accounts for external forces like gravity. These equations are augmented with initial conditions \(\mathbf{u}(\mathbf{x},0)=\mathbf{u}^{0}(\mathbf{x})\) and \(\rho(\mathbf{x},0)=\rho^{0}\) as well as the boundary conditions discussed in Section 3.1. Incompressibility implies that the initial homogeneous mass density is conserved throughout the simulation (\(\rho\equiv\rho^{0}\)). Chorin's scheme employs finite differences in time and splits the integration from time \(t^{n}\) to \(t^{n+1}=t^{n}+\Delta t\) into two steps. First, a provisional velocity field \(\mathbf{u}^{*}\) is obtained by an _advection step_ that neglects the pressure and incompressibility constraint: \[\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}+(\mathbf{u}^{n}\cdot\nabla)\mathbf{u}^{n}=\frac{1}{\rho^{0}}\mathbf{f}^{\text{ext}}. \tag{2}\] 
Second, a _projection step_ obtains \(\mathbf{u}^{n+1}\) by eliminating divergence from \(\mathbf{u}^{*}\): \[-\nabla\cdot\frac{1}{\rho^{0}}\nabla p^{n+1} =-\frac{1}{\Delta t}\nabla\cdot\mathbf{u}^{*}, \tag{3}\] \[\frac{\mathbf{u}^{n+1}-\mathbf{u}^{*}}{\Delta t} =-\frac{1}{\rho^{0}}\nabla p^{n+1}. \tag{4}\] Equations 2-4 hold inside \(\Omega\), and we have deferred discussion of boundary conditions to Section 3.1. The bottleneck of this full process is (3), which is a Poisson equation since \(\rho^{0}\) is spatially constant. ### Boundary Conditions Our primary contribution is handling both Neumann and Dirichlet boundary conditions for the Poisson equation. We assume the computational domain \(\mathcal{D}\) is decomposed into \(\mathcal{D}=\Omega\cup\Omega_{a}\cup\Omega_{s}\), as sketched in the inset, where \(\Omega_{a}\) denotes free space and \(\Omega_{s}\) the region filled with solid. This decomposition induces a partition of the fluid boundary \(\partial\Omega=\Gamma_{n}\cup\Gamma_{d}\). Boundary \(\Gamma_{n}\) represents the fluid-solid interface as well as the intersection \(\partial\Omega\cap\partial\mathcal{D}\) (_i.e._, the region outside \(\mathcal{D}\) is treated as solid); on it a free-slip boundary condition is imposed: \(\mathbf{u}(\mathbf{x},t)\cdot\hat{\mathbf{n}}(\mathbf{x})=u_{n}^{\Gamma}(\mathbf{x},t)\), where \(\hat{\mathbf{n}}\) denotes the outward-pointing unit surface normal. This condition on \(\mathbf{u}\) translates via (4) into a Neumann condition on (3): dotting (4) with \(\hat{\mathbf{n}}\) and imposing \(\mathbf{u}^{n+1}\cdot\hat{\mathbf{n}}=u_{n}^{\Gamma}\) gives \[\hat{\mathbf{n}}\cdot\nabla p^{n+1}=\frac{\rho^{0}}{\Delta t}(\hat{\mathbf{n}}\cdot\mathbf{u}^{*}-u_{n}^{\Gamma})\quad\text{on }\Gamma_{n}. \tag{5}\] Free-surface boundary \(\Gamma_{d}\) represents the interface between the fluid and free space. Ambient pressure \(p_{a}\) then imposes on (3) a Dirichlet condition \(p^{n+1}=p_{a}\) on \(\Gamma_{d}\). In our examples, we set \(p_{a}=0\). The Dirichlet conditions turn out to make solving (3) fundamentally more difficult: while the DCDM paper Kaneda et al. (2023) discovered that a preconditioner blind to the domain geometry and trained solely on an empty box is highly effective for simulations featuring pure Neumann conditions, the same is not true for Dirichlet (see Figure 5). ### Spatial Discretization We discretize the full domain \(\mathcal{D}\) using a regular marker-and-cell (MAC) staggered grid with \(n_{c}\) cubic elements (Harlow, 1964). The disjoint subdomains \(\Omega\), \(\Omega_{a}\), and \(\Omega_{s}\) are each represented by a per-cell rasterized indicator field; these are collected into a 3-channel image, stored as a tensor \(\mathcal{I}\). In the case of a 2D square with \(n_{c}=N^{2}\), this tensor is of shape \((3,N,N)\), and summing along the first index yields a single-channel image filled with ones. Velocities and forces are represented at the _corners_ of this grid, and for smoke simulations the advection step (2) is implemented using an explicit semi-Lagrangian method (Stam, 1999; Robert, 1981). For free-surface simulations, advection is performed by interpolating fluid velocities from the grid onto particles responsible for tracking the fluid state, advecting those particles, and then transferring their velocities back to the grid. In our examples, we use a PIC/FLIP blend transfer scheme with a 0.99 ratio (Zhu & Bridson, 2005). 
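As a concrete illustration of the indicator tensor \(\mathcal{I}\) described above, a minimal 2D sketch (the mask names are ours) could be:

```python
import torch

def build_indicator(fluid: torch.Tensor, air: torch.Tensor,
                    solid: torch.Tensor) -> torch.Tensor:
    """Stack per-cell boolean masks for Omega, Omega_a, Omega_s.

    fluid, air, solid: (N, N) boolean tensors that partition the
    computational domain D, so the channels sum to one everywhere.
    """
    indicator = torch.stack([fluid, air, solid]).float()  # shape (3, N, N)
    assert torch.all(indicator.sum(dim=0) == 1.0), "masks must partition D"
    return indicator
```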
Pressure values are stored at element _centers_, and the Laplace operator in (3) is discretized into a sparse symmetric matrix \(A^{\mathcal{I}}\in\mathbb{R}^{n_{c}\times n_{c}}\) using the standard second-order accurate finite difference stencil (with 5 points in 2D and 7 in 3D) but with modifications to account for Dirichlet and Neumann boundary conditions: stencil points falling outside \(\Omega\) are dropped, and the central value (_i.e._, the diagonal matrix entry) is determined as the number of neighboring cells belonging to either \(\Omega\) or \(\Omega_{a}\). Examples of these stencils are visualized in 2D in the inset. Rows and columns corresponding to cells outside \(\Omega\) are left empty, meaning \(A^{\mathcal{I}}\) typically has a high-dimensional nullspace. These empty rows and columns are removed before solving, obtaining a smaller positive definite matrix \(\tilde{A}^{\mathcal{I}}\in\mathbb{R}^{n_{f}\times n_{f}}\), where \(n_{f}\) is the number of fluid cells. The right-hand side of (3) is discretized using the standard MAC divergence finite difference stencil into a vector \(\mathbf{b}\in\mathbb{R}^{n_{c}}\), which also receives contributions from the Neumann boundary. Entries of this vector corresponding to cells outside \(\Omega\) are removed to form right-hand side vector \(\tilde{\mathbf{b}}\in\mathbb{R}^{n_{f}}\) of the reduced linear system representing the discrete Poisson equation: \[\tilde{A}^{\mathcal{I}}\tilde{\mathbf{x}}=\tilde{\mathbf{b}}, \tag{6}\] where \(\tilde{\mathbf{x}}\in\mathbb{R}^{n_{f}}\) collects the fluid cells' unknown pressure values (a discretization of \(p^{n+1}\)). The constantly changing domains and boundary conditions of a typical fluid simulation mean traditional preconditioners for (6) like multigrid or incomplete Cholesky, as well as direct sparse Cholesky factorizations, need to be _rebuilt at every frame_. This prevents their high fixed costs from being amortized across frames and means they struggle to outperform a highly tuned GPU implementation of unpreconditioned CG. This motivates our neural-preconditioned solver which, after training, instantly adapts to arbitrary subdomain shapes encoded in \(\mathcal{I}\). ## 4 Neural-preconditioned Steepest Descent with Orthogonalization Our neural-preconditioned solver combines a carefully chosen iterative method (Section 4.1) with a preconditioner based on a novel neural network architecture (Section 4.2.1) inspired by multigrid. ### Algorithm For symmetric positive definite matrices \(A\) (like the discrete Laplacian \(\tilde{A}^{\mathcal{I}}\) from (6)), the preconditioned conjugate gradient (PCG) algorithm (Shewchuk, 1994) is by far the most efficient iterative method for solving linear systems \(A\mathbf{x}=\mathbf{b}\) when an effective preconditioner is available. Unfortunately, its convergence rate is known to degrade when the preconditioner itself fails to be symmetric, as is the case for our neural preconditioner. Bouwmeester et al. (2015) have shown that good convergence can be recovered for nonsymmetric multigrid preconditioners using the "flexible PCG" variant at the expense of an additional dot product. However, this variant turns out to perform sub-optimally with our neural preconditioner, as shown in Table 1. Instead, we adopt the preconditioned steepest descent with orthogonalization (PSDO) method proposed in Kaneda et al. (2023), which was shown to perform well even for their nonlinear preconditioning operator. 
PSDO directly uses the preconditioned residual as the starting point for generating search directions and, consequently, cannot enjoy many of the simplifications baked into the traditional algorithm. Most seriously, \(A\)-orthogonalizing against only the previous search direction no longer suffices to achieve \(A\)-orthogonality to all past steps. Therefore, iteration \(k\) of PSDO obtains its step direction \(\mathbf{d}_{k}\) by explicitly \(A\)-orthogonalizing the preconditioned residual against the last \(n_{\text{ortho}}\) directions (where \(n_{\text{ortho}}\) is a tunable parameter) and determines step length \(\alpha_{k}\) with an exact line search. PSDO reduces to standard preconditioned steepest descent (PSD) when \(n_{\text{ortho}}=0\), and it is mathematically equivalent to unpreconditioned CG when \(n_{\text{ortho}}\geq 1\) and the identity operator is used as the preconditioner. In the case of a symmetric preconditioner \(P=LL^{\top}\), PSDO differs from PCG by taking steps that are \(A\)-orthogonal rather than \(L^{-1}AL^{-\top}\)-orthogonal. When combined with our neural preconditioner, we call this algorithm NPSDO, presented formally in Algorithm 1 in the appendix. We empirically determined \(n_{\text{ortho}}=2\) to perform well, and we use this value in all reported experiments.

### Neural Preconditioner

The ideal preconditioner for all iterative methods described in Section 4.1 is the exact inverse \(A^{-1}\); with it, each method would converge to the exact solution in a single step. Of course, the motivation for using an iterative solver is that inverting or factorizing \(A\) is too costly (Figure 6), and instead we must seek an inexpensive approximation of \(A^{-1}\). Examples are incomplete Cholesky, which does its best to factorize \(A\) with a limited computational budget, and multigrid, which applies one or more iterations of a multigrid solver. Our method approximates the map \(\mathbf{r}\mapsto A^{-1}\mathbf{r}\) by our neural network \(\boldsymbol{\mathcal{P}}^{\text{net}}(\boldsymbol{\mathcal{I}},\mathbf{r})\). Departing from recent works like Kaneda et al. (2023), we use a novel architecture that both substantially boosts performance on pure-Neumann problems and generalizes to the broader class of Poisson equations with mixed boundary conditions by considering geometric information from \(\boldsymbol{\mathcal{I}}\). The network performs well on 2D or 3D Poisson equations of varying sizes, but to simplify the exposition, our figures and notation describe the method on small square grids of size \(N\times N\). We note that Algorithm 1 runs on linear system \(\tilde{A}^{\boldsymbol{\mathcal{I}}}\tilde{\mathbf{x}}=\tilde{\mathbf{b}}\), featuring vectors of smaller size \(n_{f}\), but the network always operates on input vectors of full size \(n_{c}\), reshaped into \((N,N)\) tensors. Therefore, to evaluate \(\tilde{\mathbf{d}}=\boldsymbol{\mathcal{P}}^{\text{net}}(\boldsymbol{\mathcal{I}},\tilde{\mathbf{r}})\), \(\tilde{\mathbf{r}}\) is first padded by inserting zeros into locations corresponding to cells in \(\Omega_{a}\) and \(\Omega_{s}\), and then those locations of the output are removed to obtain \(\tilde{\mathbf{d}}\in\mathbb{R}^{n_{f}}\).

#### 4.2.1 Architecture

Our neural network architecture (Figure 1) is inspired by geometric multigrid, aiming to propagate information across the computational grid faster than the one-cell-per-iteration of unpreconditioned CG. The architecture is constructed recursively, consisting of levels \(1\leq\ell\leq\mathcal{L}\).
A given level \(\ell\) operates on an input image \(\boldsymbol{\mathcal{I}}^{(\ell)}\) and input vector \(\mathbf{r}^{(\ell)}\). It performs a special image-dependent convolution operation on \(\mathbf{r}^{(\ell)}\) and then downsamples the resulting vector \(\mathbf{y}^{(\ell)}\), as well as \(\boldsymbol{\mathcal{I}}^{(\ell)}\), to the next-coarser level \(\ell+1\) using average pooling (analogous to restriction in multigrid). The output of the level \(\ell+1\) subnetwork is then upsampled (analogous to prolongation), run through another convolution stage, and finally linearly combined with \(\mathbf{y}^{(\ell)}\) to obtain the output. At the finest level, \(\boldsymbol{\mathcal{I}}^{(1)}=\boldsymbol{\mathcal{I}}\) and \(\mathbf{r}^{(1)}=\mathbf{r}\), while at the coarsest level only a single convolution operation is performed.

Figure 1: Our network architecture sketched for a 2D preconditioner with \(\mathcal{L}=3\) levels.

One crucial difference between our network and existing neural solvers like FluidNet (Tompson et al., 2017) is how geometric information from \(\boldsymbol{\mathcal{I}}\) is incorporated. Past architectures treat this geometric data on the same footing as input tensor \(\mathbf{r}\), _e.g._ feeding both into standard multi-channel convolution blocks. However, we note that \(\boldsymbol{\mathcal{I}}\) determines the entries of \(A^{\boldsymbol{\mathcal{I}}}\), and so if the convolutions are to act analogously to the smoothing operations of multigrid, really this geometry information should inform the weights of convolutions applied to \(\mathbf{r}\). This motivates our use of custom convolutional blocks whose _spatially varying kernels_ depend on local information from \(\boldsymbol{\mathcal{I}}\). Each custom convolutional block (at the right corner in Figure 1) at level \(\ell\) learns an affine map from a \(3\times 3\) sliding window in \(\boldsymbol{\mathcal{I}}^{(\ell)}\) to a \(3\times 3\) kernel \(\boldsymbol{\mathcal{K}}^{(i,j)}\). This affine map is parametrized by a weights tensor \(\boldsymbol{\mathcal{W}}\) of shape \((3^{2},3,3,3)\) and a bias vector \(\boldsymbol{\mathcal{B}}\in\mathbb{R}^{3^{2}}\). Entry \(y_{i,j}\) of the block's output is computed as: \[y_{i,j}=\sum_{a,b=-1}^{1}\mathcal{K}_{a,b}^{(i,j)}x_{i+a,j+b},\qquad\mathcal{K}_{a,b}^{(i,j)}:=\sum_{c=0}^{2}\sum_{l,m=-1}^{1}\mathcal{W}_{3a+b,c,l,m}\mathcal{I}_{c,i+l,j+m}^{(\ell)}+\mathcal{B}_{3a+b}.\] Out-of-bounds accesses in these formulas are avoided by padding \(\boldsymbol{\mathcal{I}}^{(\ell)}\) with solid pixels (_i.e._, the values assigned to cells in \(\Omega_{s}\)) and \(\mathbf{x}\) with zeros. In multigrid, the solutions obtained on the coarser grids of the hierarchy are _corrections_ that are added to the finer grids' solutions; likewise, our network includes connections labeled "linear combination" in Figure 1 that mix in upsampled data from the lower level. Our network determines each of the two coefficients in this combination by learning affine functions of the input image defined by (i) convolving \(\boldsymbol{\mathcal{I}}^{(\ell)}\) with a (spatially constant) kernel \(\overline{\boldsymbol{\mathcal{K}}}\) of shape \((3,3,3)\); (ii) averaging to produce a scalar; and (iii) adding a scalar bias \(\overline{\mathcal{B}}\).
For efficiency, these evaluation steps are fused into a custom linear block (indicated by blue arrows in Figure 1) that implements the formula: \[z=\overline{\mathcal{B}}+\frac{1}{3^{2}n_{c}}\sum_{i,j=0}^{N-1}\sum_{c=0}^{2}\sum_{l,m=-1}^{1}\overline{\mathcal{K}}_{c,l,m}\mathcal{I}_{c,i+l,j+m}^{(\ell)}.\] Our custom network architecture has numerous advantages. Its output is a linear function of the input vector (unlike the nonlinear map learned by Kaneda et al. (2023)), making it easier to interpret as a preconditioner. The architecture is also very lightweight: a model with \(\mathcal{L}=4\) coarsening levels has only \(\sim\!25\)k parameters. Its simplicity accelerates network evaluations at solve time, critical to make NPSDO competitive with the state-of-the-art solvers used in practice. We note that our solver is fully matrix free, with \(\boldsymbol{\mathcal{P}}^{\text{net}}\) relying only on the image \(\boldsymbol{\mathcal{I}}\) of the simulation scene to infer information about \(A^{\boldsymbol{\mathcal{I}}}\). Furthermore, since all network operations are formulated in terms of local windows into \(\boldsymbol{\mathcal{I}}\) and \(\mathbf{r}\), it can train and run on _problems of any size divisible by \(2^{\mathcal{L}}\)_. The 3D version of our architecture is a straightforward extension of the 2D formulas above, simply using larger tensors with additional indices to account for the extra dimension, as well as extending the sums to run over these indices.

#### 4.2.2 Training

We train our network \(\boldsymbol{\mathcal{P}}^{\text{net}}\) to approximate \(A^{\boldsymbol{\mathcal{I}}}\backslash\mathbf{b}\) when presented with image \(\boldsymbol{\mathcal{I}}\) and input vector \(\mathbf{b}\). We calculate the loss for an example \((\boldsymbol{\mathcal{I}},A^{\boldsymbol{\mathcal{I}}},\mathbf{b})\) from our training dataset as the residual norm: \[Loss=\left\|\mathbf{b}-A^{\boldsymbol{\mathcal{I}}}\boldsymbol{\mathcal{P}}^{\text{net}}(\boldsymbol{\mathcal{I}},\mathbf{b})\right\|_{2}.\] We found the more involved loss function used in Kaneda et al. (2023) not to benefit our network. Our training data set consists of 183 matrices collected from 10 different simulation scenes, some of domain shape (128, 128, 128) and others (256, 128, 128). For each matrix, we generate 800 right-hand side vectors using a similar approach to Kaneda et al. (2023), but with far fewer Rayleigh-Ritz vectors. We first compute 1600 Ritz vectors using Lanczos iterations (Lanczos, 1950) and then generate from them 800 random linear combinations. These linear combinations are finally normalized and added to the training set. To accelerate data generation, we create the right-hand sides for different matrices in parallel; it takes between 0.5 and 3 hours to generate the data for each scene. Since Ritz vector calculation is expensive, we experimented with other approaches, like picking random vectors or constructing analytical eigenmodes for the Laplacian on \(\mathcal{D}\) and masking out entries outside \(\Omega\). Unfortunately these cheaper generation techniques led to degraded performance. In each epoch of training, we loop over the matrices of our dataset in shuffled order. For each matrix, we process all of its 800 right-hand sides in batches of 128, repeating five times. The full training process takes 5-7 days on an AMD EPYC 9554P 64-Core Processor with an NVIDIA RTX 6000 GPU. The training and validation losses are computed every five epochs, and we found it beneficial to terminate after 50 epochs.
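Before turning to the implementation, we give a plain-NumPy reference of the spatially varying convolution from Section 4.2.1. This is a deliberately simple, slow sketch under our own naming conventions and an assumed one-hot channel layout; the paper's actual block is the fused CUDA kernel described next.

```python
import numpy as np

def spatially_varying_conv(I_lvl, x, W, B, solid_pixel):
    """Apply the image-conditioned 3x3 convolution: the kernel at (i, j)
    is an affine function (weights W, bias B) of the 3x3 window of the
    3-channel level image I_lvl centered at (i, j).
    Shapes: I_lvl (3, N, N), x (N, N), W (9, 3, 3, 3), B (9,)."""
    _, N, _ = I_lvl.shape
    # pad the image with "solid" pixel values and the vector with zeros
    Ip = np.broadcast_to(solid_pixel[:, None, None], (3, N + 2, N + 2)).copy()
    Ip[:, 1:-1, 1:-1] = I_lvl
    xp = np.pad(x, 1)
    y = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            window = Ip[:, i:i + 3, j:j + 3]        # (3, 3, 3) image patch
            K = (np.tensordot(W, window, axes=3) + B).reshape(3, 3)
            y[i, j] = np.sum(K * xp[i:i + 3, j:j + 3])
    return y

# e.g., assuming channel order (fluid, air, solid):
# y = spatially_varying_conv(I, r, W, B, solid_pixel=np.array([0., 0., 1.]))
```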
#### 4.2.3 Implementation

We built our network using PyTorch (Paszke et al., 2019), but implemented our custom convolutional and linear blocks as custom CUDA extensions. The neural network was trained using single precision floating point.

## 5 Results and Analysis

We evaluate the effectiveness and efficiency of our neural preconditioned solver by comparing it to high-performance state-of-the-art implementations of several baseline methods: unpreconditioned CG provided by the CuPy library (Okuta et al., 2017), as well as CG preconditioned by the algebraic multigrid (AMG) and incomplete Cholesky (IC) implementations from the AMGCL library (Demidov, 2020). All of these baseline methods are accelerated by CUDA backends running on the GPU, with the underlying IC implementation coming from NVIDIA's cuSparse library. Where appropriate, we also compare against FluidNet (Tompson et al., 2017) and DCDM (Kaneda et al., 2023). Finally, we included characteristic performance statistics of a popular sparse Cholesky solver CHOLMOD (Chen et al., 2008). In all cases, our method outperforms these baselines, often dramatically. We executed all benchmarks on a workstation featuring an AMD Ryzen 9 5950X 16-Core Processor and an NVIDIA GeForce RTX 3080 GPU. We used as our convergence criterion for all methods a reduction of the residual norm by a factor of \(10^{6}\), which is sufficiently accurate to eliminate visible simulation artifacts. We evaluate our neural preconditioner in single precision floating point but implement the rest of the NPSDO algorithm in double precision for numerical stability. We benchmarked on twelve simulation scenes with various shapes--\((128,128,128)\), \((256,128,128)\), and \((256,256,256)\)--each providing 200 linear systems to solve. For each solve, we recorded the number of iterations and runtime taken by each solver. These performance statistics are summarized visually in Figures 3-6 and in tabular form in Appendix A.2. Figure 3(a) summarizes timings from all solves in our benchmark suite: for each system, we divide the unpreconditioned CG solve time by the other methods' solve times to calculate their speedups and plot a histogram. We note that our method significantly outperforms the others on a majority of solves: ours is fastest on 94% of the linear systems, which account for 98% of our total solve time. Our improvements are more substantial on larger problems (Figures 3(b) and (c)) for two reasons. First, condition numbers increase with size, impeding solvers without effective preconditioners; this is seen clearly by comparing results from two different resolutions (Figures 3(d) and (e)). Second, the small matrices \(\tilde{A}^{\boldsymbol{\mathcal{I}}}\) correspond to simulation grids with mostly non-fluid cells. While CG, AMGCL and IC timings shrink significantly as fluid cells are removed, our network's evaluation cost does not: it always processes all of \(\mathcal{D}\) regardless of occupancy. This scaling behavior is visible in Figure 6. Our speedups are also greater for examples with \(\Gamma_{d}=\emptyset\). DCDM is applicable for these, and so we included it in Figure 3(d) (but not in Figure 3(e), where the network exceeds GPU RAM). DCDM's failure to outperform CG and IC in these results, contrary to Kaneda et al. (2023), can be attributed to the higher-performance CUDA-accelerated implementations of those baselines used in this work.

Figure 2: Renderings of some benchmark scenes.
With Dirichlet conditions (Figure 3(f)), our preconditioner is less effective, and yet we still outperform the rest on 91% of the frames, which account for 96% of our total solve time. Statistics are not reported in this setting for DCDM and FluidNet, which struggle to reduce the residual (Figure 5).

Figure 3: Histograms of solution speedup vs. a baseline of unpreconditioned CG (a) for all solves; and (b-f) for certain subsets of the systems to help tease apart the different modes of the distribution.

Figure 4: Comparisons among AMG, IC, CG and NPSDO (Ours) on a single frame at \(256^{3}\) with Neumann only BC (left two) and mixed BC (right two).

Figure 5: Comparisons among AMG, IC, CG, DCDM, FluidNet (FN) and NPSDO (Ours) on a single frame at \(128^{3}\) with Neumann only BC (left two) and mixed BC (right two).

Further insights can be obtained by consulting Figures 4 and 5, which show the convergence behavior of each iterative solver on characteristic example problems. AMG is clearly the most effective preconditioner, but this comes at the high cost of rebuilding the multigrid hierarchy before each solve: its iterations cannot even start until long after our solver has already converged. Our preconditioner is the second most effective and, due to its lightweight architecture, achieves the fastest solves. DCDM is also quite effective at preconditioning for Neumann-only problems, but its iterations are slowed by costly network evaluations. IC's setup time is shorter than AMG's but still substantial, and it is much less effective as a preconditioner. We note that the smoke example (Figure 5) also includes a comparison to FluidNet _applied as a preconditioner_ for PSDO. In the original paper, FluidNet was presented as a standalone solver, to be run just once per simulation frame. However, in this form it cannot produce highly accurate solutions. Incorporating it as a preconditioner as we do here in theory allows the system to be solved to controlled accuracy, but this solver ended up stalling before reaching a \(10^{6}\)-fold residual reduction in our experiments; for this reason it was omitted from Figure 3. On average, our solver spends 81% of its time evaluating \(\mathbf{\mathcal{P}}^{\text{net}}\), 4% of its time in orthogonalization, and the remaining 15% in other CG operations. In contrast, AMG takes a full 90% of its time in its setup stage. IC's quicker construction and slower convergence mean it takes only 23% in setup. Our architecture also confers GPU memory usage benefits: for \(128^{3}\) grids, our solver uses 1.5 GiB of RAM, while FluidNet and DCDM consume 5 GiB and 8.3 GiB, respectively (Appendix A.2).

## 6 Conclusions

The neural-preconditioned solver we propose not only addresses more general boundary conditions than past machine learning approaches for the Poisson equation (Tompson et al., 2017; Kaneda et al., 2023) but also dramatically outperforms these solvers. It even surpasses state-of-the-art high-performance implementations of standard methods like algebraic multigrid and incomplete Cholesky. It achieves this through a combination of its strong efficacy as a preconditioner and its fast evaluations enabled by our novel lightweight architecture. Nevertheless, we see several opportunities to improve and extend our solver in future work. First, although we implemented our spatially-varying convolution block in CUDA, it remains the computational bottleneck of the network evaluation and is not yet fully optimized.
We are also excited to try porting our architecture to special-purpose acceleration hardware like Apple's Neural Engine; not only could this offer further speedups, but it would also free up GPU cycles for rendering the results in real-time applications like visual effects and games. Second, we would like to explore ways to explicitly enforce symmetry and even positive definiteness of our preconditioning operator so that the less expensive PCG algorithm could be used rather than PSDO. Third, for applications where fluid occupies only a small portion of the computational domain, we would like to develop techniques to exploit sparsity for better scaling (Figure 6). Finally, we look forward to extending our ideas to achieve competitive performance for problems posed on unstructured grids as well as equations with non-constant coefficients, vector-valued unknowns (_e.g._, elasticity), and nonlinearities.

Figure 6: Solver scaling for mixed BC system matrices originating from a fixed-resolution domain \((n_{c}=256^{3})\); matrix row/col size \(n_{f}\) is determined by the proportion of cells occupied by fluid. The vast majority of total solve time is contributed by the high-occupancy systems clustered to the right, where our method outperforms the rest.
2309.12410
Dissecting the morphology of star forming complex S193
We have studied a star-forming complex S193 using near-infrared (NIR) observations and other archival data covering optical to radio wavelengths. We identified stellar clusters in the complex using the NIR photometric data and estimated the membership and distance of the clusters. Using the mid-infrared (MIR) and far-infrared (FIR) images, the distribution of the dust emission around H\,{\sc ii} regions is traced in the complex. The $Herschel$ column density and temperature maps analysis reveal 16 cold dust clumps in the complex. The H$\alpha$ image and 1.4 GHz radio continuum emission map are employed to study the ionised gas distribution and infer the spectral type and the dynamical age of each H\,{\sc ii} region/ionised clump in the complex. The $^{12}$CO(J =3$-$2) and $^{13}$CO(J =1$-$0) molecular line data hint at the presence of two velocity components around [-43,-46] and [-47,-50] km/s, and their spatial distribution reveals two overlapping zones toward the complex. By investigating the immediate surroundings of the central cluster [BDS2003]57 and the pressure calculations, we suggest that the feedback from the massive stars seems responsible for the observed velocity gradient and might have triggered the formation of the central cluster [BDS2003]57.
Rakesh Pandey, Saurabh Sharma, Lokesh Dewangan, D. K. Ojha, Neelam Panwar, Arpan Ghosh, Tirthendu Sinha, Aayushi Verma, Harmeen Kaur
2023-09-21T18:19:09Z
http://arxiv.org/abs/2309.12410v1
# Dissecting the morphology of star forming complex S193

###### Abstract

We have studied a star-forming complex S193 using near-infrared (NIR) observations and other archival data covering optical to radio wavelengths. We identified stellar clusters in the complex using the NIR photometric data and estimated the membership and distance of the clusters. Using the mid-infrared (MIR) and far-infrared (FIR) images, the distribution of the dust emission around H ii regions is traced in the complex. The \(Herschel\) column density and temperature maps analysis reveal 16 cold dust clumps in the complex. The H\(\alpha\) image and 1.4 GHz radio continuum emission map are employed to study the ionised gas distribution and infer the spectral type and the dynamical age of each H ii region/ionised clump in the complex. The \({}^{12}\)CO(J =3\(-\)2) and \({}^{13}\)CO(J =1\(-\)0) molecular line data hint at the presence of two velocity components around [-43,-46] and [-47,-50] km/s, and their spatial distribution reveals two overlapping zones toward the complex. By investigating the immediate surroundings of the central cluster [BDS2003]57 and the pressure calculations, we suggest that the feedback from the massive stars seems responsible for the observed velocity gradient and might have triggered the formation of the central cluster [BDS2003]57.

keywords: stars: luminosity function, mass function - stars: formation - dust, extinction - H ii regions

## 1 Introduction

Understanding the formation process of massive stars has been one of the leading research goals of many current and past studies (Andre et al., 2016; Torii et al., 2017; Motte et al., 2018; Fukui et al., 2021), as they have a profound effect on Galactic evolution. The formation processes of massive stars are also less well understood, as they occur on short time scales and are less frequent than those of lower-mass stars. Since massive stars start affecting their natal environment soon after their formation, these processes are even more complex (Azimlu & Fich, 2011). Stellar collision (Bonnell et al., 1998), competitive accretion (Bonnell et al., 2001) and the monolithic collapse of a dense cloud are considered in the literature as plausible scenarios to explain the formation process of massive stars. In recent years, cloud-cloud collision (CCC) has emerged as a very promising mechanism to explain massive star formation, as demonstrated by numerous theoretical and observational works (Furukawa et al., 2009; Dewangan et al., 2017; Fukui et al., 2018, 2021; Liang et al., 2021). The CCC model was first conceptualized by Habe & Ohta (1992), who realized the possibility of massive star formation in the compressed layer between one small and another large cloud formed via supersonic collision. Recently, Fukui et al. (2021), with a detailed magneto-hydrodynamic simulation, showed that dense gas cores form in the compressed layer between two clouds, which preferentially triggers the formation of O and early B-type stars. The powerful winds, jets, and outflows from the massive stars change the physical conditions such as temperature, density, and turbulence in the surrounding molecular cloud (Gritschneder et al., 2009). For example, H ii regions are formed through the expansion of ionised gas in the surrounding cloud due to the highly energetic UV radiation and winds from these massive stars.
As most of the star formation occurs in groups or clusters, massive stars are often associated with young star clusters and H ii regions in a star-forming region. The feedback from the massive stars thus plays a vital role in governing the star formation process around them. It can push forward and increase the star formation rate (positive feedback), or slow down or terminate star formation (negative feedback). The outcome depends not only on the process but on the surrounding environment itself (Shima et al., 2017). Lying between the star-forming regions W4 and W5, the S193 complex (\(\alpha_{2000}=\)02\({}^{h}\)47\({}^{m}\)25\({}^{s}\), \(\delta_{2000}=61^{\circ}56^{\prime}57^{\prime\prime}\)) consists of three H ii regions Sh 2-192 (hereafter, S192), Sh 2-193 (hereafter, S193) and Sh 2-194 (hereafter, S194) (Qin et al., 2008; Blitz et al., 1982). This region also hosts three previously identified massive stars, i.e., S192-1 (B2.5V), S193-1 (B2.5V) & S193-3 (B1.5V) (Russeil et al., 2007), along with three IRAS sources, IRAS 02435+6144 (\(\alpha_{2000}=\)02\({}^{h}\)47\({}^{m}\)25\({}^{s}\), \(\delta_{2000}=61^{\circ}56^{\prime}57^{\prime\prime}\)), IRAS 02437+6145 (\(\alpha_{2000}=\)02\({}^{h}\)47\({}^{m}\)40.4\({}^{s}\), \(\delta_{2000}=61^{\circ}45^{\prime}55^{\prime\prime}\)), IRAS 02439+6143 (\(\alpha_{2000}=\)02\({}^{h}\)47\({}^{m}\)53.6\({}^{s}\), \(\delta_{2000}=61^{\circ}56^{\prime}01^{\prime\prime}\)) and a Be star D750 (\(\alpha_{2000}=\)02\({}^{h}\)47\({}^{m}\)35\({}^{s}\), \(\delta_{2000}=61^{\circ}55^{\prime}30^{\prime\prime}\)) (Gkouvelis et al., 2016). Two open clusters, i.e., [BDS2003]57 (Russeil et al., 2007) and Teutsch 162 (hereafter, T162) (Qin et al., 2008), are also part of this region. In the S193 complex, Azimlu & Fich (2011) studied the molecular gas in a velocity range of [-44,-50] km/s (see also Qin et al. (2008)) and identified ten molecular clumps. The velocities of the ionised gas toward S192, S193, and S194 were reported to be -44.7, -49.6, and -44.7 km/s, respectively (Anderson et al., 2014). The S193 complex is thus an exciting region to investigate the star formation processes involving massive stars, young star clusters, and molecular clumps. In this paper, we have done a comprehensive multiwavelength study of the S193 complex, which is presented as follows. Section 2 describes the multiwavelength data sets used in this study and explains the data reduction procedures for our near-infrared (NIR) observations. In Section 3, we analyze the multiwavelength data sets to estimate various morphological parameters and probe the physical environment around the S193 complex. In Section 4, we discuss the star formation scenario in the S193 complex, and we summarize our study in Section 5.

## 2 Observation and data reduction

### Near-infrared (NIR) Imaging data

We observed the S193 complex in broad-band NIR (\(JHK\)) filters using the TIFR Near Infrared Spectrometer and Imager (TIRSPEC)1. The instrument is mounted on the 2 m Himalayan \(Chandra\) Telescope (HCT), Hanle, Ladakh, India. Ninan et al. (2014) have provided the details of this instrument and its detector array specifications. TIRSPEC covers \(307^{\prime\prime}\times 307^{\prime\prime}\) of the sky in the imaging mode; thus it took four pointings to cover the entire S193 complex (\(\sim 10^{\prime}\times 10^{\prime}\)). In each pointing, we took five dithered positions with seven frames in each dithered position.
The exposure time of frames in all three bands (\(JHK\)) is 20 secs. We have also observed the central embedded cluster '[BDS2003]57' (FOV \(\sim 1^{\prime}\times 1^{\prime}\)) in \(JHK\) bands using the TIFR-ARIES Near Infrared Spectrometer (TANSPEC; Sharma et al., 2022) mounted on the 3.6m Devasthal Optical Telescope (DOT), Nainital, India. We observed the source in six sets, each containing seven dithered positions with five frames in each position. The exposure time of frames in the K band is 20 secs, while for the J and H bands, it is 50 secs. The NIR data reduction, which includes cleaning the raw images and performing photometry and astrometry, is done using the procedure explained in Pandey et al. (2020). Using the transformation equations below, we transformed our instrumental \(JHK\) magnitudes into a standard Vega system.

Footnote 1: http://www.tifr.res.in/~data/tirspec/

\[(J-K)=(0.92\pm 0.12)\times(j-k)+(0.64\pm 0.06) \tag{1}\] \[(H-K)=(0.99\pm 0.03)\times(h-k)+(-0.04\pm 0.02) \tag{2}\] \[(K-k)=(-0.04\pm 0.10)\times(H-K)+(-4.85\pm 0.02) \tag{3}\]

In the above equations, \(jhk\) and \(JHK\) denote the instrumental and standard magnitudes, respectively. The coefficients in the above equations were generated using the stars common to our observations and the 2MASS catalogue for one of the pointings in the TIRSPEC observations. Similarly, the coefficients were estimated separately for each pointing and instrument. We made a combined catalogue that includes the TIRSPEC and TANSPEC observations by considering only those stars with less than 0.1 mag photometric uncertainty. The magnitudes of the stars which are saturated in our observations are taken from the 2MASS catalogue. Table 1 provides the statistics of our NIR observations.

### Archival data

The multiwavelength archival data sets used in the present study are tabulated in Table 2. The optical photometric data in the grizy bands (i.e., g, r, i, z, and y) were taken from the Pan-STARRS1 Surveys (PS1). The 5\(\sigma\) limiting magnitudes of point sources in the g, r, i, z, and y bands are 23.3, 23.2, 23.1, 22.3, and 21.4 mag, respectively (Chambers et al., 2016). We particularly used the g and y band photometric data from the survey to plot the colour-magnitude diagram (CMD, cf. Section 3.3). We accessed the point-source catalogue of the 2MASS survey in the NIR \(J\), \(H\), and \(K\) bands. The catalogue is 99% complete up to \(J\)= 15.8 mag, \(H\)=15.1 mag, and \(K\)=14.3 mag (Skrutskie et al., 2006). We used the 2MASS NIR data to generate the surface density map of the stars in the targeted region (cf. Section 3.1). The Gaia Early Data Release 3 (EDR3) (Gaia Collaboration et al., 2021) data are used in the present work for estimating the member stars of the clusters using their proper motion. Gaia EDR3 provides astrometric, photometric, and radial velocity measurements of nearly 1.8 billion sources; the data show a completeness of 99% up to G \(\sim\) 20 (for b \(\sim\) 30\({}^{\circ}\)) and G \(\sim\) 22 (for higher latitudes). We used mid-infrared (MIR) and far-infrared (FIR) images from \(Spitzer\) and \(Herschel\), respectively, to trace the heated dust emission in the S193 complex, while submillimetre (sub-mm) images from \(Herschel\) are effective in tracing the cold dust emission.
The radio continuum image from the NRAO VLA Sky Survey (NVSS), along with the H\(\alpha\) image from the INT Photometric H\(\alpha\) Survey of the Northern Galactic Plane (IPHAS), is used in this work to trace the distribution of the ionised gas (cf. Section 3.6.2). We also used the \({}^{12}\)CO(J =3\(-\)2) line data from the 15m JCMT telescope (James Clerk Maxwell Telescope) archive and the \({}^{13}\)CO(J =1\(-\)0) data from the 13.7m Purple Mountain Observatory (PMO, China) acquired as a part of the Milky Way Imaging Scroll Painting (MWISP) project2. The molecular line data are used for tracing the kinematics of the molecular gas in the targeted region (cf. Section 3.6.4).

Footnote 2: http://english.dlh.pmo.cas.cn/ic/in/

## 3 Results and Analysis

### Stellar clustering/groupings in the S193 complex

To identify clusterings/groupings in the S193 complex, we estimated the stellar surface density distribution (Gutermuth et al., 2009; Sharma et al., 2016). We used the nearest neighbour (NN) method to generate surface density maps in the region (Gutermuth et al., 2005). We took a grid size of 6'' and determined the local surface density by varying the radial distance so that it encompasses the nearest 6\({}^{th}\) NN in the 2MASS catalogue. This technique has also been used and described in our previous work (Pandey et al., 2020, 2022). We have shown the density contours in Figure 1a in yellow; the lowest contour is drawn at 1\(\sigma\) above the mean stellar density (8.7 stars/arcmin\({}^{2}\)), with a step size of 1\(\sigma\) (2.9 stars/arcmin\({}^{2}\)). We can see a fragmented overdensity of stars in the S193 region with three sub-groupings: one at the centre (CC), another in the North-East (NE) direction, and a third in the South-East direction (SEC). All three sub-groupings are connected through the lowest density contours and seem to be part of a similar population. The clusterings at the centre and in the NE direction coincide with the previously identified star clusters, [BDS2003]57 and T162, respectively. We have roughly estimated the cluster sizes by visually inspecting the surface density contours. The boxes enclosing the different cluster regions are marked in Figure 1 in red. In this way, the size of the clustering [BDS2003]57 is found to be \(\sim\)(1\(\times\)1) arcmin\({}^{2}\), while the sizes of Teutsch 162 and the SEC are found to be \(\sim\)(1.4\(\times\)1.4) arcmin\({}^{2}\) and \(\sim\)(1.5\(\times\)1.4) arcmin\({}^{2}\), respectively. From the 2MASS NIR data, we found 40 stars inside [BDS2003]57, corresponding to a mean stellar density of \(\sim\)34 stars/arcmin\({}^{2}\). Teutsch 162 contains 29 stars with a mean stellar density of \(\sim\)14.1 stars/arcmin\({}^{2}\), and the SEC contains 30 stars with a mean stellar density of 13.3 stars/arcmin\({}^{2}\). As the 2MASS data are somewhat shallow, more low-mass/embedded stars are likely to be found within the cluster regions; the above estimates can thus be taken as lower limits. Figure 1 shows the colour composite image of the three clusters (see panels b, c, and d) made using \(JHK\) band images. The central clustering [BDS2003]57 (cf. Figure 1b) is covered in our TANSPEC observations, while the other two clusterings (cf. Figure 1c, d) are covered in our TIRSPEC observations (cf. Section 2.1).
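For reference, the 6\({}^{th}\)-nearest-neighbour density estimate described at the start of this section can be sketched with a k-d tree as follows. This is a minimal illustration under our own assumptions (coordinates in degrees, a \(k/(\pi r_{k}^{2})\) normalization, hypothetical names); conventions in the literature differ by factors like \((k-1)/k\), and this is not the exact pipeline used here.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_density_map(ra, dec, grid_step=6.0 / 3600.0, k=6):
    """Stellar surface density on a regular grid: at each grid node the
    local density is k / (pi * r_k**2), where r_k is the distance to the
    k-th nearest star (k = 6, grid step 6 arcsec, as in the text)."""
    tree = cKDTree(np.column_stack([ra, dec]))
    gx = np.arange(ra.min(), ra.max(), grid_step)
    gy = np.arange(dec.min(), dec.max(), grid_step)
    X, Y = np.meshgrid(gx, gy)
    nodes = np.column_stack([X.ravel(), Y.ravel()])
    r_k = tree.query(nodes, k=k)[0][:, -1]   # distance to the 6th NN (deg)
    density = k / (np.pi * r_k**2) / 3600.0  # deg^-2 -> stars per arcmin^2
    return X, Y, density.reshape(X.shape)
```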
The identified groupings ([BDS2003]57, T162, and SEC) lie within a circle centred at \(\alpha_{2000}\): \(02^{h}47^{m}34^{s}.29\), \(\delta_{J2000}\): \(61^{\circ}57^{\prime}20.93^{\prime\prime}\) with a radius of 3' (shown with a white circle in Figure 1a).

### Membership analysis of the stars in the S193 complex

The recently released Gaia Early Data Release 3 (Gaia Collaboration et al., 2021) is becoming highly impactful in the study of star clusters by providing precise measurements of the proper motion (PM) and parallax of stars. As we have found three inter-connected stellar groupings/clusterings in the S193 complex (Section 3.1), it is now important to find whether these clusterings are connected in proper motion space and further estimate their member stars. In Figure 2, we have shown the vector point diagrams (VPDs) of the three clusterings, in which proper motions in RA and DEC are plotted as the X and Y axes, respectively. In the VPDs, we can see that the stars in all three clusterings show a distribution around \(\mu_{\alpha}\)cos(\(\delta\)) = -0.70 mas yr\({}^{-1}\) and \(\mu_{\delta}\) = 0.17 mas yr\({}^{-1}\); this suggests that the three clusters are also interconnected in proper motion space. We thus determined the membership probability of all the stars in a circular FOV of 3' (which accommodates all three clusterings, see Section 3.1) using the method described in Balaguer-Nunez et al. (1998) and Sharma et al. (2020). The procedure starts with determining the frequency distributions of cluster stars (\(\phi_{c}^{\nu}\)) and field stars (\(\phi_{f}^{\nu}\)) using equations 3 and 4 of Balaguer-Nunez et al. (1998). The PM centre and its corresponding dispersion for the member stars (\(\mu_{\alpha}\)cos(\(\delta\)) = -0.70 mas yr\({}^{-1}\), \(\mu_{\delta}\) = 0.17 mas yr\({}^{-1}\), \(\sigma_{c}\sim\)0.5 mas yr\({}^{-1}\)) as well as for the field stars (\(\mu_{xf}\) = 0.29 mas yr\({}^{-1}\), \(\mu_{yf}\) = -0.50 mas yr\({}^{-1}\), \(\sigma_{xf}\) = 4.10 mas yr\({}^{-1}\) and \(\sigma_{yf}\) = 2.85 mas yr\({}^{-1}\)) are determined using the procedure previously used and described in Pandey et al. (2020, 2022) and Sharma et al. (2020). We calculated the PM dispersion for the cluster by using the radial velocity dispersion of 1 km s\({}^{-1}\) for open clusters (Girard et al., 1989) and assuming a distance of 4.84 kpc (present estimate); the PM dispersion comes out to be 0.5 mas yr\({}^{-1}\). We used this value as the PM dispersion for the cluster stars (for both RA and DEC); the same method has been used in previous studies (Yadav et al., 2013; Sharma et al., 2020; Pandey et al., 2022). The field stars are taken from the VPD of all stars within a 3' circle enclosing all three clusterings and roughly falling outside of the PM distribution of the cluster stars (for more details, see Sharma et al. (2020)). Finally, we have determined the membership probability, i.e., the ratio of the distribution of cluster stars to all stars, using the equation below. \[P_{\mu}(i)=\frac{n_{c}\times\phi_{c}^{\nu}(i)}{n_{c}\times\phi_{c}^{\nu}(i)+n_{f}\times\phi_{f}^{\nu}(i)} \tag{4}\] where \(n_{c}\) (=0.19) and \(n_{f}\) (=0.81) correspond to the normalized numbers of stars for the cluster and field regions, respectively. In Figure 3, membership probability, errors in proper motion, and parallaxes are plotted against the G magnitude.
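A minimal sketch of equation (4), with Gaussian frequency functions for the cluster and field populations and per-star PM errors added in quadrature (following Balaguer-Nunez et al. 1998), is given below. The numerical centres and dispersions are the values quoted above (in mas/yr); all function and variable names are ours.

```python
import numpy as np

def membership_probability(pmra, pmdec, e_pmra, e_pmdec,
                           cluster_c=(-0.70, 0.17), sigma_c=0.5,
                           field_c=(0.29, -0.50), field_s=(4.10, 2.85),
                           n_c=0.19, n_f=0.81):
    """Membership probability P_mu of equation (4). All PMs in mas/yr."""
    def gauss2d(dx, dy, vx, vy):  # bivariate normal, diagonal covariance
        return np.exp(-0.5 * (dx**2 / vx + dy**2 / vy)) / (2 * np.pi * np.sqrt(vx * vy))
    phi_c = gauss2d(pmra - cluster_c[0], pmdec - cluster_c[1],
                    sigma_c**2 + e_pmra**2, sigma_c**2 + e_pmdec**2)
    phi_f = gauss2d(pmra - field_c[0], pmdec - field_c[1],
                    field_s[0]**2 + e_pmra**2, field_s[1]**2 + e_pmdec**2)
    return n_c * phi_c / (n_c * phi_c + n_f * phi_f)
```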
By applying the membership criterion of considering as members only those stars with a membership probability P\({}_{\mu}\)\(>\) 80%, we identified 80 member stars of the clusterings in the S193 complex. Figure 3 shows the member stars as green circles. Figure 3 depicts that the member stars are quite effectively separated from the field stars at the brighter end, while at the fainter end they are not well separated (see top panel of Figure 3) because of the large uncertainties in the proper motion at fainter magnitudes (middle panel). In Table 3, we have tabulated the 80 member stars of the S193 complex.

### Reddening and distance of the S193 complex

A few distance estimates are available in the literature for the S193 complex, but with rather different values. Russeil et al. (2007) reported a distance of \(2.4\pm 0.32\) kpc based on the individual distances of the S192 and S193 H ii regions, while Qin et al. (2008) and Azimlu & Fich (2011) reported distances of 5.2 kpc and \(2.96\pm 0.54\) kpc, respectively. Qin et al. (2008) report a distance based on the galactic rotation curve and the accompanying cloud's CO velocity, whereas Russeil et al. (2007) and Azimlu & Fich (2011) report spectrophotometric distances.

Figure 1: (a) Colour composite image of the S193 complex made using 2MASS \(K\) (red), \(H\) (green), \(J\) (blue) band images. The surface density contours from the present analysis (see Section 3.1) are overplotted in yellow, while the core regions of the three clusterings ([BDS2003]57, T162, and SEC) are marked with red squares. (b) TANSPEC \(K\) (red), \(H\) (green), and \(J\) (blue) band colour composite image of the core region of the [BDS2003]57 clustering [FOV \(\sim 56^{\prime\prime}\times 56^{\prime\prime}\)]. TIRSPEC \(K\) (red), \(H\) (green) and \(J\) (blue) band colour composite images of the core regions of T162 [FOV \(\sim 86^{\prime\prime}\times 86^{\prime\prime}\)] and the SEC [FOV \(\sim 90^{\prime\prime}\times 90^{\prime\prime}\)] are shown in (c) and (d), respectively.

Out of the 80 member stars of the S193 complex, 23 stars have parallax values with good accuracy (i.e., error \(<\) 0.1 mas; red triangles in Figure 3) and distance estimates in Bailer-Jones et al. (2021); these are further used to constrain the distance of the clusters and of the S193 complex. We estimated the distance of each clustering in the S193 complex by taking the mean of the Bailer-Jones et al. (2021) distances of the above stars belonging to that particular cluster. In this way, the distance of the cluster [BDS2003]57 is found to be 4.59\(\pm\)1.4 kpc, while the distances of T162 and the SEC are found to be 4.96\(\pm\)0.67 kpc and 4.51\(\pm\)1.6 kpc, respectively. The distance of the whole S193 complex is found to be 4.84\(\pm\)1.4 kpc. The cluster distances match one another and the distance of the S193 complex within the errors, which suggests that all the clusterings are at the same distance and are part of the S193 complex. Our derived distance of the S193 complex matches the distances derived by Qin et al. (2008) and Chan & Fich (1995), while it differs from the distances estimated by Russeil et al. (2007) and Azimlu & Fich (2011). We have also calculated the distances of the massive stars in the complex using distance estimates from Bailer-Jones et al. (2021).
The distance of the massive star S192-1 is 5.99\(\pm\)0.45 kpc, while the distances of the massive stars S193-1 and S193-3 are found to be 5.06\(\pm\)0.64 kpc and 10\(\pm\)2 kpc, respectively. The distances of the massive stars S192-1 and S193-1 match the distance of the S193 complex within the errors, while S193-3 is much farther away. Russeil et al. (2007) also estimated the distance of the massive star S193-3 as 8.0 kpc and concluded that S193-3 is a background star and not a part of the S193 complex. We have also used the CMD of the stars in the S193 complex to further validate the derived distance of the S193 complex. This technique is reliable and has been widely used by many authors (for example, Phelps & Janes, 1994; Sharma et al., 2006, 2017, 2020). In Figure 4, we have shown the \((g-y)\) vs. \(g\) CMD of the stars in the S193 complex (black dots) taken from the PS1 archive (see Table 2). The member stars in Figure 4 are shown with green circles. The reddening value in the direction of S193 is taken from the Bayestar dust map by Green et al. (2019). We retrieved E(g-y) = 1.827 mag for our target from the extinction map of Green et al. (2019) and converted it to A\({}_{V}\) using the colour relations given by Wang & Chen (2019). We found the reddening value to be A\({}_{V}\) = 2.41 mag in the direction of the S193 complex. The ZAMS (continuous blue curve) and the PMS isochrone of 1.0 Myr (continuous red curve), both taken from Pastorelli et al. (2019), are also shown in the figure. Both the ZAMS and the isochrone are corrected for the reddening (E(g-y) = 1.827 mag) and distance (4.84 kpc). The ZAMS matches the distribution of the member stars, which confirms the adopted distance and reddening of the S193 complex.

Figure 2: PM vector-point diagrams for the [BDS2003]57 (a), Teutsch 162 (b), and SEC (c) clusterings.

Figure 3: Membership probability (P\({}_{\mu}\)), PM errors (\(\sigma_{PM}\)) and parallax of all the stars within the clustering (circle having radius 3\({}^{\prime}\)) identified in the S193 complex as a function of \(G\) magnitude. The probable member stars (P\({}_{\mu}>\)80 %) are shown by green circles, while 24 members of the S193 complex having parallax values with good accuracy (err \(<\) 0.1 mas) are shown by red triangles.

Figure 4: g vs. (g-y) CMD plotted using the PS1 data; the black dots represent the stars within the cluster, while green dots represent the member stars identified in the present analysis (see Section 3.2). The ZAMS and 1 Myr isochrone by Pastorelli et al. (2019), corrected for a distance of 4.84 kpc and reddening \(E(g-y)\) = 1.827 mag, are shown with blue and red continuous curves, respectively. The star symbols denote the previously identified massive stars and the bright member stars.

Apart from the previously identified massive stars (S192-1, S193-1, and S193-3), the S193 complex hosts a few other bright member stars, which are shown with star symbols in Figure 4. The star BG2 lies at the boundary of the S193 complex and has no radio or 22 \(\mu\)m emission traced around it (cf. Figure 9). This star is located at a distance of 2.75\(\pm\)0.21 kpc (Bailer-Jones et al., 2021) and is thus probably a foreground star. The stars BG1 and S194-1 lie inside the H ii regions S193 and S194, respectively, and seem to be associated with the radio emission, with an envelope-like structure visible in the WISE 22 \(\mu\)m band (cf. Figure 9). The locations of BG1 and S194-1 in the CMD indicate spectral types of B5V and B1.5V, respectively.
The star S192-1 (previously classified as B2.5V) appears to be around B1.5V, while S193-1 (previously B2.5V) appears to be around B2.5V in the CMD. With our radio analysis (cf. Section 3.6.2), we found that the ionizing source responsible for creating the H ii regions S192, S193, and S194 should have a spectral type between B0 and B0.5. Hence, BG1 (B5V) along with S193-1 (B2.5V) may be jointly responsible for creating the H ii region S193. The brightest massive stars S192-1 (B1.5V) and S194-1 (B1.5V) could be the ionizing sources of S192 and S194, respectively.

### Mass Function

The mass function (MF) is an important statistical tool for understanding the formation of stars and is expressed by a power law, \(N(\log m)\propto m^{\Gamma}\), and the slope of the MF is given as \(\Gamma=d\log N(\log m)/d\log m\), where \(N(\log m)\) is the number of stars per unit logarithmic mass interval. We have used our deep NIR data from the TANSPEC observations to generate the MF of the central cluster [BDS2003]57, which has the maximum number density among all three clusterings found in this region. We have utilized the NIR \(J\) versus \(J-H\) CMDs of the sources in the target region and in a nearby field region of equal area, decontaminated the former from foreground/background stars, and corrected for data incompleteness using a statistical subtraction method (cf. Pandey et al., 2022, 2020; Sharma et al., 2020, 2017, 2012, 2008). We have calculated the completeness factor (CF) for both the \(J\) and \(H\) bands and taken the minimum of them, according to the mean \(J-H\) colour (\(\sim 1\) mag) of the MS stars, as our final completeness value (see Sagar & Richtler, 1991). To determine the CF, we used the IRAF _ADDSTAR_ routine as described in our previous publications (Pandey et al., 2022, 2020). We have derived the CF for the TANSPEC and TIRSPEC data, which is shown as a function of \(J\)-band magnitude in the left panel of Figure 5. As expected, the incompleteness of the data increases with increasing magnitude; the photometric data are more than 50% complete up to \(\sim\)19.5 mag and \(\sim\)18.0 mag for the TANSPEC and TIRSPEC observations, respectively. As the TIRSPEC observations are shallower and we do not have field-region observations with TANSPEC, we have used the distribution of a nearby reference field in the TIRSPEC pointing; the completeness-corrected luminosity function (right panel of Figure 5) of this region is used to calculate the number of field stars between \(J\sim 18.0\) and 19.5 mag. In Figure 6, we have shown the \(J\) versus \(J-H\) CMDs for the stars lying within the central cluster '[BDS2003]57' in panel (a) and for those in the reference field region in panel (b). In panel (c), we have plotted the statistically cleaned \(J\) versus \(J-H\) CMD for the central cluster '[BDS2003]57' (age \(\sim\)2 Myr). The number of stars in different mass bins was then calculated by counting the stars along the reddening vector for a 2 Myr isochrone. The corresponding MF is plotted in Figure 7. The 50% completeness limit (J \(\sim\)19.5 mag) of the TANSPEC photometry corresponds to the detection limit of \(\sim 0.3\) M\({}_{\odot}\) stars of age \(\simeq\)2 Myr, embedded in the nebulosity up to \(A_{V}\simeq\)2.4 mag (foreground reddening) and \(\sim\)3 mag (differential reddening) (cf. Figure 6, panel (c)).
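The slope \(\Gamma\) in each mass range is then obtained from a least-squares fit of \(\log\phi=\log N(\log m)\) against \(\log m\). A minimal sketch follows; the binning choices and names are ours, not the exact pipeline used here.

```python
import numpy as np

def mf_slope(masses, bin_edges):
    """Least-squares estimate of the MF slope Gamma from a fit of
    log N(log m) (counts per unit log-mass interval) against log m.
    bin_edges are given in log10(mass)."""
    logm = np.log10(masses)
    counts, edges = np.histogram(logm, bins=bin_edges)
    centers = 0.5 * (edges[1:] + edges[:-1])
    keep = counts > 0
    phi = counts[keep] / np.diff(edges)[keep]   # N per unit dlog(m)
    gamma, _ = np.polyfit(centers[keep], np.log10(phi), deg=1)
    return gamma
```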
There seems to be a break in the slope of the MF distribution of the '[BDS2003]57' cluster at \(\sim\)1.5 M\({}_{\odot}\), and the slope \(\Gamma\) in the mass ranges \(3>M/M_{\odot}>1.5\) and \(1.5>M/M_{\odot}>0.3\) is estimated as \(-2.46\pm 0.01\) and \(+0.64\pm 0.20\), respectively (cf. Figure 7).

### YSO identification and classification

We have used the combined NIR catalogue (see Section 2) along with the \(Spitzer\) MIR data to identify the YSOs in the \(\sim 10^{\prime}\times 10^{\prime}\) FOV of the S193 complex. The classification schemes for identifying YSOs are discussed and explained in our previous publications (Pandey et al., 2020, 2022). In Figure 8(a) we show the NIR TCD (\([H-K]\) vs. \([J-H]\)) made by plotting the stars in our combined NIR catalogue (black dots); the thick magenta curves show the reddened MS and giant branches, the dotted magenta line shows the locus of dereddened CTTSs, and the dashed magenta lines show the reddening lines. The classification scheme of Gutermuth et al. (2009) is used to identify the YSOs using the NIR TCD. Figure 8(b) shows the \([[3.6]-[4.5]]_{0}\) vs. \([[K]-[3.6]]_{0}\) TCD made using the \(Spitzer\) data; we adopted the method of Gutermuth et al. (2009) to identify and classify the YSOs. Finally, we were able to identify 2 Class i and 25 Class ii YSOs using the MIR data, and 5 Class ii sources using the NIR data.

### Multiwavelength view of the S193 complex

#### 3.6.1 Distribution of the dust

In Figure 9a, we have shown the colour composite image of the S193 complex (\(\sim 10^{\prime}\times 10^{\prime}\) FOV) made by using the \(WISE\) 22 \(\mu\)m (red), \(Spitzer\) 3.6 \(\mu\)m (green), and \(IPHAS\) H\(\alpha\) (blue) images. The previously identified massive stars in the S193 complex (see Section 1) are shown with the star symbols, while a Be type star and candidate massive sources (see Section 1) are shown with the cross and plus symbols, respectively. The H ii regions in the S193 complex (S192, S193, and S194, marked in Figure 9b) are well traced by the H\(\alpha\) emission in the region. We can also see the warm dust envelopes around the massive stars and the central cluster ([BDS2003]57) traced by the \(WISE\) 22 \(\mu\)m emission. Prominent polycyclic aromatic hydrocarbon (PAH) features at 3.3 and 11.3 \(\mu\)m, which lie within the bandwidths of the 3.6 \(\mu\)m and 12 \(\mu\)m images, are indicative of PDRs (photo-dissociation regions) (see e.g., Peeters et al., 2004). The PDR regions traced by the 3.6 \(\mu\)m emission are distributed around the H ii regions in arc-type or circular sub-structures. The S192 H ii region, having PDRs circularly distributed around the warm gas with massive stars at the centre, looks like a classical example of a MIR bubble (cf. Churchwell et al., 2006). In Figure 9b, we have shown the colour composite image of the S193 complex made by using the \(WISE\) 22 \(\mu\)m (red), \(WISE\) 12 \(\mu\)m (green), and \(IPHAS\) H\(\alpha\) (blue) images. The image is also over-plotted with the radio continuum contours from the \(CGPS\) 1420 MHz map. The spatial distribution of the young stellar objects (YSOs; see Section 3.5) is also shown with magenta circles. The heated dust (22 \(\mu\)m emission) and the ionised gas (radio emission) seem to be distributed at the locations of the massive stars and are surrounded by the PDRs (12 \(\mu\)m emission).
The YSOs in this complex are generally located away from the massive stars, in regions with higher gas and dust emission (12 \(\mu\)m emission).

Figure 5: (a) Completeness factor as a function of \(J\) magnitude derived from the artificial star experiments (_ADDSTAR_, see Section 3.4 for details) on the TANSPEC and TIRSPEC \(J\) and \(H\) band images. The \(H\)-band completeness factor is offset by the mean colour of the MS stars (i.e., 1.0 mag). The continuous curves are smoothed Bézier curves fitted to the completeness data points. (b) Field star distribution generated using a nearby reference field (magenta circles) with the TIRSPEC \(J\) band data. The straight line is a least-squares fit to the data points.

Figure 6: \(J\) vs. \(J-H\) CMD for (a) stars in the [BDS2003]57 cluster region, (b) stars in the reference region, and (c) the statistically cleaned sample of stars in the [BDS2003]57 cluster region. The ZAMS (blue curve) and isochrone of 2 Myr (red curve) by Pastorelli et al. (2019), along with the reddening vectors for stars of different masses (black slanted dashed lines in panel (c)), are also shown. Stars which are on the left side of the 2 Myr isochrone in panel (c) are used to estimate the MF of the region. All the curves are corrected for the distance of 4.84 kpc and foreground extinction of A\({}_{V}\)=2.41 mag. The black horizontal dashed line in panel (c) represents the 50% completeness limit of the TANSPEC data.

We have also shown the \(Spitzer\) ratio map (4.5 \(\mu\)m/3.6 \(\mu\)m) in Figure 9c. The prominent molecular hydrogen (\(H_{2}\)) line emission (\(\nu\) = 0-0 \(S(9)\); 4.693 \(\mu\)m) and the Br-\(\alpha\) emission (at 4.05 \(\mu\)m) fall within the bandwidth of the \(Spitzer\) 4.5 \(\mu\)m band. Therefore, the bright and dark regions in this ratio map trace mainly the Br-\(\alpha\) emission and PDRs, respectively (for more details, see Pandey et al., 2020). Note that both of these _Spitzer_ images have the same PSF, allowing the removal of point-like sources as well as continuum emission (for more details, see Dewangan et al., 2017). In Figure 9c, the bright regions representing the \(H_{2}\) and Br-\(\alpha\) emission almost mimic the radio continuum emission from the hot ionised gas. The distribution of the dark lanes (PDRs) around the bright regions/radio emission indicates the impact of massive stars in the S193 complex. In Figure 9d, we show the same image as in Figure 9a, also overplotted with the \(JCMT\) (\({}^{12}\)CO(J =3\(-\)2)) contours (see Section 3.6.4). Interestingly, we can also see a bright patch at P1 (see Figure 9c) located away from the radio continuum emission. The cavities in the molecular cloud are clearly visible, which shows the impact the H ii regions have had on their surrounding environment. The CO emission seems to peak at [BDS2003]57 (P1), above the H ii region S193 (P2), and at the SEC clustering.

#### 3.6.2 Distribution of the ionised gas

To trace the distribution of ionised gas in the S193 complex, we have used both the radio continuum (free-free emission) and the H\(\alpha\) images. In Figure 9b, we can see that the H\(\alpha\) emission in the S193 complex is mostly associated with the H ii regions. In the H ii region S192, the H\(\alpha\) emission is almost circularly enclosed by the PDRs like a MIR bubble. In the case of the H ii regions S193 and S194, the H\(\alpha\) emission is more or less extended and is partially surrounded by the PDRs in arc-like sub-structures.
Radio continuum contours from the 1.42 GHz map of \(CGPS\) are also overplotted in Figure 9b. They also show a similar distribution, with separate peaks for each of the H ii regions. The massive star S192-1 (located inside the H ii region S192) and S193-1 (located inside the H ii region S193) might be the ionizing sources of the S192 and S193 H ii regions, respectively. There is no previously identified massive star inside the boundary of H ii region S194, but there is a bright star showing an envelope of warm dust at 22 \(\mu\)m (plus symbol), which could be the possible ionizing source of S194. By estimating the Lyman continuum flux associated with the ionised gas in each of the H ii regions, we can also verify the spectral type of the ionizing sources. Thus, this could help us constrain the massive stars responsible for creating the H ii regions S192, S193, and S194. We have used the following equation from Schmiedeke et al. (2016) to determine the Lyman continuum flux associated with each H ii region. \[\begin{split} N_{\rm UV}(s^{-1})=7.5\,\times\,10^{46}\,\left(\frac{S(\nu)}{\rm Jy}\right)\left(\frac{D}{\rm kpc}\right)^{2}\\ \left(\frac{T_{\rm e}}{10^{4}\rm K}\right)^{-0.45}\,\times\,\left(\frac{\nu}{\rm GHz}\right)^{0.1}\end{split} \tag{5}\] In the above equation, N\({}_{UV}\) denotes the number of Lyman continuum photons per second, T\({}_{\rm e}\) denotes the electron temperature, \(\nu\) is the frequency, S\({}_{\nu}\) is the integrated flux, and D is the distance of the S193 complex, i.e., 4.84 kpc. We have assumed that all ionization flux in an H ii region is generated by a single massive star and adopted the value of \(T_{\rm e}\) as 10000 K. To calculate the integrated flux, we have used the task jmfit in AIPS on the NVSS (1.4 GHz) map. The values of the integrated flux for the H ii regions S192, S193 and S194 are 67 mJy (\(\sigma\)=0.63 mJy/beam), 42.9 mJy (\(\sigma\)=0.54 mJy/beam), and 11 mJy (\(\sigma\)=0.44 mJy/beam), respectively. The radii of the H ii regions S192, S193, and S194 are found to be 1.85 pc, 1.79 pc, and 1.20 pc, respectively, at a distance of 4.84 kpc. The log(N\({}_{UV}\)) values for the H ii regions S192, S193, and S194 are then estimated as 47.08, 46.89, and 46.30, respectively. By comparing these log(N\({}_{UV}\)) values with those given in Panagia (1973), we have found that the spectral type of the ionizing sources for all the H ii regions in the S193 complex is between B0.5 and B0. We have also calculated the dynamical age of the H ii regions by using the following equation by Dyson & Williams (1980): \[t_{dyn}=\Big{(}\frac{4R_{s}}{7c_{s}}\Big{)}[\Big{(}\frac{R_{H\,ii}}{R_{s}}\Big{)}^{7/4}-1] \tag{6}\] where c\({}_{s}\) is the isothermal sound velocity in the ionised gas (c\({}_{s}\) = 11 km s\({}^{-1}\)) (Stahler & Palla, 2005), R\({}_{H\,ii}\) is the radius of the H ii region, and R\({}_{s}\) is the Stromgren radius of the H ii region, which is given by: \[R_{s}=\left(\frac{3N_{\rm UV}}{4\pi n_{0}^{2}\beta_{2}}\right)^{1/3} \tag{7}\] where n\({}_{0}\) is the initial ambient density (in cm\({}^{-3}\)) and \(\beta_{2}\) = \(2.6\times 10^{-13}\) cm\({}^{3}\) s\({}^{-1}\) is the total recombination coefficient to the first excited state of hydrogen (Stahler & Palla, 2005). The dynamical ages of the H ii regions S192, S193, and S194 for n\({}_{0}\)=10\({}^{3}\)(10\({}^{4}\)) cm\({}^{-3}\) are estimated as 0.6 (1.9) Myr, 0.6 (2.0) Myr, and 0.4 (1.4) Myr, respectively.
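As a consistency check on the numbers above, equations (5)-(7) can be evaluated directly. The sketch below (our own code, with CGS unit conversions hard-coded) reproduces, for example, log(N\({}_{UV}\)) \(\approx\) 47.08 and t\({}_{dyn}\)\(\approx\) 0.6 (1.9) Myr for S192.

```python
import numpy as np

def lyman_flux(S_Jy, D_kpc, Te_K=1e4, nu_GHz=1.4):
    """Equation (5): Lyman continuum photon rate N_UV in s^-1."""
    return 7.5e46 * S_Jy * D_kpc**2 * (Te_K / 1e4)**-0.45 * nu_GHz**0.1

def dynamical_age_Myr(N_uv, R_pc, n0=1e3, cs_kms=11.0, beta2=2.6e-13):
    """Equations (6)-(7): Stromgren radius, then dynamical age in Myr."""
    pc_cm, Myr_s = 3.086e18, 3.156e13
    Rs = (3.0 * N_uv / (4.0 * np.pi * n0**2 * beta2))**(1.0 / 3.0)  # cm
    cs = cs_kms * 1e5                                               # cm/s
    return (4.0 * Rs / (7.0 * cs)) * ((R_pc * pc_cm / Rs)**1.75 - 1.0) / Myr_s

# S192: S_nu = 67 mJy, D = 4.84 kpc, R = 1.85 pc
N_uv = lyman_flux(0.067, 4.84)
print(np.log10(N_uv))                          # ~47.08
print(dynamical_age_Myr(N_uv, 1.85, n0=1e3),   # ~0.6 Myr
      dynamical_age_Myr(N_uv, 1.85, n0=1e4))   # ~1.9 Myr
```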
Figure 7: A plot of the MF for the statistically cleaned CMD of the stellar sources in the [BDS2003]57 cluster region. Log \(\phi\) represents log(\(N\)/dlog \(m\)). The error bars represent \(\pm\sqrt{N}\) errors. The solid line shows the least-squares fit to the MF distribution (black dots). The open circle is the data point falling below the completeness limit of 0.3 M\({}_{\odot}\).

#### 3.6.3 Embedded \(Herschel\) clumps/condensations in the S193 complex

In this section, we examine the \(Herschel\) column density (\(N\)(H\({}_{2}\))) and temperature maps and identify the embedded structures in the S193 complex. We have used the maps produced for the _EU-funded ViaLactea project_ (Molinari et al., 2010a), which are publicly available3. A Bayesian PPMAP approach was applied to the \(Herschel\) images at 70, 160, 250, 350 and 500 \(\mu\)m to produce these maps (Molinari et al., 2010b; Marsh et al., 2015, 2017).

Footnote 3: [http://www.astro.cardiff.ac.uk/research/ViaLactea/](http://www.astro.cardiff.ac.uk/research/ViaLactea/)

We show the \(Herschel\) column density map of the S193 complex in Figure 10a. To identify clumps/condensations, we employed the clumpfind algorithm (Williams et al., 1994) on the \(Herschel\) column density map. clumpfind provides the total column density and the corresponding area of each clump as output. We identified 16 clumps in our target field, shown in Figure 10b. We further calculated the mass of each clump by employing the following equation:

\[M_{area}=\mu_{H_{2}}m_{H}Area_{pix}\Sigma N(H_{2}) \tag{8}\]

where \(\mu_{H_{2}}\) is the mean molecular weight per hydrogen molecule (i.e., 2.8), \(m_{H}\) is the mass of the hydrogen atom, and \(Area_{pix}\) is the area subtended by one pixel (pixel scale: 6\({}^{\prime\prime}\) pixel\({}^{-1}\)). \(\Sigma N\)(H\({}_{2}\)) is the total column density estimated using clumpfind (see also Dewangan et al., 2017a). The masses and sizes of the identified clumps are provided in Table 4. The clump masses range from 224.6 M\({}_{\odot}\) to 1141.9 M\({}_{\odot}\). The most massive clump (1141.9 M\({}_{\odot}\), ID=1) lies almost at the centre of the S193 complex, coinciding with the boundary of [BDS2003]57 and the peak P1 (see Section 3.6.1). Two other massive clumps (912.2 M\({}_{\odot}\), ID=2, and 527.8 M\({}_{\odot}\), ID=7) are located near P2 (see Section 3.6.1). We can also see two clumps (IDs 3 and 9) associated with the SEC clustering (see Section 3.1). In Figure 10c, we show the \(Herschel\) temperature map of the S193 complex. The S193 complex glows with a temperature greater than 17.5 K in the \(Herschel\) temperature map. A temperature of around 14 K is seen at P1, confirming the presence of a cold dust clump at this position.
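For illustration, the short sketch below (illustrative numbers, not the values of Table 4) shows how Equation (8) converts a summed H\({}_{2}\) column density into a clump mass; the conversion of the 6\({}^{\prime\prime}\) pixel to a physical area assumes the 4.84 kpc distance adopted in this work:

```python
# A minimal sketch (illustrative numbers, not Table 4) of the clump-mass
# estimate of Equation (8) from a summed H2 column density; the conversion of
# the 6-arcsec pixel to a physical area assumes the 4.84 kpc distance.
import numpy as np

MU_H2 = 2.8                    # mean molecular weight per hydrogen molecule
M_H = 1.6735575e-24            # hydrogen-atom mass [g]
M_SUN = 1.989e33               # solar mass [g]
PC_CM = 3.086e18               # cm per pc

def clump_mass_msun(sum_column_cm2, pixel_arcsec=6.0, distance_pc=4840.0):
    """M = mu_H2 * m_H * Area_pix * Sigma N(H2), with Area_pix in cm^2."""
    pix_cm = pixel_arcsec / 206265.0 * distance_pc * PC_CM   # pixel size at source
    return MU_H2 * M_H * pix_cm**2 * sum_column_cm2 / M_SUN

# e.g. a clump whose pixels sum to a total column density of 5e23 cm^-2
print(f"M ~ {clump_mass_msun(5.0e23):.0f} Msun")
```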
#### 3.6.4 Kinematics of the molecular gas in the S193 complex

To trace the distribution of the molecular gas and determine the gas kinematics in the S193 complex, we have used the \({}^{12}\)CO(J =3\(-\)2) \(JCMT\) and MWISP \({}^{13}\)CO(J =1\(-\)0) line data (cf. Section 2). In Figure 11, we show the \({}^{12}\)CO and \({}^{13}\)CO velocity profiles at six different locations in the direction of the S193 complex (m1-m6; see Figure 12a). At some of the locations we obtained a double-peaked velocity profile in \({}^{12}\)CO; these positions also host some of the molecular clumps identified by Azimlu & Fich (2011). However, at positions P1 and P2 (where the intensity is maximum) we did not obtain a double-peaked velocity profile. At positions m1 and m2 (cf. Figure 12a), the shape of the \({}^{13}\)CO profile almost mimics that of \({}^{12}\)CO, while at positions m3 and m4 we see a double-peaked velocity profile in \({}^{13}\)CO as well. Overall, the velocity profiles indicate a velocity range of [-43,-50] km s\({}^{-1}\), with double peaks at approximately -45 km s\({}^{-1}\) and -48 km s\({}^{-1}\) at a few particular locations. These double-peaked velocity profiles could indicate two molecular clouds with different velocity components that might be interacting with each other (see, for example, Dewangan et al., 2019).

Figure 8: a) \([H-K]\) vs. \([J-H]\) TCD for the sources corresponding to the same FOV. We show the reddened MS and giant branches (Bessell & Brett, 1988) by thick magenta dashed curves. The locus of dereddened CTTSs (Meyer et al., 1997) is shown by the dotted magenta line. We also show parallel magenta dashed lines drawn from the tip (spectral type M4) of the giant branch (left reddening line), from the base (spectral type A0) of the MS branch (middle reddening line), and from the tip of the intrinsic CTTS line (right reddening line). The increment of \(A_{V}=5\) mag along the reddening lines is shown with crosses; the red circles are the identified Class ii sources. b) \([[3.6]-[4.5]]_{0}\) vs. \([[K]-[3.6]]_{0}\) TCD of all the sources inside the FOV of the S193 complex (\(\sim 10^{\prime}\times 10^{\prime}\)). We used the colour criteria of Gutermuth et al. (2009) to identify and classify the YSOs. The identified Class i and Class ii YSOs are shown with green and red circles, respectively.

In Figure 12 we show the moment-0 (a), moment-1 (c), and moment-2 (d) maps of \({}^{12}\)CO(J =3\(-\)2). The intensity map (moment-0) is integrated over a velocity range of [-43,-50] km s\({}^{-1}\). The molecular gas shows a fragmented morphology around the H ii regions, with cavities created by the feedback from the massive stars. We can see two prominent peaks in the intensity map, one at P1 and another at P2, where we have also seen peaks in the \(Herschel\) column density map (see Section 3.6.3). The locations of the massive stars, the IRAS sources, and the YSOs identified in the present study are also shown in the figure. Most of the YSOs and IRAS sources are distributed towards peak P1. The ten molecular clumps identified by Azimlu & Fich (2011) are also shown in the figure and are well correlated with our moment-0 map. Azimlu & Fich (2011) reported the masses and radii of the detected clumps to be in the ranges of 7-61 M\({}_{\odot}\) and 0.2-0.5 pc, respectively.
Figure 9: (a) Colour composite image of the S193 complex (\(\sim 10^{\prime}\times 10^{\prime}\) FOV) made using \(WISE\) 22 \(\mu\)m (red), \(Spitzer\) 3.6 \(\mu\)m (green), and \(IPHAS\) H\(\alpha\) (blue) images. (b) Colour composite image of the S193 complex (\(\sim 10^{\prime}\times 10^{\prime}\) FOV) made using \(WISE\) 22 \(\mu\)m (red), \(WISE\) 12 \(\mu\)m (green), and \(IPHAS\) H\(\alpha\) (blue) images, overplotted with the CGPS radio continuum contours. The individual H ii regions (S192, S193, and S194) are marked over the image, and the magenta circles show the YSOs identified in the present study. The lowest radio contour is at 5 K and the step size is 0.5 K. (c) \(Spitzer\) ratio map of the S193 complex obtained by dividing the \(Spitzer\) 4.5 \(\mu\)m image by the \(Spitzer\) 3.6 \(\mu\)m image. (d) The same image as in panel (a), overplotted with the \(JCMT\) (\({}^{12}\)CO(J =3\(-\)2)) contours. The lowest contour is at 2 K km s\({}^{-1}\) and the step size is 4 K km s\({}^{-1}\). The arrows mark two prominent peaks (P1 and P2) in the CO emission. In the images, star symbols represent the previously identified massive stars, while triangles are the IRAS sources. We also show the previously identified Be star with a cross symbol and the candidate massive sources with a plus symbol.

These values are a bit lower than those we estimated using the \(Herschel\) column density maps; Azimlu & Fich (2011) adopted a different distance to the S193 complex, i.e., 2.96 kpc, than the present study (4.84 kpc). The most massive clump C3 (61\(\pm\)19 M\({}_{\odot}\)), together with clumps C5 and C6, lies within the peak P1, whereas the least massive clump C2 (7\(\pm\)2 M\({}_{\odot}\)), along with clump C1, is located towards the identified SEC clustering (cf. Figure 12a). There appears to be some correlation between the dust and gas clumps, as the most massive dust clumps are distributed towards peak P1 while the less massive ones are scattered towards the SEC (cf. Section 3.6.3). In Figure 12c, we show the first-moment (moment-1) map of \({}^{12}\)CO(J =3\(-\)2), which is the intensity-weighted mean velocity of the emitting gas. The first-moment map shows two velocity components, i.e., one mainly in the eastern direction at [-43,-46] km s\({}^{-1}\) and another mainly in the western direction at [-47,-50] km s\({}^{-1}\), entangling at P1 and P2. The moment-2 map, which shows the velocity dispersion, indicates relatively high values at P1 and P2. Figure 12b shows the moment-0 map of \({}^{13}\)CO(J =1\(-\)0). The \({}^{13}\)CO line is relatively optically thin and traces denser gas compared to \({}^{12}\)CO. The moment-0 map, integrated in a velocity interval of [-43,-50] km s\({}^{-1}\), peaks at P1 and P2 and shows almost the same morphology as \({}^{12}\)CO despite the lower resolution (cf. Section 2.2). In Figure 13a, we show the colour composite image of the S193 complex generated from the \({}^{12}\)CO maps at [-43,-46] and [-47,-50] km s\({}^{-1}\) in red and green colours, respectively. The molecular cloud near the massive stars seems to be blown away from them, as indicated by the \(CGPS\) radio contours located in the region of low brightness. In Figure 13b, we show the spatial connection of the two components, superimposed on the surface density map of the stars in the S193 complex (see Section 3.1). The stellar clustering, i.e., [BDS2003]57, and almost all the YSOs are located in the interaction zone P1. To check whether there is a connection between these two components in velocity space, we have also obtained position-velocity (PV) maps of \({}^{12}\)CO and \({}^{13}\)CO along multiple lines. These lines are marked over the moment-2 map of \({}^{12}\)CO (Figure 12d) and of \({}^{13}\)CO (Figure 13d); most of these lines pass through the interaction zones P1 and P2. We obtained the PV maps by choosing the width of the slices as 1\({}^{\prime}\); they are shown in Figure 14. Most of the PV maps obtained using \({}^{12}\)CO (Figure 14a) and \({}^{13}\)CO (Figure 14b) show a clear signature of a velocity gradient, with the velocity increasing from east to west. We see no apparent signature of two groupings in velocity space separated by diffuse emission.
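For readers unfamiliar with moment maps, the sketch below (synthetic data, not the JCMT cube) illustrates how the moment-0/1/2 maps discussed above are computed from a position-position-velocity cube:

```python
# A minimal sketch (synthetic data, not the JCMT cube) of how the moment-0/1/2
# maps used above are computed from a position-position-velocity cube.
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(-55.0, -38.0, 60)                       # velocity axis [km/s]
cube = rng.normal(0.0, 0.05, (60, 32, 32))              # noise floor [K]
cube += 2.0 * np.exp(-0.5 * ((v[:, None, None] + 45.0) / 1.5) ** 2)  # mock cloud

sel = (v >= -50.0) & (v <= -43.0)                       # window [-50,-43] km/s
dv = abs(v[1] - v[0])
w = cube[sel] * dv
mom0 = w.sum(axis=0)                                    # integrated intensity
mom1 = (w * v[sel, None, None]).sum(axis=0) / mom0      # mean velocity
mom2 = np.sqrt((w * (v[sel, None, None] - mom1) ** 2).sum(axis=0) / mom0)
print(mom0.mean(), mom1.mean(), mom2.mean())            # dispersion map = mom2
```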
#### 3.6.5 Feedback of massive stars in the S193 complex

We also examined the effect of feedback from the massive stars in the S193 complex by calculating the total feedback pressure. The total feedback pressure consists of the stellar wind pressure (P\({}_{wind}\)), the radiation pressure (P\({}_{rad}\)), and the H ii region pressure (P\({}_{H\,ii}\)) (Bressert et al., 2012; Dewangan et al., 2017b), which are given by the following equations (see Bressert et al., 2012, for details):

\[P_{rad}=L_{bol}/4\pi cD_{s}^{2}; \tag{9}\]

\[P_{H\,ii}=\mu m_{H}c_{s}^{2}\,\left(\sqrt{\frac{3N_{\rm UV}}{4\pi\,\alpha_{B}\,D_{s}^{3}}}\right); \tag{10}\]

\[P_{wind}=\dot{M}_{w}V_{w}/4\pi D_{s}^{2} \tag{11}\]

In the equations above, L\({}_{bol}\) denotes the bolometric luminosity of the ionizing source, \(D_{s}\) is the distance at which the pressure is evaluated, m\({}_{H}\) is the hydrogen atom mass, and c\({}_{s}\) is the sound speed in the photoionised region (c\({}_{s}\) = 11 km s\({}^{-1}\); Bisbas et al., 2009). N\({}_{\rm UV}\) denotes the Lyman continuum flux, \(\alpha_{B}\) is the radiative recombination coefficient, and \(\dot{M}_{w}\) and V\({}_{w}\) are the mass-loss rate and the wind velocity of the ionizing source, respectively.

Figure 11: The \({}^{12}\)CO(J =3\(-\)2) and \({}^{13}\)CO(J =1\(-\)0) profiles in the direction of the six small regions (i.e., m1 to m6; see Figure 12a).

Figure 10: (a) \(Herschel\) column density map of the S193 complex overplotted with the column density contours. (b) Clumps identified in the S193 complex using the \(Herschel\) column density map, marked with their corresponding IDs. (c) \(Herschel\) temperature map; the cold gas clump at position P1 is also marked. The positions of P1 and P2 are marked in all three panels. The symbols in all three panels are the same as in Figure 9.

Figure 12: a) \(JCMT\) (\({}^{12}\)CO(J =3\(-\)2)) moment-0 map, along with the four regions (m1 to m4) from which we extracted the spectra (red circles). The molecular clumps (C1 to C10) identified by Azimlu & Fich (2011) are also marked. (b) MWISP (\({}^{13}\)CO(J =1\(-\)0)) moment-0 map. c) \(JCMT\) (\({}^{12}\)CO(J =3\(-\)2)) moment-1 map. d) \(JCMT\) (\({}^{12}\)CO(J =3\(-\)2)) moment-2 map, overplotted with the lines along which the PV maps are obtained (see Figure 14).

Since there is ambiguity regarding the spectral type of the ionizing sources in the S193 complex, we consider both the lower and the upper limits of the spectral type for the pressure calculation. As the lower limit, we consider B2.5V as the spectral type of all the sources; B2.5V is the reported spectral type of S193-1 and S192-1, and for S194-1 it can also be treated as the lower limit. We adopt L\({}_{bol}\) = 5012 L\({}_{\odot}\) (Panagia, 1973), \(\dot{M}_{w}\)\(\approx\) 10\({}^{-10}\) M\({}_{\odot}\) yr\({}^{-1}\) and V\({}_{w}\)\(\approx\) 700 km s\({}^{-1}\) (Kobulnicky et al., 2019), and N\({}_{\rm UV}\) = 7.763 \(\times\) 10\({}^{44}\) (Panagia, 1973) for the B2.5V spectral type. We obtained the total pressure (\(P\) = \(P_{H\,ii}\)+\(P_{rad}\)+\(P_{wind}\)) for the S192, S193, and S194 H ii regions as 1.91 \(\times\) 10\({}^{-12}\) dynes cm\({}^{-2}\), 1.37 \(\times\) 10\({}^{-12}\) dynes cm\({}^{-2}\), and 4.08 \(\times\) 10\({}^{-12}\) dynes cm\({}^{-2}\), respectively. Adding these values to get the pressure due to all three H ii regions at the position of the central cluster [BDS2003]57 (\(P_{all}\) = \(P_{192}\)+\(P_{193}\)+\(P_{194}\)) yields \(\sim\) 7.36 \(\times\) 10\({}^{-12}\) dynes cm\({}^{-2}\). For the upper limit, we consider the spectral type B0.5, as suggested by our radio flux calculation (cf. Section 3.6.2). For a B0.5 star, we adopted L\({}_{bol}\) = 19952 L\({}_{\odot}\), \(\dot{M}_{w}\)\(\approx\) 2.5\(\times\) 10\({}^{-9}\) M\({}_{\odot}\) yr\({}^{-1}\), and V\({}_{w}\)\(\approx\) 1000 km s\({}^{-1}\) (Dewangan et al., 2017), and N\({}_{\rm UV}\) = 7.763 \(\times\) 10\({}^{44}\) (Panagia, 1973). We obtained the total pressure (\(P\) = \(P_{H\,ii}\)+\(P_{rad}\)+\(P_{wind}\)) for the S192, S193, and S194 H ii regions as 2.78 \(\times\) 10\({}^{-11}\) dynes cm\({}^{-2}\), 2.03 \(\times\) 10\({}^{-11}\) dynes cm\({}^{-2}\), and 5.6 \(\times\) 10\({}^{-11}\) dynes cm\({}^{-2}\), respectively. The pressure due to all three H ii regions then comes out to be \(\sim\) 1.06 \(\times\) 10\({}^{-10}\) dynes cm\({}^{-2}\). It is worth noting that the distance \(D_{s}\) is estimated without taking projection effects into account. The estimated distance therefore represents a lower limit, which leads to an upper limit on the estimated pressure values.
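The following minimal sketch evaluates the three pressure terms of Equations (9)-(11) with the B2.5V parameters quoted above; the mean particle weight \(\mu\) and the projected distance \(D_{s}\) used here are our own illustrative assumptions, not values from the paper:

```python
# A minimal sketch of the pressure terms of Equations (9)-(11), evaluated with
# the B2.5V parameters quoted above; the mean particle weight mu and the
# projected distance D_s are our own illustrative assumptions.
import numpy as np

C = 2.9979e10; M_H = 1.6736e-24; L_SUN = 3.828e33; M_SUN = 1.989e33
YR = 3.156e7; PC = 3.086e18
MU = 0.678                     # assumed mean particle weight in ionised gas
C_S = 11.0e5                   # sound speed in the photoionised gas [cm/s]
ALPHA_B = 2.6e-13              # recombination coefficient [cm^3/s]

def feedback_pressures(L_bol_Lsun, N_UV, Mdot_Msun_yr, V_w_kms, D_s_pc):
    D = D_s_pc * PC
    p_rad = L_bol_Lsun * L_SUN / (4.0 * np.pi * C * D**2)
    p_hii = MU * M_H * C_S**2 * np.sqrt(3.0 * N_UV / (4.0 * np.pi * ALPHA_B * D**3))
    p_wind = (Mdot_Msun_yr * M_SUN / YR) * (V_w_kms * 1.0e5) / (4.0 * np.pi * D**2)
    return p_rad, p_hii, p_wind

# B2.5V star seen from an assumed distance of D_s = 2 pc
for name, p in zip(("P_rad", "P_HII", "P_wind"),
                   feedback_pressures(5012.0, 7.763e44, 1.0e-10, 700.0, 2.0)):
    print(f"{name} = {p:.2e} dynes cm^-2")
```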
## 4 Discussion

The S193 complex shows diverse morphological features: young star clusters, three H ii regions, a few massive stars, and several molecular clumps. Figure 9 shows the distribution of the massive stars, the dust and gas, the PDRs, and the ionised gas, along with the YSOs in the S193 complex. The presence of YSOs indicates ongoing recent star formation activity in the region. In this section, we discuss the implications of the observed morphological features in the S193 complex and investigate the star formation scenario.

Most YSOs in the S193 complex are distributed toward the central region, populating the young star cluster [BDS2003]57. The other young star cluster, T162, is associated with the H ii region S193 and contains a massive star (S193-1). The new clustering identified in the south-west direction (SEC) hosts a few YSOs. We have estimated the MF slope for the central cluster [BDS2003]57 from our deep NIR data and found a change of the MF slope from the high- to the low-mass end, with a turn-off at around 1.5 M\({}_{\odot}\). Such a truncation of the MF slope at somewhat higher mass bins has often been noticed in other star-forming regions (SFRs) under the influence of massive OB-type stars (Pandey et al., 2020, 2008; Sharma et al., 2017, 2007; Jose et al., 2008). The higher-mass stars mostly follow the Salpeter MF value, i.e., \(\Gamma\)=-1.35 (Salpeter, 1955). At lower masses, the IMF is less constrained, but it appears to flatten below 1 M\({}_{\odot}\) and exhibits fewer stars of the lowest masses (Luhman et al., 2016; Lim et al., 2015; Chabrier, 2003; Kroupa, 2002).

Figure 13: (a) Two-colour composite image of the S193 complex produced using the \({}^{12}\)CO(J =3\(-\)2) intensity maps (red: [-43,-46] km s\({}^{-1}\), green: [-47,-50] km s\({}^{-1}\)); the _CGPS_ 1420 MHz contours are also shown in white. (b) Filled contours showing the surface density map of the S193 complex, overplotted with the \({}^{12}\)CO(J =3\(-\)2) contours showing the two velocity components. The blue dashed line shows the possible axis of collision.
While the higher-mass domain is thought to be mostly formed through fragmentation and/or accretion onto the protostellar core (e.g., Bonnell & Bate, 2006; Padoan & Nordlund, 2002), in the low-mass and substellar regime additional physics is likely to play an important role. The density, velocity fields, chemical composition, and tidal forces in the natal molecular clouds, as well as photo-erosion in the radiation field of nearby massive stars, can lead to different star formation processes and, consequently, to some variation in the characteristic mass (turn-off point) of the IMF (Bate, 2009; Bate & Bonnell, 2005; Whitworth & Zinnecker, 2004; Padoan & Nordlund, 2002).

Figure 14: PV maps obtained using the \({}^{12}\)CO(J =3\(-\)2) (a) and \({}^{13}\)CO(J =1\(-\)0) (b) data; these maps are obtained along the different lines (white lines in Figure 12d), as discussed in Section 3.6.4.

Each cluster identified in the S193 complex has an associated IRAS source. We see the presence of cold gas and dust clumps at the locations of [BDS2003]57 and the SEC in the \(Herschel\) and molecular maps. The [BDS2003]57 cluster is associated with the peak in the \(Herschel\) 500 \(\mu\)m emission and is traced with a relatively low temperature (\(\sim\)14 K) in the \(Herschel\) temperature map. We have also seen molecular line emission (a bright region in the IRAC ratio map) from [BDS2003]57. By analysing the \({}^{12}\)CO(J =3\(-\)2) data, we have identified the presence of two velocity components in the S193 complex at velocities around -45 km s\({}^{-1}\) and -48 km s\({}^{-1}\). Two spatially overlapping zones of the clouds, P1 and P2 (see Figure 12a), are observed, which peak in the intensity maps of the \({}^{12}\)CO(J =3\(-\)2) and \({}^{13}\)CO(J =1\(-\)0) emission. Zone P1 hosts the young open cluster [BDS2003]57, YSOs, a cold gas clump, and a massive \(Herschel\) clump (ID 1); three molecular clumps (C3, C5, and C6) also lie inside this overlapping zone. Zone P2 hosts two massive \(Herschel\) clumps (IDs 2 and 7; see Section 3.6.3) along with a YSO. Although in the \({}^{12}\)CO and \({}^{13}\)CO intensity maps (see Figures 12a and 12b) the space near the massive stars seems to be almost devoid of gas, the stars lie in very close proximity to the overlapping zones P1 and P2. Massive stars evolve faster than low-mass stars and affect their natal environment very soon; hence, the space near the massive stars does not always give a picture of the initial physical conditions. We show a blue dashed line in Figure 13b, which we can consider as the axis along which the two clouds collide. This axis passes through the overlapping zones P1 and P2 and covers the positions of the massive stars in the S193 complex. A few cavities are visible along this axis around the massive stars (cf. Figure 13b), filled with ionised gas and surrounded by PDRs, dust, and gas. These findings lead us to explore the CCC process in the S193 complex, which strongly favours the formation of massive dense cores, young clusters, and YSOs at the overlapping region of two interacting clouds (Torii et al., 2017; Fukui et al., 2018; Dewangan et al., 2019; Fukui et al., 2021). Many authors have listed the observational signatures of CCC (Torii et al., 2017; Dewangan et al., 2019). An important signature of CCC is the existence of a compressed layer of gas at the junction of the two clouds, known as the bridge-like feature (Haworth et al., 2015, 2019; Torii et al., 2017; Dewangan et al., 2019).
The bridge feature shows the connection of the two clouds in velocity space, which is observationally seen as two groupings in the PV maps separated by diffuse emission (Haworth et al., 2015a,b). We have produced PV maps along different lines in our \({}^{12}\)CO and \({}^{13}\)CO maps to investigate the bridge feature. The \({}^{12}\)CO(J =3\(-\)2) transition is known to be optically thin compared to other \({}^{12}\)CO transitions such as J=1-0 and J=2-1. It is a very useful tracer of medium-density (\(\sim\)10\({}^{4}\) cm\({}^{-3}\) at 20 K) material, while \({}^{13}\)CO is a tracer of the denser gas. In all of the PV maps (except x1 and x2; see Figure 14) for \({}^{12}\)CO (a) and \({}^{13}\)CO (b), one can see the velocity gradient from east to west but no significant feature suggesting a velocity connection between the two clouds. Another important observational signature of CCC is the spatial fit of an intensity-enhanced and an intensity-depressed region (Haworth et al., 2015, 2022): the intensity enhancement is termed the 'key', while the intensity-depressed feature is known as the 'key-hole'. In Figure 13a, where we have plotted the two velocity components in red and green colours, this kind of distribution is not observed. Thus, although the spatial connection of the two velocity components with the massive stars and the clustering found at their interface suggests a CCC in this region, the other signatures of CCC are not observed with the existing data sets. A possible reason for the absence of these observational signatures could be that feedback from the massive stars has diminished them, as three evolved H ii regions are distributed towards the S193 complex (Maity et al., 2022). Another possibility, namely that feedback from the massive stars played a role in forming the central cluster [BDS2003]57, is discussed in the following paragraph.

The S193 complex contains a few massive stars, and feedback from them (intense UV radiation, stellar winds) could also be the reason behind the observed velocity gradient in the S193 complex. The massive stars in the S193 complex strongly affect their surroundings, as is evident from the PDR structures and the cavities formed in the cloud. Interestingly, the central cluster [BDS2003]57 is surrounded by the H ii regions S192 (from the south), S193 (from the north-east), and S194 (from the north-west), as we can see radio emission from the H ii regions surrounding the cluster (see Figure 15). To investigate the effect of feedback from the massive stars on the cluster, we show a two-colour composite image of the S193 complex made using the CGPS 1.42 GHz radio continuum image (red) and the \({}^{12}\)CO(J =3\(-\)2) moment-0 map (green) in Figure 15. Taking a close look at the central region hosting the cluster, it is visible that this region shows a curved morphology on all three sides where the radio continuum emission interacts with the molecular cloud (shown with magenta arrows in Figure 15). This signature strongly suggests compression of the cloud due to the interaction with the ionised gas traced by the radio emission; interestingly, we also see PDR structures at the interaction zones, further strengthening this idea. This central region also hosts molecular clumps, dust condensations, YSOs, and the cluster. The compression of the molecular cloud from the sides of all three H ii regions could be responsible for the formation of the clumps and, subsequently, the cluster [BDS2003]57, which we can attribute to the positive feedback of the massive stars.
To examine this scenario quantitatively, we also calculated the pressure exerted by the massive stars at the position of the cluster [BDS2003]57; such pressure calculations have been used as one of the main arguments supporting the feedback scenario (Pandey et al., 2020). Considering the ambiguity regarding the spectral type of the massive stars in the region, we calculated the upper and lower limits of the total pressure due to all three H ii regions at the position of [BDS2003]57. The lower limit of the total pressure (7.36 \(\times\) 10\({}^{-12}\) dynes cm\({}^{-2}\)) is similar to the pressure of a cool molecular cloud (\(P_{MC}\)\(\sim\)10\({}^{-11}\)-10\({}^{-12}\) dynes cm\({}^{-2}\) for a temperature \(\sim\)20 K and a particle density \(\sim\)10\({}^{3}\)-10\({}^{4}\) cm\({}^{-3}\); see Table 7.3 of Dyson & Williams, 1980). However, the upper limit of the total pressure (1.06 \(\times\) 10\({}^{-10}\) dynes cm\({}^{-2}\)) is greater than that of a typical cloud. If taken at face value, the upper limit of the pressure suggests that the feedback of the massive stars has triggered the collapse of the cloud; however, a detailed spectroscopic analysis of the massive stars would shed more light in this regard. In our analysis, we found the dynamical ages of the H ii regions S192, S193, and S194, for an ambient density n\({}_{0}\)=10\({}^{3}\) (10\({}^{4}\)) cm\({}^{-3}\), to be 0.6 (1.9) Myr, 0.6 (2.0) Myr, and 0.4 (1.4) Myr, respectively. The dynamical ages of these H ii regions for an ambient density of n\({}_{0}\)=10\({}^{4}\) cm\({}^{-3}\) seem to be high enough to trigger the formation of the YSOs, which are considered to have average ages of 0.46 Myr (Class i) and 1-3 Myr (Class ii) (Evans et al., 2009). Since we propose that the star formation in the central cluster [BDS2003]57 is driven by the compression of pre-existing material by the surrounding H ii regions, considering the ambient density at the higher end is a good approximation and suits the triggered star formation scenario.

## 5 Summary and conclusions

In this paper, we aimed to understand the formation of massive stars and young star clusters in the S193 complex. We probed the physical environment of the S193 complex by carefully examining multiwavelength infrared data and molecular line data. We present the observational findings and the conclusions drawn from them as follows.

* We have identified three clusterings of stars in the S193 complex: two previously identified young open clusters and a new grouping of stars in the south-west direction. The membership of the identified clusterings is constrained using the Gaia-DR3 data, and the clusters seem connected in proper-motion space. The distance of the S193 complex is estimated as 4.84 kpc from the PM data.
* The distribution of dust in the S193 complex is traced using the MIR and FIR images. The dust is distributed in arc-type structures around the H ii regions. The PDRs are also traced around the H ii regions, indicating significant heating/impact by the massive stars. We also traced Br\(\alpha\) emission from the H ii regions, whereas prominent molecular hydrogen line emission is observed towards the cluster [BDS2003]57.
* Using the observed NIR data and the MIR data from \(Spitzer\), we identified 27 YSOs in the S193 complex. Of these, two are Class i and twenty-five are Class ii objects. Most of the YSOs are distributed towards the cluster [BDS2003]57.
* We have also identified 16 clumps in the S193 complex using the \(Herschel\) column density map. The most massive clump is seen towards the central cluster [BDS2003]57, accompanying several YSOs. This clump is relatively cold and traced with a temperature of \(\sim\)14 K in the \(Herschel\) temperature map.
* Using our observed NIR data, we have also calculated the slope of the MF for the central cluster [BDS2003]57. We found a break in the slope of the MF at 1.5 M\({}_{\odot}\). In the mass range \(3.0>M/M_{\odot}>1.5\) the slope is found to be \(-2.46\pm 0.01\), while in the mass range \(1.5>M/M_{\odot}>0.3\) it is found to be \(+0.64\pm 0.20\).
* The distribution of the ionised gas in the S193 complex is examined using the H\(\alpha\) and the \(CGPS\) 1420 MHz images. Using the radio flux, we further constrained the spectral type (B0.5-B1) of the ionizing sources of the H ii regions S192, S193, and S194.
* We traced two velocity components towards the S193 complex, with velocities of [-43,-46] and [-47,-50] km s\({}^{-1}\). Two prominent spatially overlapping zones, P1 and P2, peaking in the intensity map, were traced towards the centre of the complex and right above the H ii region S193. The overlapping zone P1 hosts the [BDS2003]57 clustering, a cold gas clump, and YSOs.
* We explored the CCC scenario in the S193 complex, and it appears that the cause of the formation of a new generation of stars in this region is not the CCC process but the feedback from the massive stars.
* We have investigated the possible role of feedback in forming the central cluster [BDS2003]57 by calculating the total pressure and inspecting the physical environment around the central clump. It suggests that the massive stars could have played a role in forming the central cluster, although a detailed spectroscopic and age analysis is needed to shed more light on this aspect.

Figure 15: Two-colour composite image produced using the CGPS 1420 MHz radio continuum image (red) and the \({}^{12}\)CO(J =3\(-\)2) moment-0 map (green). In this figure, the massive stars S193-1 and S193-2 are shown with star symbols, while the candidate massive star S194-1 is shown with a plus symbol. The black box represents the core region of the cluster [BDS2003]57, the white circles are the YSOs, and the blue lines connect the cluster with the massive stars. The \(Spitzer\) ratio map (see Figure 9c), enclosed within a white box, is shown in the right panel.

## Acknowledgments

We thank the anonymous referee for the constructive and valuable comments that helped to improve the manuscript. We thank the staff at the 3.6m DOT, Devasthal (ARIES), and the IR astronomy group at TIFR for their cooperation during the TANSPEC observations. We also acknowledge the TIFR Near Infrared Spectrometer and Imager mounted on the 2m HCT, with which we have made the NIR observations. This research made use of data from the Milky Way Imaging Scroll Painting (MWISP) project, which is a multi-line survey in 12CO/13CO/C18O along the northern Galactic plane with the PMO 13.7m telescope. We are grateful to all the members of the MWISP working group, particularly the staff members at the PMO 13.7m telescope, for their long-term support. MWISP was sponsored by the National Key R&D Program of China with grant 2017YFA0402701 and by the CAS Key Research Program of Frontier Sciences with grant QYZDJ-SSW-SLH047. D.K.O. acknowledges the support of the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4002.
## Data availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2309.04514
A High-Sensitivity Radon Emanation Detector System for Future Low-Background Experiments
Radioactive radon atoms originating from the long-lived primordial $^{238}\mathrm{U}$ and $^{232}\mathrm{Th}$ decay chains are constantly emanated from the surfaces of most materials. The radon atoms and their radioactive daughter isotopes can significantly contribute to the background of low-background experiments. The $^{222}\mathrm{Rn}$ progeny $^{214}\mathrm{Pb}$, for example, dominates the background of current liquid xenon-based direct dark matter detectors. We report on a new detector system to directly quantify the $^{222}\mathrm{Rn}$ surface emanation rate of materials. Using cryogenic physisorption traps, emanated radon atoms are transferred from an independent emanation vessel and concentrated within the dedicated detection vessel. The charged daughter isotopes are collected electrostatically on a silicon PIN photodiode to spectrometrically measure the alpha decays of $^{218}\mathrm{Po}$ and $^{214}\mathrm{Po}$. The overall detection efficiency is $\sim 36\,\%$ for both polonium channels. The radon emanation activity of the emanation vessel was measured to be $(0.16\pm 0.03)\,\mathrm{mBq}$, resulting in a detection sensitivity of $\sim 0.06\,\mathrm{mBq}$ at $90\,\%\,\mathrm{C.L.}$.
D. Wiebe, S. Lindemann, M. Schumann
2023-09-08T16:19:48Z
http://arxiv.org/abs/2309.04514v2
# A High-Sensitivity Radon Emanation Detector System for Future Low-Background Experiments

###### Abstract

Radioactive radon atoms originating from the long-lived primordial \({}^{238}\)U and \({}^{232}\)Th decay chains are constantly emanated from the surfaces of most materials. The radon atoms or their radioactive daughter isotopes can significantly contribute to the background of low-background experiments; e.g., the \({}^{222}\)Rn progeny \({}^{214}\)Pb dominates the background of the liquid xenon detectors which are currently leading the direct search for WIMP dark matter. We report on a new detector system to directly quantify the \({}^{222}\)Rn surface emanation of materials. Using cryogenic physisorption traps, emanated radon atoms are transferred from an independent emanation vessel and concentrated inside the dedicated detection vessel, where the charged daughter isotopes, most importantly \({}^{214}\)Po and \({}^{218}\)Po, are electrostatically collected and detected on a silicon PIN photodiode. The overall detection efficiency is \(\sim\)36 % for both polonium channels. The intrinsic detection vessel background was measured to be \(\sim 2.4\) cpd (28 \(\upmu\)Bq) and \(\sim 1.5\) cpd (17 \(\upmu\)Bq) for \({}^{218}\)Po and \({}^{214}\)Po, respectively. The radon emanation activity of the emanation vessel was determined to be \((0.16\pm 0.03)\) mBq, resulting in a detection sensitivity of \(\sim\)59 \(\upmu\)Bq (at 90 % C.L.).

radon detector, radon emanation, electrostatic collection, alpha spectrometry, material screening, ultra-low background, rare-event search, direct dark matter detection

+
Footnote †: preprint: Prepared for submission to JINST

## 1 Introduction

The best constraints on spin-independent WIMP-nucleon scattering for WIMP masses \(>5\,\mathrm{GeV}/c^{2}\) to date come from ultra-low background experiments using dual-phase xenon time projection chambers (TPCs) [1; 2; 3; 4]. These instrument tonne-scale liquid xenon (LXe) targets to detect both the scintillation and ionization signals generated by the interaction of particles with the target xenon atoms. The ratio of light to charge signal is used to distinguish between nuclear recoil (NR, WIMP-like) and electronic recoil (ER, background-like) signals. One of the dominant backgrounds in these experiments is the leakage of low-energy beta decays of \({}^{214}\)Pb (ER signals), a daughter of \({}^{222}\)Rn, into the WIMP NR-signal region: radon is part of the ubiquitous primordial \({}^{238}\)U decay chain and emanates off any detector component via recoil ejection and diffusion. Due to its comparably long half-life of 3.8 d and its chemical inertness, it distributes within the LXe target. 9.3 % of the \({}^{214}\)Pb daughter atoms beta-decay directly to the ground state. Consequently, these radon-induced ER background events can neither be mitigated via target fiducialization nor by tagging coincident nuclear de-excitations with emission of a gamma ray. The XENONnT experiment has recently reported a radon activity concentration of 1.8 \(\upmu\)Bq/kg [5], which has since been reduced further [4]. Future experiments with a multi-ton-scale LXe target, such as DARWIN [6] or XLZD [7], aim at exploring the entire WIMP parameter space accessible to the LXe TPC technology [8; 9] and offer an interesting neutrino physics program [10; 11; 12].
To reach the design sensitivity, their ER and NR backgrounds must be dominated by irreducible interactions of astrophysical neutrinos [6], requiring a reduction of the \({}^{222}\)Rn concentration to 0.1 \(\upmu\)Bq/kg [9]. This corresponds to an order-of-magnitude improvement with respect to the best experiments of the current generation [13]. This challenging goal will be met by a combination of background mitigation methods: surface treatment/coating [14], detector design [15; 16; 17], active radon removal [18], as well as using only ultra-low-emanation materials for all detector parts in direct contact with the LXe. As radon emanation is a surface effect, bulk measurements of the \({}^{226}\)Ra activity via standard gamma spectrometry can be misleading. Thus, the radon emanation rate of all materials in contact with the LXe must be quantified by means of highly sensitive radon emanation detectors.

The concept of the _electrostatic radon emanation chamber_ [19; 20] presented in this work is by now an integral part of the radiopurity assays of modern rare-event searches [21; 22; 23; 24; 25]. Such an instrument consists of a gas-tight vacuum vessel that houses a silicon PIN photodiode set to negative high voltage, creating an electrical drift field with respect to the vessel on ground potential. \({}^{222}\)Rn atoms present in the vessel will eventually decay, leaving the daughter \({}^{218}\)Po in a positively charged state with a probability of \(\sim 90\,\%\) as a consequence of the alpha-decay recoil [26]. Accelerated in the drift field, the ionized daughters can be collected electrostatically on the surface of the PIN diode, where the radon daughters \({}^{218}\)Po and \({}^{214}\)Po are identified by the spectrometric measurement of their alpha-decay energies. Modeling the evolution of the detected activity of the radon daughters during the measurement allows the inference of the initial radon emanation rate.

This work presents the design, construction, and performance of the _MonXe_ radon emanation detector. Section 2 describes the working principle and experimental setup of the detector. Section 3 explains the operation of the instrument. Section 4 shows its performance in terms of background, efficiency, and sensitivity. Section 5 presents exemplary screening measurements of a high-activity zeolite granulate and a low-activity PTFE sample. The article concludes in Section 6 with a summary and an outlook on future optimizations and planned measurements.

## 2 Experimental Setup

The MonXe radon detector system features a set of two decoupled vessels for the emanation and detection of radon, respectively. It allows the assessment of the emanation of large and bulky samples while optimizing the radon detection in a reproducible measurement procedure. The hemispherical shape of the detection vessel (volume: 1.2 liters, radius: 7.7 cm) was optimized in terms of electrostatic collection efficiency via dedicated particle-tracking simulations taking into account diffusion effects. The PIN diode is installed in the central bore of the vessel's CF160 flange, with the diode surface being aligned with the inner flange plane. The CF160 flange features six additional CF16 flanges to connect sensors and the concentration line. The detection vessel's inner surface is electropolished to minimize its intrinsic radon emanation. The cylindrical emanation vessel has a height of 40.0 cm and an inner diameter of 25.4 cm, corresponding to a volume of 20.4 l. It is closed off with CF250 flanges on both sides.
A photograph of the radon emanation detector is shown in the left panel of Figure 1; the gas system connecting both vessels is sketched in the right panel. The entire system was built from CF/Conflat and VCR metal-sealed UHV components and exhibits a leak rate below \(10^{-9}\,\mathrm{mbar}\,\mathrm{l/s}\). Samples of arbitrary material, size, and shape can be placed inside the emanation vessel, which is evacuated after installation of the sample and afterwards refilled with 1 bara of purified helium. In principle, several emanation vessels could be installed in parallel to speed up an extensive measurement campaign.

Figure 1: Photograph (left) and schematic (right) of the MonXe radon emanation detector. The examined sample (green brick) is placed inside the emanation vessel (EV). The emanated radon atoms are transferred into the hemispherical detection vessel (DV), where the activity is measured using a PIN diode (PD, magenta). The transfer occurs by evacuating the emanation vessel using a turbo molecular pump (TP) through the transfer trap (TT), which is cooled to liquid nitrogen temperature. In this trap, radon atoms adsorb onto activated charcoal (black spheroids). A hot zirconium getter (HG) installed in series removes other contaminants, such as O\({}_{2}\) and H\({}_{2}\)O, from the gas. By heating the transfer trap (TT), the radon atoms desorb again and are guided into the detection vessel by a flow of helium, which is purified by cryogenic activated charcoal housed inside the purification trap (PT).

The emanation rate of a sample is assessed by transferring the emanated radon atoms into the detection vessel via cryogenic physisorption on activated charcoal (charcoal: Blücher Saratech 100050-VC000021). Helium (grade 5.0) is used as carrier gas; the gas bottle is directly attached to the _purification trap_ (PT), which is kept at cryogenic temperature during operation by immersing it into a liquid nitrogen bath. This purifies the helium gas introduced into the system, removing radon and other contaminants. The emanation vessel is filled with helium gas, which is subsequently extracted from the system by means of a vacuum pump via the cold _transfer trap_ (TT), where the radon atoms are adsorbed. The purification (transfer) trap is made of an electropolished stainless steel cylinder of 10.0 cm length and 4.0 cm (1.2 cm) inner diameter to accommodate 75 g (10 g) of activated charcoal. By subsequently heating the transfer trap to 175 °C, the radon gets desorbed and is flushed into the detection vessel using clean He gas. The pressures in the emanation and detection vessels are monitored during the radon transfer and measurement phases using OMEGA PX409 sensors. A hot zirconium getter (SAES MonoTorr PS3-MT3-R-2) is installed between the emanation vessel and the gas system's main line to remove electronegative impurities outgassing from the sample, which might affect the electrostatic collection efficiency.

The hemispherical detection vessel houses the silicon PIN photodiode (Hamamatsu S3590-09) with a photosensitive area of 10 mm \(\times\) 10 mm. The diode does not feature a protective epoxy cover and directly exposes its p-layer to minimize the absorption of the energy of the impinging alpha particles in an inactive material layer. The diode is embedded in a PTFE cylinder installed in a CF40 double nipple centered on the flange of the detection vessel. Its pins are connected to two SHV coaxial feedthroughs.
Their air sides are directly connected to a custom-developed front-end electronics module. It provides a high voltage of \(-1.0\,\mathrm{kV}\) to the diode to establish an almost radial electrical collection field between the grounded vessel and the diode. A battery installed in series provides the reverse bias voltage of \(9\,\mathrm{V}\) to the diode. The analog current signal of the diode is capacitively decoupled from the high-voltage circuit and fed into a two-stage low-noise preamplifier with a total transimpedance gain of \(\sim 10^{8}\,\Omega\) and a bandwidth of \(100\,\mathrm{kHz}\). Low- and high-pass filters reduce the electronic noise. The shaped signals from alpha decays of \({}^{214}\)Po (\(7.7\,\mathrm{MeV}\)) and \({}^{218}\)Po (\(6.0\,\mathrm{MeV}\)) create amplitudes of \(750\,\mathrm{mV}\) and \(500\,\mathrm{mV}\), respectively. The typical decay time of the signals is \(\sim 50\,\mathrm{\upmu s}\). They are digitized and analyzed by a 14-bit multichannel analyzer (CAEN DT5781a) sampling the signal at \(100\,\mathrm{MS/s}\). An event is read out if the pulse exceeds a threshold that is set sufficiently low such that it is surpassed by any relevant alpha signal. For every event, the timestamp, pulse height, and raw waveform data are stored. Storage of the raw data can be disabled; however, especially during detector commissioning, direct access to the waveform data proved very useful. During every measurement, which consists of the radon transfer and data acquisition phases, ambient and process parameters, such as temperatures and pressures, are monitored and stored in a database. A custom-developed lightweight slow control system running on an industry-grade microcontroller (KUNBUS RevPi Core 3+) is used for that purpose.

## 3 Measurement Procedure and Activity Model

\({}^{226}\)Ra has a long half-life of 1600 years, and \({}^{222}\)Rn is thus assumed to be produced and emanated with a constant radon emanation activity \(\mathcal{R}\). The emanation rate of a sample is determined by measuring the alpha decays of the radon daughter isotopes \({}^{214}\)Po and \({}^{218}\)Po. The standard measurement procedure with the radon emanation detector consists of three phases: _radon emanation_, _radon transfer_, and the _polonium activity measurement_. The activities of the various isotopes in these phases can be computed analytically by solving the respective rate equations; the time evolution of every isotope depends on its own radioactive decay and the decays of its mother isotopes.

Radon Emanation: The sample under study is closed off in the emanation vessel (EV), which is subsequently evacuated and filled up with purified helium to atmospheric pressure. Governed by the \({}^{222}\)Rn half-life of \(3.82\,\mathrm{d}\), the emanated radon activity within the emanation vessel asymptotically approaches the secular equilibrium activity \(\mathcal{R}^{\mathrm{sample}}\).
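A minimal sketch (not MonXe analysis code) of this ingrowth behavior is given below; it evaluates the saturation curve \(A(t)=\mathcal{R}\,(1-e^{-\lambda_{\mathrm{Rn}}t})\) after one, two, and many half-lives:

```python
# A minimal sketch (not MonXe analysis code) of the emanation-phase ingrowth:
# the accumulated 222Rn activity approaches secular equilibrium with the
# constant emanation rate R as A(t) = R * (1 - exp(-lambda * t)).
import numpy as np

T_HALF_RN222 = 3.8235                          # 222Rn half-life [days]
LAM_RN = np.log(2.0) / T_HALF_RN222            # decay constant [1/day]

def emanated_activity(R, t_days):
    """222Rn activity after an emanation time t, in the units of R."""
    return R * (1.0 - np.exp(-LAM_RN * np.asarray(t_days, dtype=float)))

for t in (3.8, 7.7, 28.0):                     # ~1 and ~2 half-lives, four weeks
    print(f"t = {t:5.1f} d : A/R = {emanated_activity(1.0, t):.3f}")
```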
Radon Transfer: After a few \({}^{222}\)Rn half-lives (the exact time is optimized depending on the expected emanation rate of the sample), the accumulated radon atoms are concentrated and transferred from the emanation vessel into the detection vessel (DV). The emanation vessel is evacuated via the cold transfer trap TT, which is immersed in liquid nitrogen. Once the extracted radon atoms are adsorbed onto the porous charcoal, the trap is closed off and then heated up to \(175\,^{\circ}\mathrm{C}\). The radon atoms desorb and mix with the carrier gas. By connecting the transfer trap to the previously evacuated detection vessel, the carrier gas and hence the majority of the radon atoms expand into the detection vessel. The remaining radon atoms are collected by flushing purified helium through the transfer trap into the detection vessel up to an absolute pressure of 1.0 bara.

Polonium Activity Analysis: Most impurities, e.g., contaminants from outgassing and the radon decay products themselves, are removed by the hot getter during the transfer from the emanation vessel into the detection vessel. Thus, the sample signal activities of \({}^{218}\)Po and \({}^{214}\)Po inside the detection vessel, \(A_{{}^{214}\text{Po}}^{\text{sample}}(t)\) and \(A_{{}^{218}\text{Po}}^{\text{sample}}(t)\), are zero at the start of the measurement \(t=t_{\text{meas}}^{0}\). Once the transfer is finished, they increase solely due to the exponential radioactive decay of the trapped radon sample atoms. \({}^{218}\)Po and \({}^{214}\)Po decays are identified by their respective energies in the MCA spectrum, see Figure 3 (right). The number of detected events \(n^{\text{meas}}\) from a certain polonium isotope in a measurement interval \(\Delta t_{\text{meas}}\) is given by

\[n^{\text{meas}}=\varepsilon^{\text{det}}\,\int_{\Delta t_{\text{meas}}}\!\!A^{\text{sample}}(t)\,\text{d}t\ +\bar{n}^{\text{DV}}\ +\bar{n}^{\text{EV}}\quad, \tag{1}\]

which takes into account the detection efficiency \(\varepsilon^{\text{det}}\) (see Section 4.2) and the mean numbers of expected background events from both the detection vessel \(\bar{n}^{\text{DV}}\) and the emanation vessel \(\bar{n}^{\text{EV}}\) (see Section 4.1). Since the polonium signal activities \(A^{\text{sample}}(t)\) can be expressed in analytical form (as the solution of a system of coupled, inhomogeneous first-order differential equations), Equation (1) can be solved for the radon activity at \(t_{\text{meas}}^{0}\). By additionally taking into account the durations of the radon emanation and transfer phases, one can then infer the radon emanation activity \(\mathcal{R}^{\text{sample}}\) of the sample under study.
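To illustrate the box-counting idea behind Equation (1), the following sketch (not the MonXe analysis code; the counts are illustrative) integrates the \({}^{222}\)Rn decay chain numerically instead of using the analytical solution, and then inverts Equation (1) for the initial radon activity:

```python
# A minimal sketch (not the MonXe analysis code) of the box-counting idea of
# Equation (1): evolve the 222Rn decay chain, integrate the 214Po activity over
# the measurement window, and invert for the initial radon activity.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

LN2 = np.log(2.0)
# decay constants [1/s]: 222Rn, 218Po, 214Pb, 214Bi
# (the 164 us 214Po decay follows its 214Bi mother essentially instantaneously)
LAM = LN2 / np.array([3.8235 * 86400.0, 3.098 * 60.0, 26.8 * 60.0, 19.9 * 60.0])

def chain(t, n):
    """dN/dt for the linear 222Rn decay (sub-)chain; n[i] are atom numbers."""
    dn = -LAM * n
    dn[1:] += LAM[:-1] * n[:-1]
    return dn

def po214_counts_per_bq(t_meas_s, eps_det=0.363):
    """Expected number of detected 214Po counts per 1 Bq of 222Rn at t=0."""
    n0 = np.array([1.0 / LAM[0], 0.0, 0.0, 0.0])   # 1 Bq of radon, no daughters
    t = np.linspace(0.0, t_meas_s, 5000)
    sol = solve_ivp(chain, (0.0, t_meas_s), n0, t_eval=t, rtol=1e-8, atol=1.0)
    a_po214 = LAM[3] * sol.y[3]                    # 214Po activity = 214Bi activity
    return eps_det * trapezoid(a_po214, t)

# invert Equation (1) for a hypothetical ten-day measurement
n_meas, n_bkg = 1500.0, 40.0                       # illustrative numbers
print(f"R(t0) ~ {(n_meas - n_bkg) / po214_counts_per_bq(10 * 86400.0) * 1e3:.1f} mBq")
```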
## 4 Detector Performance

In this section, we present results on the performance of the radon emanation detector obtained during detector commissioning.

### 4.1 Backgrounds

During a sample measurement, the surfaces of both the detection vessel (DV) and the emanation vessel (EV) also emanate \({}^{222}\)Rn atoms that, along with the leakage of other decays and detector artifacts, contribute to the overall number of measured events \(n^{\text{meas}}\), as expressed by Equation (1). To measure the detection vessel background, it was filled with 1 bara of purified helium. Several such blank measurements were conducted. After approximately four weeks, average equilibrium detection vessel background rates of 2.4 counts per day (cpd; 28 \(\upmu\)Bq) and 1.5 cpd (17 \(\upmu\)Bq) were measured in the regions of interest of the \({}^{218}\)Po and \({}^{214}\)Po channels, respectively. These rates comprise the sum of the intrinsic \({}^{222}\)Rn emanation from the detection vessel and additional components that potentially result from the leakage of other decays, such as those from the \({}^{220}\)Rn chain, as well as detector-specific artifacts. Due to the a priori unknown time dependence of all these effects, the detection vessel background is determined in a data-driven manner for each sample measurement. We estimate the number of events \(\bar{n}^{\text{DV}}\) that is attributed not to a sample but to the detection vessel by averaging the number of background events that occurred during the equivalent measurement duration in the individual background measurements.

The radon emanation background rate of the emanation vessel \(\mathcal{R}^{\text{EV}}\) is determined in a regular measurement without a sample, following the procedure outlined in Section 3 and accounting for the detection vessel background determined above. Two such background measurements were conducted; in both, the two polonium channels are in excellent agreement. Taking into account the detection efficiency \(\varepsilon^{\rm det}\) from Equation (3) yields a value of

\[\mathcal{R}^{\rm EV}=(0.16\pm 0.03)\,\,\,{\rm mBq}. \tag{2}\]

With this measured EV background activity \(\mathcal{R}^{\rm EV}\), the expected number of EV background events \(\bar{n}^{\rm EV}\) can be calculated via the analytical polonium activity model. Any activity from \({}^{220}\)Rn and its daughters is expected to have already decayed during the transfer process and can thus be neglected. The background modeling approach allows us to account for the unavoidable variations of the durations of the radon emanation and transfer phases, caused by, e.g., different emanation vessels and sample sizes.

### 4.2 Detection Efficiency

The overall detection efficiency \(\varepsilon^{\rm det}\) is obtained by comparing the experimentally measured emanation rate with the known reference value of a calibrated sample. The sample was provided by G. Zuzel (Jagiellonian University, Krakow, Poland). It consists of two 2 mm thick stainless steel discs with a diameter of 20 mm, onto which \({}^{226}\)Ra ions were electrodeposited. The reference measurements of the same source were carried out by H. Simgen at the Max-Planck-Institut für Kernphysik (MPIK) in Heidelberg, Germany, utilizing miniaturized proportional counters [27]. An activity of \((47.6\pm 1.5)\) mBq was measured for both polonium channels. The source was stored in a CF40 vacuum vessel, which can be closed off by two VCR bellow valves. For the calibration measurements, it was connected to the gas system, replacing the emanation vessel. The calibration campaign consisted of four individual measurements of the radon emanation rate from the calibrated sample, following the routine presented in Section 3. The left panel of Figure 2 shows the individual results relative to the reference value. All four measurements are in mutual agreement, and both polonium channels are consistent with each other (see the values given in Figure 2, left). The detection efficiency is thus

\[\varepsilon^{\rm det}=(36.3\pm 0.2(\rm stat.)\pm 1.4(\rm syst.))\,\,\%. \tag{3}\]

The statistical uncertainty resembles the propagated Poissonian errors; the systematic uncertainty comes from the \(\sim 3\,\%\) uncertainty of the reference measurement. The uncertainties have been adjusted for overdispersion due to potentially unaccounted variations in the manual transfer procedure by increasing the statistical uncertainty such that the fit of a constant results in a reduced chi-square of unity. Note that \(\varepsilon^{\rm det}\) already includes efficiency losses of at least 50 % due to the finite solid angle coverage of the active photodiode for alpha particles emitted on its surface.
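A minimal sketch of how a single calibration run yields such an efficiency value is given below; the counts used here are illustrative assumptions, not the actual calibration data:

```python
# A minimal sketch (with illustrative counts, not the actual calibration data)
# of how a single calibration run yields the detection efficiency: the observed
# polonium counts are divided by the decays expected from the 47.6 mBq reference.
import numpy as np

def detection_efficiency(n_obs, n_expected_decays, ref_rel_err=0.03):
    """eps with Poisson statistical error and reference systematic error."""
    eps = n_obs / n_expected_decays
    stat = eps / np.sqrt(n_obs)          # propagated counting uncertainty
    syst = eps * ref_rel_err             # ~3% reference-source uncertainty
    return eps, stat, syst

eps, stat, syst = detection_efficiency(12000, 33000)   # assumed numbers
print(f"eps = {100*eps:.1f} +/- {100*stat:.1f} (stat.) +/- {100*syst:.1f} (syst.) %")
```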
Simulation studies of the electric collection field suggest an electrostatic collection efficiency close to 100 % for the realized hemispherical detection vessel geometry and a static collection field generated by a high voltage of \(-1.0\) kV [28].

### 4.3 Sensitivity

We evaluate the detector sensitivity in terms of a single-bin Poisson counting experiment, following the analysis scheme outlined in Section 3: the integer number of measured events \(n^{\rm meas}\) is Poisson-distributed with the expected value \(\bar{n}^{\rm sample}+\bar{n}^{\rm DV}+\bar{n}^{\rm EV}\). Whether or not \(n^{\rm meas}\) significantly exceeds the expected background \(\bar{n}^{\rm DV}+\bar{n}^{\rm EV}\), given a significance level \(\alpha=0.1\), is determined by computing the corresponding \(p\)-value assuming the background-only hypothesis. If \(p<\alpha\), the background-only hypothesis is considered rejected by the data, and one can determine the sample's radon emanation rate from the excess number of signal events. If no signal above the background is observed (\(p\geq\alpha\)), we quote the _observed upper limit_ \({}^{*}n^{\rm sample}\) on the number of signal events as the largest value of \(\bar{n}^{\rm sample}\) that still yields \(\leq n^{\rm meas}\) detected events with probability \(\alpha\):

\[p=\sum_{n=0}^{n^{\rm meas}}\mbox{Poisson}(n\,;\,\bar{n}^{\rm DV}+\bar{n}^{\rm EV}+{}^{*}n^{\rm sample})\stackrel{{!}}{{=}}\alpha. \tag{4}\]

The observed upper limit is thus

\[{}^{*}n^{\rm sample}=\frac{1}{2}\,F_{\chi^{2}}^{-1}\,[1-\alpha;2(n^{\rm meas}+1)]-(\bar{n}^{\rm DV}+\bar{n}^{\rm EV}). \tag{5}\]

In Equation (5), the sum of Poissonian probabilities is identified with the cumulative chi-squared distribution \(F_{\chi^{2}}\), which allows computing \({}^{*}n^{\rm sample}\) in analytical form.
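A minimal sketch of this analytical upper limit, using the chi-squared quantile available in scipy, is:

```python
# A minimal sketch of the analytical observed upper limit of Equation (5),
# using the chi-squared quantile (the inverse of F_chi2).
from scipy.stats import chi2

def observed_upper_limit(n_meas, n_bkg_expected, cl=0.9):
    """Upper limit on the number of signal events at confidence level cl."""
    return 0.5 * chi2.ppf(cl, 2 * (n_meas + 1)) - n_bkg_expected

# e.g. 5 events observed over an expected background of 3.2 events
print(f"*n_sample < {observed_upper_limit(5, 3.2):.2f} at 90% C.L.")
```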
Monte Carlo data representing the distribution of the number of observed events under the assumption of the background-only hypothesis is generated according to Equation (10). The right panel of Figure 2 shows the median expected upper limit on the number of signal events from the sample \({}^{**}\bar{n}^{\rm sample}\) for the \({}^{214}\)Po line and two different background contributions vs. the measurement time. Via the activity model (13) and assuming infinite emanation time (which is approximately the case after an emanation period of four weeks) and infinitesimal transfer time, one can translate the expected upper limit of signal events into the corresponding upper limit on the radon emanation rate \({}^{**}\mathcal{R}^{\rm sample}\). The fluctuations in Figure 2 (right) are due to the quantized number of expected detector vessel background events \(\bar{n}^{\rm DV}(\Delta t_{\rm meas})\). Initially, the \({}^{**}\mathcal{R}^{\rm sample}\) sensitivity curves steeply decrease until they reach a local minimum. For longer measurement times, the mean and width of the distributions show a steady increase. This (at first glance counter-intuitive) time-dependence is caused by the asymptotically falling ratio of events from the signal (emanated by the sample and transferred once into the DV) and background events (constantly emitted from the DV walls): while the signal events will initially dominate over the slowly and linearly increasing number of DV background events, they decrease exponentially and eventually fall below the number of accumulated DV background events. For the standard measurement procedure, i.e., a sample placed in the EV and taking into account the detection efficiency of Equation (14), and the Poissonian box-counting analysis presented in Section 3, we hence quote the sensitivity of the MonXe radon emanation detector as the minimum of the curve taking into account the background from both vessels (EV+DV): \[{}^{**}\mathcal{R}^{\rm sample}=59\,\mu\text{Bq}. \tag{11}\] As a consequence of the time behavior shown in Figure 2, a measurement is terminated, and an upper limit on the emanation rate of the sample is placed, if no signal is detected after twelve days. For comparison, the miniaturized proportional counters at the Max-Planck-Institut fur Kernphysik (MPIK) in Heidelberg, Germany, which are among the most sensitive radon emanation instruments to date, achieve sensitivities of \(\sim\)40 \(\mu\)Bq [27], with some counters exhibiting an intrinsic emanation activity up to four times lower than the DV background rate of our detector. The background emanation activity of the MPIK 80 l emanation vessel is \((0.16\pm 0.05)\) mBq and thus comparable to the value of the MonXe emanation vessel given in Equation (13) at a four times larger volume. The R.E.S. facility at the South Dakota School of Mines and Technology uses a detector concept similar to MonXe and features two large emanation chambers of 13 l or 300 l. Its sensitivity of 0.2 mBq [29] is comparable to that of our instrument, while our vessel backgrounds and detection efficiency are slightly superior [22].

## 5 Sample Screening Measurements

In this section, we demonstrate the performance of the MonXe radon emanation detector based on two samples with very different emanation rates.

### High-activity Sample: Zeolite Granulate

Zeolites are a class of microporous minerals typically used as adsorbents, e.g., in backing pump adsorption traps to prevent the backstreaming of oil vapor.
The examined sample consists of 730 g of commercial zeolite adsorbent pellets (Pfeiffer Vacuum Technology AG Zeolith PK 001 248-T). Figure 3 (left) shows the alpha energy spectrum acquired over a measurement period of \(\Delta t_{\rm meas}=11.59\) d and after an emanation time of 7.73 d. One can clearly distinguish three peaks corresponding to the energies of the alpha particles emitted by \({}^{210}\)Po, \({}^{218}\)Po, and \({}^{214}\)Po, at 5.3 MeV, 6.0 MeV, and 7.7 MeV, respectively. The long half-life of \({}^{210}\)Pb (22.3 y) strongly suppresses the \({}^{210}\)Po peak. The peak shapes of \({}^{218}\)Po and \({}^{214}\)Po are well described by a Crystal Ball function. Their low-energy tails are attributed to angle-of-incidence-dependent energy losses when the alpha particles traverse the insensitive p-layer of the PIN photodiode. From the Crystal Ball fits, one can extract the energy resolution of 1.5 %, determined as the full width at half maximum of the \({}^{214}\)Po peak. The energy scale is defined by identifying the peak's mean values from the fit with the respective alpha energies. The \({}^{214}\)Po and \({}^{218}\)Po events are selected from a predefined energy interval around the peak means. These were chosen to include \(>\)99 % of the Crystal Ball integrals while excluding the stray \({}^{220}\)Rn events mentioned in Section 4.1.

Figure 3: Radon emanation measurement of 730 g of commercial zeolite adsorbent pellets for 11.59 days, following an emanation phase of 7.73 days. **(Left)** The measured alpha spectrum shows the two expected peaks from \({}^{218}\)Po (blue) and \({}^{214}\)Po (red); the \({}^{210}\)Po (green) peak is strongly suppressed because of its much longer half-life. The peaks can be well described by Crystal Ball functions. The tail towards lower energies is caused by angle-of-incidence-dependent energy losses in the detector. The colored areas define the energy windows for the event selection. The observed \({}^{222}\)Rn activity, i.e., without correcting for the finite detection efficiency, at the start of the measurement at \(t_{\rm meas}^{0}\) can be computed analytically using the number of events observed in the peaks and the modeled polonium activity (box-counting (BC) analysis). **(Right)** Detected events in the energy regions of interest vs. time. The activity model is fitted to the data of both isotopes (solid lines). The model describes the data very well and can also be used to obtain the \({}^{222}\)Rn activity at the start of the measurement (activity model fit (AMF) analysis); both analysis methods yield identical results.

Following the box-counting (BC) analysis presented in Section 3, which takes into account the background contributions of the EV and DV, the specific radon emanation activity of the zeolite sample is \[\mathcal{R}^{\rm zeolite}=(562\pm 5(\rm stat.)\pm 22(\rm syst.))\ \rm mBq/kg. \tag{10}\] The box-counting analysis essentially ignores the knowledge of the individual event trigger timestamps. However, for high-activity samples, this timing information can be utilized to validate the underlying model assumptions. Instead of just counting all events recorded during the entire measurement \(\Delta t_{\rm meas}\), one can subdivide \(\Delta t_{\rm meas}\) into equal intervals and count the detected polonium events in each bin, as shown in Figure 3 (right) for 3 h intervals. For each time interval, Equation (11) applies, and the underlying model parameters can now be determined from a fit of the activity model to the data.
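To sketch how such an activity-model fit can be set up, consider the following simplified Python version of ours (not the published analysis code): after the short polonium in-growth transient, the detected polonium rate simply tracks the decaying \({}^{222}\)Rn activity on top of the constant detection-vessel background; the full analytical model of Section 3 also describes the transient.

```python
import numpy as np
from scipy.optimize import curve_fit

LAMBDA_RN = np.log(2) / (3.82 * 24 * 3600)   # 222Rn decay constant in 1/s

def rate_model(t, a0, r_dv):
    """Detected polonium rate in 1/s: decaying radon activity from the
    sample plus the constant detection-vessel background rate
    (simplification: the short in-growth transient is neglected)."""
    return a0 * np.exp(-LAMBDA_RN * t) + r_dv

def fit_activity(t_bins, counts, bin_width):
    """Fit the simplified activity model to binned polonium events.
    t_bins: bin centres in seconds since t_meas^0, counts: events per bin,
    bin_width: bin length in seconds (e.g. 3 h as in Figure 3, right)."""
    rates = counts / bin_width
    rate_err = np.sqrt(np.maximum(counts, 1)) / bin_width   # Poisson errors
    popt, pcov = curve_fit(rate_model, t_bins, rates,
                           sigma=rate_err, absolute_sigma=True,
                           p0=[rates[0], rates[-1]])
    return popt, np.sqrt(np.diag(pcov))
```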
The reduced \(\chi^{2}\)-values of 0.89 and 1.24 indicate excellent agreement between data and model for \({}^{214}\)Po and \({}^{218}\)Po. Both analysis methods yield identical radon activities at the start of the measurement \(t_{\rm meas}^{0}\): \(A^{\rm zeolite}(t_{\rm meas}^{0})=(112.1\pm 0.4)\) mBq (box-counting) and \(A^{\rm zeolite}(t_{\rm meas}^{0})=(112.2\pm 0.5)\) mBq (fit of model). For the box-counting result, the quoted uncertainty corresponds to the propagated Poisson uncertainty, whereas for the model fit analysis, the statistical and systematic uncertainties arising from the fit were combined in quadrature. The excellent agreement of data and model validates the detector model assumption, e.g., that the initially zero polonium population at \(t_{\rm meas}^{0}\) only emerges from the decaying radon sample and the DV background. The bulk activity of the zeolite granulate was additionally measured with the high-purity germanium gamma-spectrometer GeMSE [30]. The measured specific \({}^{226}\)Ra activity of \(A_{{}^{226}\rm Ra}=11.4^{+1.0}_{-0.7}\) Bq/kg reveals that (at normal temperature and pressure and assuming secular equilibrium) only \(\sim\)5 % (\(\approx 0.562/11.4\)) of the \({}^{222}\)Rn atoms produced in \({}^{226}\)Ra decays are emanated from the surface.

### Low-activity Sample: Semi-Finished PTFE

Polytetrafluoroethylene (PTFE, Teflon®) is a widely used construction material in most low-background experiments due to its unique electrical insulation and optical properties. PTFE semi-finished products are compression molded and sintered from granulated PTFE resin. Here we present measurements of the radon emanation of two semi-finished PTFE samples: a sample manufactured by ElringKlinger AG and a sample manufactured by Fluorseals S.p.A.. Each sample consisted of three cuboid blocks (\(320\times 160\times 60\,\rm mm^{3}\), \(\sim\)6 kg each). In preparation for the measurements, a few microns were milled off from all surfaces of the blocks. Since PTFE is expected to emanate only trace amounts of radon (see, e.g., [21]), a series of deep and narrow grooves was saw-milled into the blocks to increase the total surface area from 0.48 m\({}^{2}\) to 2.47 m\({}^{2}\) per sample. The blocks were cleaned in a bath of 6 mol/l nitric acid, then immersed in deionized water and ethanol, and finally blow-dried with pressurized helium. To reduce outgassing, the emanation vessel housing the cleaned PTFE samples was evacuated with the turbomolecular pump for two weeks before the start of the actual emanation process. For both sample measurements, the \({}^{214}\)Po and \({}^{218}\)Po channels led to compatible results; the following radon emanation activities were measured: \[\mathcal{R}^{\rm ElringKlinger}=61^{+18}_{-19}\ \frac{\rm\mu Bq}{\rm m^{2}}\quad\rm and\quad\mathcal{R}^{\rm Fluorseals}=34^{+14}_{-15}\ \frac{\rm\mu Bq}{\rm m^{2}}. \tag{11}\] The systematic uncertainties from the calibration are a factor of ten smaller than the statistical uncertainties and are omitted in the following. A second measurement of the Fluorseals sample yielded upper limits of 43 \(\upmu\)Bq/m\({}^{2}\) and 51 \(\upmu\)Bq/m\({}^{2}\) for the \({}^{214}\)Po and \({}^{218}\)Po channels, respectively, in agreement with the first measurement. The Fluorseals detection corresponds to a total sample emanation activity of 84\({}^{+35}_{-37}\)\(\upmu\)Bq and thus lies only slightly above the theoretical optimum sensitivity of the detector of 59 \(\upmu\)Bq, given in Equation (10).
Statistical fluctuations can easily move the measured activity above or below the detector's significance limit. For reference, the PTFE reflectors used in the XENON1T dark matter experiment exhibit an emanation activity of \((24\pm 5)\)\(\upmu\)Bq/m\({}^{2}\)[21], which is comparable to the value inferred for the Fluorseals sample. These measurements were conducted with the miniaturized proportional counter infrastructure at the Max-Planck-Institut fur Kernphysik in Heidelberg; a sample with a total mass of 32 kg and a surface area of 4 m\({}^{2}\) was examined. The XENON1T reflectors were treated with a new diamond milling head to achieve a smooth surface to optimize the material's light reflectivity [31]. Since radon emanation strongly depends on the surface properties of a sample, it is possible that this particular treatment led to an improved micro-porosity of the surface compared to the one of our saw-milled grooves. With an emanation activity of \((12\,^{+5}_{-10})\)\(\upmu\)Bq/m\({}^{2}\), the PTFE used in the LZ dark matter experiment is also cleaner. A very large sample of it was measured with the R.E.S. facility mentioned above [22].

## 6 Conclusion

The background of many rare-event search experiments is affected by the radioactive decays of \({}^{222}\)Rn and its daughters. As radon emanates from any detector construction material, quantifying the emanation rate of potential materials is crucial for optimizing the background of the next generation of low-background experiments. In this work, we present the design and performance of the MonXe radon emanation detector, designed to contribute to the radiopurity assay programs of the future DARWIN [6] and XLZD [7] astroparticle physics observatories. MonXe's detection concept is based on the spectrometric measurement of alpha decays of polonium atoms, which have been electrostatically collected on the surface of a silicon PIN photodiode. By utilizing cryogenic physisorption traps, the radon atoms emanating from a sample are transferred into a separate detection vessel. The alpha decays of the \({}^{222}\)Rn daughters \({}^{218}\)Po and \({}^{214}\)Po are measured with an energy resolution of 1.5 % and a detection efficiency of about 36 % per isotope. The sensitivity of the instrument, taking into account the measured backgrounds in the emanation and detection vessels, was determined as 59 \(\upmu\)Bq (at 90 % C.L.). The performance of the MonXe radon emanation detector was demonstrated by determining the radon emanation of high-activity commercial zeolite granulate and two samples of semi-finished PTFE, with emanation rates close to the instrument's sensitivity. At the time of writing, the MonXe detector is operated manually. However, it is foreseen to automate the radon sample transfer from the emanation into the detection vessel by operating new pneumatic valves and a mass flow controller via the slow control system. A second independent emanation vessel is currently under construction and will decrease turnover times between measurements. The second vessel will be electropolished to possibly improve its intrinsic emanation, which currently limits the instrument's sensitivity.

###### Acknowledgments.

This work was supported by the European Research Council (ERC) grant No. 724320 (ULTIMATE). We gratefully acknowledge H. Simgen and J. Westermann (Max-Planck-Institut fur Kernphysik, Heidelberg, Germany) for performing the reference calibration measurements and G.
Zuzel (Jagiellonian University, Krakow, Poland) for providing the calibration source. We also thank the teams of the mechanical and electronics workshops of the Institute of Physics, Freiburg, and in particular R. Mori for the development of the preamplifier. Finally, we thank all the Bachelor students and interns who contributed to commissioning the detector: J. Alt, W. Boemke, L. God, R. Kirsch, and V. Lieb.
2309.15419
Hypergraph $p$-Laplacians and Scale Spaces
This paper introduces gradient, adjoint, and $p$-Laplacian definitions for oriented hypergraphs as well as differential and averaging operators for unoriented hypergraphs. These definitions are used to define gradient flows in the form of diffusion equations with applications in modelling group dynamics and information flow in social networks as well as performing local and non-local image processing.
Ariane Fazeny, Daniel Tenbrinck, Kseniia Lukin, Martin Burger
2023-09-27T05:50:32Z
http://arxiv.org/abs/2309.15419v2
# Hypergraph p-Laplacians and Scale Spaces

###### Abstract

The aim of this paper is to revisit the definition of differential operators on hypergraphs, which are a natural extension of graphs in systems based on interactions beyond pairs. In particular, we focus on the definition of Laplacian and \(p\)-Laplace operators for oriented and unoriented hypergraphs, their basic properties, variational structure, and their scale spaces. We illustrate that diffusion equations on hypergraphs are possible models for different applications such as information flow on social networks or image processing. Moreover, the spectral analysis and scale spaces induced by these operators provide a potential method to further analyze complex data and their multiscale structure. The quest for spectral analysis and suitable scale spaces on hypergraphs motivates in particular a definition of differential operators with trivial first eigenfunction and thus more interpretable second eigenfunctions. This property is not automatically satisfied in existing definitions of hypergraph \(p\)-Laplacians and we hence provide a novel axiomatic approach that extends previous definitions and can be specialized to satisfy such (or other) desired properties.

Keywords: Hypergraphs · PDEs on (hyper)graphs · Diffusion models · Information flow · Hypergraph spectral clustering · Image Processing · Denoising · Segmentation.

## 1 Introduction

Methods for image processing, data analysis and simulation of information propagation have strongly benefited from using graph structures in the past, and modeling with PDEs on graphs, including graph \(p\)-Laplacians and the associated flows, became a standard tool for analyzing graph structures and dynamics on them (cf. [6; 16; 17]). These ideas are carried on in machine learning in the concept of graph neural networks, which are again closely related to models for information flow on networks (cf. [5]). Traditional graphs can however capture only _pairwise interactions_ of individuals, objects, or pixels in images and thus are unable to directly model group relationships, which are relevant e.g. in social networks or image patches. In order to mitigate this problem we propose to apply a more general structure, namely a **hypergraph**, with which it is straightforward to encode group interactions. Here, we adopt the definition of oriented hypergraphs, whose hyperarcs (generalizing edges) can have more than one ingoing and more than one outgoing vertex. For this structure there is a natural way to define gradients, and we use a scaling which preserves the axiom that the gradient of a constant function on the vertices vanishes. Via a definition of an adjoint we can then obtain a divergence operator and a Laplacian. Additionally, we investigate the case of unoriented hypergraphs, in which so-called hyperedges do not have an orientation, i.e., no distinction between outgoing and ingoing vertices. In contrast to traditional edges in graphs, here the number of vertices per hyperedge is not limited to two. For this type of hypergraph, we introduce two possible Laplacian operators, one which is gradient-based and one via an averaging operator.

### Motivation

The hypergraph structure gives additional flexibility in several applications compared to the pair-based graph structure. An example is the modeling of social phenomena of (fake) news spread, e.g., by connecting one person to all their followers directly and hence representing a community within a social network.
One field of study in analyzing information flow in social networks is _opinion formation_, an interesting phenomenon that can be observed for a group of individuals who interact and have complex relationships with each other. Some individuals of the social network, so-called _opinion leaders_ or _social media influencers_, with a large group of followers (up to half a billion people) have a strong influence on the opinion of many others and can even make a profit by leveraging their impact on large groups of social media users (see, e.g., [18]). Modeling information flow in social networks mathematically is typically performed by using traditional graphs. With such graphs it is possible to link two social media users with a pairwise connection, if they are online friends or follow each other (see, e.g., [13]). The information flow in the social network can then be modeled in terms of diffusion processes on the graph, e.g., by solving a partial differential equation (PDE) (see, e.g., [1], [4]). However, recent work suggests that interactions beyond pairs are of particular relevance (cf. [20]). Structures reminiscent of a Laplacian on hypergraphs can be found in the model of [15]. A similar question arises in the analysis of community structures, where graph spectral clustering is a standard technique. In order to understand networks including group connections, a more general structure such as hypergraphs seems to be more appropriate. The success of PDE-based methods on graphs motivates a further study on hypergraphs in order to explore the potential of PDEs on such objects. For this sake we need appropriate definitions of hypergraph gradients and Laplacians, which we revisit in this paper. Moreover, the study of scales on hypergraphs is a relevant topic, which can naturally be defined via the evolution of the diffusion-type processes we study here. In image processing the graph structure is potentially limiting, since it is confined to comparisons between pairs of pixels and their gray (respectively colour) values. It may however be relevant to compare one pixel with its surrounding pixels without the restriction to all pairs. Another example is nonlocal image processing based on patches consisting of multiple pixels. Hypergraph p-Laplacians and their associated scale spaces are a promising approach for such tasks.

### Related work

There already exists extensive literature about traditional graph theory and its application to social networks. In [13], an overview of social network modeling with traditional graphs is given, including community clustering, similarity analysis and community-based event detection. It indicates how the versatile structure of a graph can be applied to real world problems. [1] introduces the so-called ego network, a graph focusing on one specific social media user in the center and their surrounding concentric layers of followers, sorted hierarchically depending on their contact frequency. The paper [6] introduces first-order differential operators and a family of \(p\)-Laplacian operators for traditional oriented graphs. The proposed partial difference, adjoint, divergence and anisotropic \(p\)-Laplacian for traditional graphs are a special case of our vertex gradient, adjoint, divergence and \(p\)-Laplacian operators for hypergraphs, which are introduced in Section 3. The theoretical results of [6] are applied to mathematical image analysis, such as filtering, segmentation, clustering, and inpainting, but not to social network modeling.
[10] generalizes the already known \(p\)-Laplacian operators for normal graphs to the hypergraph setting and performs spectral analysis with a specific focus on the 1-Laplacian. The spectral properties are then applied to common (hyper)graph problems, for instance vertex partitioning, cuts in graphs, coloring of vertices and hyperarc partitioning, but the paper does not include any numerical experiments or the modeling of social networks with hypergraphs. In comparison, our gradient, adjoint, and \(p\)-Laplacian definitions are more general and also have the property that constant vertex functions lie in the null space of the gradient. Additionally, they are more flexible with respect to their adaptability for application tasks. [15] uses unoriented hypergraphs to model different sociological phenomena of cliques, such as peer pressure, with consensus models. Diffusion processes in multi-way interactions with convergence to one united group consensus are modeled with a simple 2-Laplacian inspired by the traditional graph setting. Due to the lack of orientation in the hypergraphs, the described consensus models are not able to capture the effects of a one-sided connection through following someone (e.g., Twitter, Instagram), but only mutual connection through being friends (e.g., Facebook). Furthermore, [22] uses unoriented hypergraphs in machine learning and shows how hypergraph modelling of data relationships can outperform normal graphs in spectral clustering tasks. Similarly, [12] compares two different algorithms for submodular hypergraph clustering on unoriented hypergraphs with positive vertex weights and a normalized positive hyperedge weight function, namely the Inverse Power Method (IPM) and the clique expansion method (CEM).

### Main contributions

The contributions of this paper are manifold. First, we recall the generalized vertex \(p\)-Laplacian operators for oriented hypergraphs, which were introduced in the preceding papers [8] and [7]. They generalize the definitions in [10] by including two different vertex weight functions and hyperarc weight functions respectively. With an appropriate choice of these weights, the vertex gradient definition leading to the vertex \(p\)-Laplacian fulfills the expected properties of the continuum setting (antisymmetry and the gradient of a constant function being equal to zero), based on less strict assumptions compared to the implicit gradient of [10]. In order to obtain a meaningful definition of a \(p\)-Laplace operator on unoriented hypergraphs as well, we introduce a gradient operator with respect to a single vertex, which follows the idea of the respective operators in the oriented hypergraph case. As an alternative, we also consider an approach via an averaging operator on the unoriented hypergraph, which is however confined to the linear case (\(p=2\)) as of now. The two different Laplacian operators are subsequently compared in our numerical experiments. Moreover, we include two possible applications of the corresponding diffusion equations: for the oriented setting we investigate the information flow on networks based on the hypergraph Laplacian and for the unoriented setting we discuss an application to image processing and derive novel scale spaces based on pixel neighbourhood comparison, for which our definitions are naturally suited.
## 2 Mathematical basics of hypergraphs

The definition of hypergraphs is a generalization of finite graphs, both in the case of unoriented and oriented hypergraphs, which are based on unoriented and oriented normal graphs respectively. Given a finite set of vertices \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\), a hypergraph captures not only pairwise connections between two vertices, but also higher-order relationships within any subset of all vertices. Remark 1: As proposed in [14], we differentiate between oriented and unoriented hypergraphs instead of directed and undirected hypergraphs, because for every oriented hyperarc there is only one orientation but two possible directions: the direction along the orientation and the direction against the orientation. Definition 1 (Unoriented hypergraph \(UH\)): [21] An unoriented hypergraph \(UH=(\mathcal{V},\mathcal{E}_{H})\) consists of a finite set of vertices \(\mathcal{V}\), and a set of so-called hyperedges \(\mathcal{E}_{H}\), with each hyperedge \(e_{q}\in\mathcal{E}_{H}\) being an element of the power set of the vertices \(2^{\mathcal{V}}\) and satisfying \(\emptyset\subset e_{q}\subset\mathcal{V}\) with \(2\leq|e_{q}|\leq|\mathcal{V}|-1\). Example 1 (**Unoriented hypergraph \(UH\)**): _Given a set of vertices_ \[\mathcal{V}=\left\{v_{1},v_{2},v_{3},v_{4},v_{5},v_{6},v_{7},v_{8}\right\}\] _and a set of hyperedges_ \[\mathcal{E}_{H}=\left\{\left\{v_{1},v_{2},v_{5}\right\},\left\{v_{2},v_{3},v_{7},v_{8}\right\},\left\{v_{6},v_{7}\right\}\right\},\] _then the unoriented hypergraph \(UH=(\mathcal{V},\mathcal{E}_{H})\) can be visualized in the following way:_ Remark 2: For clarity reasons we assume that each hyperedge in \(\mathcal{E}_{H}\) is unique and hence occurs only once. This implies that the cardinality of the hyperedge set is finite due to the set of vertices \(\mathcal{V}\) being finite and the number of hyperedges in \(UH=(\mathcal{V},\mathcal{E}_{H})\) being limited by \(|\mathcal{E}_{H}|\leq N^{N}\). Assigning either an output or an input orientation to each vertex of a hyperedge results in an oriented version of hyperedges, so-called hyperarcs. Based on this, oriented hypergraphs can be defined. Definition 2 (Oriented hypergraph \(OH\)): [10] An oriented hypergraph \(OH=(\mathcal{V},\mathcal{A}_{H})\) consists of a finite set of vertices \(\mathcal{V}\), and a set of so-called hyperarcs \(\mathcal{A}_{H}\). Each hyperarc \(a_{q}\in\mathcal{A}_{H}\) contains two disjoint subsets of vertices \[a_{q}=\left(a_{q}^{out},a_{q}^{in}\right) \tag{1}\] with \(\emptyset\subset a_{q}^{out},a_{q}^{in}\subset\mathcal{V}\), \(a_{q}^{out}\cap a_{q}^{in}=\emptyset\), \(a_{q}^{out}\) being the set of all output vertices and \(a_{q}^{in}\) being the set of all input vertices of the hyperarc \(a_{q}\).
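For concreteness, the hypergraphs of Examples 1 and 2 admit a direct computational encoding. The following Python sketch is ours, not part of the paper; representing hyperedges as frozensets and hyperarcs as pairs of disjoint frozensets is one convenient choice among many:

```python
# Vertices of Examples 1 and 2, indexed 1..8.
V = set(range(1, 9))

# Unoriented hypergraph of Example 1: hyperedges as frozensets of vertices.
E_H = [frozenset({1, 2, 5}), frozenset({2, 3, 7, 8}), frozenset({6, 7})]

# Oriented hypergraph of Example 2: hyperarcs as (output set, input set)
# pairs of disjoint vertex sets, a_q = (a_q_out, a_q_in).
A_H = [(frozenset({1, 2}), frozenset({5})),
       (frozenset({3, 7}), frozenset({2, 8})),
       (frozenset({6}), frozenset({7}))]

# Sanity checks reflecting Definitions 1 and 2.
assert all(2 <= len(e) <= len(V) - 1 for e in E_H)
assert all(a_out and a_in and a_out.isdisjoint(a_in) for a_out, a_in in A_H)
```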
Example 2 (**Oriented hypergraph \(OH\)**): Given a set of vertices \[\mathcal{V}=\left\{v_{1},v_{2},v_{3},v_{4},v_{5},v_{6},v_{7},v_{8}\right\}\] and a set of hyperarcs \[\mathcal{A}_{H}=\left\{\left(\left\{v_{1},v_{2}\right\},\left\{v_{5}\right\}\right),\left(\left\{v_{3},v_{7}\right\},\left\{v_{2},v_{8}\right\}\right),\left(\left\{v_{6}\right\},\left\{v_{7}\right\}\right)\right\},\] then the oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) can be visualized in the following way: Alternatively, hyperarcs can also be visualized similarly to arcs in normal graphs: In our numerical experiments we will use the second visualization option (without color-coding the different hyperarcs) in order to simplify understanding of the links between vertices. Since the underlying oriented hypergraph will have a specific property (\(\left|a_{q}^{out}\right|=1\)), the examples will have a one-to-one correspondence between the "normal graph visualization" and the hypergraph visualization, which would generally not be given without color-coding each hyperarc. Remark 3: Furthermore, for clarity reasons we assume that each hyperarc in the set of hyperarcs \(\mathcal{A}_{H}\) is unique and thus occurs only once. This automatically implies that the cardinality of the hyperarc set is finite due to the set of vertices \(\mathcal{V}\) being finite. More precisely, the number of hyperarcs in \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) is limited by \(\left|\mathcal{A}_{H}\right|\leq N^{N}\). We now define different functions on both unoriented and oriented hypergraphs, which are used in Section 3 to introduce differential operators inspired by the continuum setting. In order to efficiently denote whether a vertex is part of a hyperedge for an unoriented hypergraph and to check if a vertex is part of a hyperarc as an output or an input vertex for an oriented hypergraph, we use different kinds of vertex-hyperedge and vertex-hyperarc characteristic functions. Definition 3 (Vertex-hyperedge characteristic function \(\delta\)): For an unoriented hypergraph \(UH=(\mathcal{V},\mathcal{E}_{H})\), we define the vertex-hyperedge characteristic function \(\delta\) as: \[\delta:\ \mathcal{V}\times\mathcal{E}_{H}\longrightarrow\{0,1\}\qquad(v_{i},e_{q})\longmapsto\delta\left(v_{i},e_{q}\right)=\left\{\begin{array}{ll}1&v_{i}\in e_{q}\\ 0&\text{otherwise}\end{array}\right.. \tag{2}\] Definition 4 (Vertex-hyperarc characteristic functions \(\delta_{out}\), \(\delta_{in}\)): For an oriented hypergraph \(OH=(\mathcal{V},\mathcal{A}_{H})\), we define the output vertex-hyperarc characteristic function \(\delta_{out}\) as: \[\delta_{out}:\ \mathcal{V}\times\mathcal{A}_{H}\longrightarrow\{0,1\}\qquad(v_{i},a_{q})\longmapsto\delta_{out}\left(v_{i},a_{q}\right)=\left\{\begin{array}{ll}1&v_{i}\in a_{q}^{out}\\ 0&\text{otherwise}\end{array}\right.. \tag{3}\] Respectively, the input vertex-hyperarc characteristic function \(\delta_{in}\) is given by: \[\delta_{in}:\ \mathcal{V}\times\mathcal{A}_{H}\longrightarrow\{0,1\}\qquad(v_{i},a_{q})\longmapsto\delta_{in}\left(v_{i},a_{q}\right)=\left\{\begin{array}{ll}1&v_{i}\in a_{q}^{in}\\ 0&\text{otherwise}\end{array}\right.
\tag{4}\] Instead of defining separate \(\delta_{out}\) and \(\delta_{in}\) characteristic functions, it would also be possible to define one vertex-hyperarc characteristic function \(\delta_{*}\) as: \[\delta_{*}:\ \mathcal{V}\times\mathcal{A}_{H}\longrightarrow\{-1,0,1\}\qquad(v_{i},a_{q})\longmapsto\delta_{*}\left(v_{i},a_{q}\right)=\left\{\begin{array}{ll}-1&v_{i}\in a_{q}^{in}\\ 1&v_{i}\in a_{q}^{out}\\ 0&\text{otherwise}\end{array}\right.. \tag{5}\] However, this would lead to more complex definitions of the vertex gradient, adjoint, and \(p\)-Laplacian operators later on, because it complicates weighing output and input vertices of a hyperarc differently. Real-valued functions can be defined on the set of vertices \(\mathcal{V}\), the set of hyperedges \(\mathcal{E}_{H}\) and the set of hyperarcs \(\mathcal{A}_{H}\) in order to link any kind of data to a hypergraph. Definition 5 (Vertex functions \(f\), and hyperedge or hyperarc functions \(F\)): For both an unoriented hypergraph \(UH=(\mathcal{V},\mathcal{E}_{H})\) and an oriented hypergraph \(OH=(\mathcal{V},\mathcal{A}_{H})\), vertex functions are defined on the set of vertices as \[f:\ \mathcal{V}\longrightarrow\mathbb{R}\qquad v_{i}\longmapsto f\left(v_{i}\right) \tag{6}\] with vertex weight functions being defined as \[w:\ \mathcal{V}\longrightarrow\mathbb{R}_{>0}\qquad v_{i}\longmapsto w\left(v_{i}\right). \tag{7}\] For an unoriented hypergraph \(UH=(\mathcal{V},\mathcal{E}_{H})\), hyperedge functions are defined on the domain of the set of hyperedges as \[F:\ \mathcal{E}_{H}\longrightarrow\mathbb{R}\qquad e_{q}\longmapsto F\left(e_{q}\right) \tag{8}\] _with hyperedge weight functions being defined as_ \[W:\ \mathcal{E}_{H}\longrightarrow\mathbb{R}_{>0}\qquad e_{q}\longmapsto W\left(e_{q}\right). \tag{9}\] _Similarly, for an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\), hyperarc functions are defined on the domain of the set of hyperarcs as_ \[F:\ \mathcal{A}_{H}\longrightarrow\mathbb{R}\qquad a_{q}\longmapsto F\left(a_{q}\right) \tag{10}\] _with hyperarc weight functions being defined as_ \[W:\ \mathcal{A}_{H}\longrightarrow\mathbb{R}_{>0}\qquad a_{q}\longmapsto W\left(a_{q}\right). \tag{11}\] The space of all vertex functions, all hyperedge and all hyperarc functions defined on a given hypergraph can be identified with an \(N\)- or an at most \(N^{N}\)-dimensional Hilbert space, respectively. Definition 6 (Space of vertex functions \(\mathcal{H}\left(\mathcal{V}\right)\), space of hyperedge functions \(\mathcal{H}\left(\mathcal{E}_{H}\right)\), and space of hyperarc functions \(\mathcal{H}\left(\mathcal{A}_{H}\right)\)): For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) and an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\), the space of all vertex functions \(f\) is given by \[\mathcal{H}\left(\mathcal{V}\right)=\left\{f\ |\ f:\ \mathcal{V}\longrightarrow\mathbb{R}\right\} \tag{12}\] where \(\mathcal{H}\left(\mathcal{V}\right)\) with the inner product \(\left\langle f,g\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)}=\sum_{v_{i}\in\mathcal{V}}w_{I}\left(v_{i}\right)^{\alpha}f\left(v_{i}\right)g\left(v_{i}\right)\) for any two vertex functions \(f,g\in\mathcal{H}\left(\mathcal{V}\right)\), vertex weight function \(w_{I}\), and parameter \(\alpha\in\mathbb{R}\) is a Hilbert space.
For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\), the space of all hyperedge functions \(F\) is defined as \[\mathcal{H}\left(\mathcal{E}_{H}\right)=\left\{F\ |\ F:\ \mathcal{E}_{H}\longrightarrow\mathbb{R}\right\} \tag{13}\] where \(\mathcal{H}\left(\mathcal{E}_{H}\right)\) with the inner product \(\left\langle F,G\right\rangle_{\mathcal{H}\left(\mathcal{E}_{H}\right)}=\sum_{e_{q}\in\mathcal{E}_{H}}W_{I}\left(e_{q}\right)^{\beta}F\left(e_{q}\right)G\left(e_{q}\right)\) for any two hyperedge functions \(F,G\in\mathcal{H}\left(\mathcal{E}_{H}\right)\), hyperedge weight function \(W_{I}\), and parameter \(\beta\in\mathbb{R}\) constitutes a Hilbert space. In the same manner, the space of all hyperarc functions \(F\) for an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) is defined as \[\mathcal{H}\left(\mathcal{A}_{H}\right)=\left\{F\ |\ F:\ \mathcal{A}_{H}\longrightarrow\mathbb{R}\right\} \tag{14}\] where \(\mathcal{H}\left(\mathcal{A}_{H}\right)\) with the inner product \(\left\langle F,G\right\rangle_{\mathcal{H}\left(\mathcal{A}_{H}\right)}=\sum_{a_{q}\in\mathcal{A}_{H}}W_{I}\left(a_{q}\right)^{\beta}F\left(a_{q}\right)G\left(a_{q}\right)\) for any two hyperarc functions \(F,G\in\mathcal{H}\left(\mathcal{A}_{H}\right)\), hyperarc weight function \(W_{I}\), and parameter \(\beta\in\mathbb{R}\) defines a Hilbert space.

## 3 Differential operators on hypergraphs

This section introduces first and higher order differential operators both for unoriented and oriented hypergraphs.

### First-order differential operators for oriented hypergraphs

Utilizing the introduced definitions for hypergraphs we can now generalize the definitions of the vertex gradient, the vertex adjoint, and the vertex \(p\)-Laplacian for normal graphs, which have already been discussed in a simplified form with less weight functions and parameters in [6]. Definition 7 (Vertex gradient operator \(\nabla_{v}\)): For an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperarc weight function \(W_{G}\), we define the vertex gradient operator \(\nabla_{v}\) with parameters \(\alpha,\gamma,\epsilon,\eta\in\mathbb{R}\) as: \[\nabla_{v}:\ \mathcal{H}\left(\mathcal{V}\right)\longrightarrow\mathcal{H}\left(\mathcal{A}_{H}\right)\quad f\longmapsto\nabla_{v}f \tag{15}\] \[\nabla_{v}f:\ \mathcal{A}_{H}\longrightarrow\mathbb{R}\quad a_{q}\longmapsto\nabla_{v}f\left(a_{q}\right)=\] \[W_{G}\left(a_{q}\right)^{\gamma}\sum_{v_{i}\in\mathcal{V}}\left(\delta_{in}\left(v_{i},a_{q}\right)\frac{w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}}{\left|a_{q}^{in}\right|}-\delta_{out}\left(v_{i},a_{q}\right)\frac{w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\eta}}{\left|a_{q}^{out}\right|}\right)f\left(v_{i}\right).\] The weight \(w_{I}\) denotes the vertex weight function from the inner product of \(\mathcal{H}\left(\mathcal{V}\right)\) and \(w_{G}\) denotes the vertex weight function, which is introduced with the gradient operator. Using different values for the parameters \(\epsilon\) and \(\eta\) corresponds to putting different weights on the input and output vertices in the gradient of hyperarc \(a_{q}\). The introduced vertex gradient fulfills two expected properties from the continuum setting, namely antisymmetry and the gradient of a constant function being equal to zero.
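For readers who prefer code, Definition 7 translates almost verbatim into Python, continuing the set-based encoding sketched in Section 2. Setting all weight functions to one is our simplification for brevity; the sketch is ours, not the paper's reference implementation:

```python
import numpy as np

def vertex_gradient(f, A_H, w_I=None, w_G=None, W_G=None,
                    alpha=0.0, gamma=0.0, eps=0.0, eta=0.0):
    """Vertex gradient of Definition 7. f maps vertices to reals (a dict);
    the result is one real value per hyperarc. Unspecified weight
    functions default to the constant function 1."""
    w_I = w_I or (lambda v: 1.0)
    w_G = w_G or (lambda v: 1.0)
    W_G = W_G or (lambda a: 1.0)
    grad = []
    for a_out, a_in in A_H:
        s = sum(w_I(v)**alpha * w_G(v)**eps / len(a_in) * f[v] for v in a_in)
        s -= sum(w_I(v)**alpha * w_G(v)**eta / len(a_out) * f[v] for v in a_out)
        grad.append(W_G((a_out, a_in))**gamma * s)
    return np.array(grad)
```

With these defaults, a constant vertex function is mapped to the zero hyperarc function, in line with the first property stated in the following theorem.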
Theorem 3.1 (Vertex gradient operator properties): _The vertex gradient \(\nabla_{v}\) defined on an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperarc weight function \(W_{G}\), satisfies the following properties:_ 1. **Vanishing gradient of a constant vertex function:** _If the condition_ \[w_{I}\left(v_{k}\right)^{\alpha}w_{G}\left(v_{k}\right)^{\epsilon}=w_{I}\left(v_{j}\right)^{\alpha}w_{G}\left(v_{j}\right)^{\eta}\] _holds for all vertex combinations_ \(v_{j},v_{k}\in\mathcal{V}\) _with_ \(v_{j}\in a_{q}^{out}\) _and_ \(v_{k}\in a_{q}^{in}\) _for a hyperarc_ \(a_{q}\in\mathcal{A}_{H}\)_, then for every constant function_ \(f\)_, i.e._ \(f\left(v_{i}\right)\equiv\overline{f}\) _for all vertices_ \(v_{i}\in\mathcal{V}\)_, we have_ \(\nabla_{v}f\left(a_{q}\right)=0\) _for all hyperarcs_ \(a_{q}\in\mathcal{A}_{H}\)_._ 2. **Antisymmetry:** _Let_ \(\epsilon=\eta\)_. Then the identity_ \(\nabla_{v}f\left(a_{q}^{out},a_{q}^{in}\right)=-\nabla_{v}f\left(a_{q}^{in},a_{q}^{out}\right)\) _holds for all hyperarcs_ \(a_{q}\in\mathcal{A}_{H}\)_._ Proof: See [7] Theorem 9.2 (Vertex gradient operator properties). Let us mention one additional complication compared to the traditional graph case: While it is trivial to see that for a connected graph constant functions are the only elements in the nullspace of the gradient, this is not apparent for hypergraphs. By computing the adjoint \(\nabla_{v}^{*}\) of the vertex gradient we can introduce a consistent definition of a divergence operator on hypergraphs in analogy to traditional calculus. A detailed computation based on the relation \[\left\langle G,\nabla_{v}f\right\rangle_{\mathcal{H}\left(\mathcal{A}_{H}\right)}=\left\langle f,\nabla_{v}^{*}G\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)} \tag{16}\] for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\) and all hyperarc functions \(G\in\mathcal{H}\left(\mathcal{A}_{H}\right)\) can be found in [7] Theorem 9.9 (Connection vertex gradient \(\nabla_{v}\) and vertex adjoint \(\nabla_{v}^{*}\)). Definition 8 (Vertex adjoint operator \(\nabla_{v}^{*}\)): For an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) with vertex weight function \(w_{G}\), and hyperarc weight functions \(W_{I}\) and \(W_{G}\), the vertex adjoint operator \(\nabla_{v}^{*}\) with parameters \(\beta,\gamma,\epsilon,\eta\in\mathbb{R}\) is given by: \[\nabla_{v}^{*}\colon\ \mathcal{H}\left(\mathcal{A}_{H}\right)\longrightarrow\mathcal{H}\left(\mathcal{V}\right)\quad F\longmapsto\nabla_{v}^{*}F\] \[\nabla_{v}^{*}F:\ \mathcal{V}\longrightarrow\mathbb{R}\quad v_{i}\longmapsto\nabla_{v}^{*}F\left(v_{i}\right)=\] \[\sum_{a_{q}\in\mathcal{A}_{H}}\left(\delta_{in}\left(v_{i},a_{q}\right)\frac{w_{G}\left(v_{i}\right)^{\epsilon}}{\left|a_{q}^{in}\right|}-\delta_{out}\left(v_{i},a_{q}\right)\frac{w_{G}\left(v_{i}\right)^{\eta}}{\left|a_{q}^{out}\right|}\right)W_{I}\left(a_{q}\right)^{\beta}W_{G}\left(a_{q}\right)^{\gamma}F\left(a_{q}\right).
\tag{17}\] Definition 9 (Vertex divergence operator \(\operatorname{div}_{v}\)): For an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) with vertex weight function \(w_{G}\), and hyperarc weight functions \(W_{I}\) and \(W_{G}\), the vertex divergence operator \(\operatorname{div}_{v}\) with parameters \(\beta,\gamma,\epsilon,\eta\in\mathbb{R}\) is given by: \[\operatorname{div}_{v}\colon\ \mathcal{H}\left(\mathcal{A}_{H}\right)\longrightarrow\mathcal{H}\left(\mathcal{V}\right)\quad F\longmapsto\operatorname{div}_{v}F\] \[\operatorname{div}_{v}F:\ \mathcal{V}\longrightarrow\mathbb{R}\quad v_{i}\longmapsto\operatorname{div}_{v}F\left(v_{i}\right)=-\nabla_{v}^{*}F\left(v_{i}\right)=\] \[\sum_{a_{q}\in\mathcal{A}_{H}}\left(\delta_{out}\left(v_{i},a_{q}\right)\frac{w_{G}\left(v_{i}\right)^{\eta}}{\left|a_{q}^{out}\right|}-\delta_{in}\left(v_{i},a_{q}\right)\frac{w_{G}\left(v_{i}\right)^{\epsilon}}{\left|a_{q}^{in}\right|}\right)W_{I}\left(a_{q}\right)^{\beta}W_{G}\left(a_{q}\right)^{\gamma}F\left(a_{q}\right). \tag{18}\]

### \(p\)-Laplacian operators for oriented hypergraphs

Based on the previous definitions we introduce a generalized vertex \(p\)-Laplacian inspired by the continuum setting, which implies that for all \(p\in\left(1,\infty\right)\) and all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\) it holds true that: \[\Delta_{v}^{p}f=\operatorname{div}_{v}\left(\left|\nabla_{v}f\right|^{p-2}\nabla_{v}f\right).\] Note that from the definition of the divergence as a negative adjoint of the gradient it becomes clear that the oriented hypergraph \(p\)-Laplacian is the negative variation of the \(p\)-norm of the gradient, which allows one to apply the full theory of eigenvalues of \(p\)-homogeneous functionals (see [2]). In particular, the oriented hypergraph Laplacian is a negative semidefinite linear operator and has a spectrum on the negative real line.
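The divergence of Definition 9 admits an equally direct transcription. Again this is a sketch of ours, with all weight functions defaulting to one:

```python
def vertex_divergence(F, A_H, vertices, w_G=None, W_I=None, W_G=None,
                      beta=0.0, gamma=0.0, eps=0.0, eta=0.0):
    """Vertex divergence of Definition 9. F assigns a real to each
    hyperarc (indexed as in A_H); the result maps each vertex to a real."""
    w_G = w_G or (lambda v: 1.0)
    W_I = W_I or (lambda a: 1.0)
    W_G = W_G or (lambda a: 1.0)
    div = {v: 0.0 for v in vertices}
    for q, (a_out, a_in) in enumerate(A_H):
        weight = W_I((a_out, a_in))**beta * W_G((a_out, a_in))**gamma * F[q]
        for v in a_out:                       # output vertices, + sign
            div[v] += w_G(v)**eta / len(a_out) * weight
        for v in a_in:                        # input vertices, - sign
            div[v] -= w_G(v)**eps / len(a_in) * weight
    return div
```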
**Definition 10** (Vertex \(p\)-Laplacian operator \(\Delta_{v}^{p}\)).: _For an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and with hyperarc weight functions \(W_{I}\) and \(W_{G}\), the vertex \(p\)-Laplacian operator \(\Delta_{v}^{p}\) with parameters \(\alpha,\beta,\gamma,\epsilon,\eta\in\mathbb{R}\) is given by:_ \[\Delta_{v}^{p}:\;\mathcal{H}\left(\mathcal{V}\right)\longrightarrow \mathcal{H}\left(\mathcal{V}\right)\quad f\longmapsto\Delta_{v}^{p}f\;\;\; \;\;\;\;\;\;\;\;\Delta_{v}^{p}f:\;\mathcal{V}\longrightarrow\mathbb{R}\quad v _{i}\longmapsto\Delta_{v}^{p}f\left(v_{i}\right)=\] \[\sum_{a_{q}\in\mathcal{A}_{H}}\left(\delta_{out}\left(v_{i},a_{q} \right)\frac{w_{G}\left(v_{i}\right)^{\eta}}{\left|a_{q}^{out}\right|}-\delta_ {in}\left(v_{i},a_{q}\right)\frac{w_{G}\left(v_{i}\right)^{\epsilon}}{\left|a _{q}^{in}\right|}\right)W_{I}\left(a_{q}\right)^{\beta}W_{G}\left(a_{q}\right) ^{p\gamma}\] \[\left|\sum_{v_{j}\in\mathcal{V}}\left(\delta_{in}\left(v_{j},a_{ q}\right)\frac{w_{I}\left(v_{j}\right)^{\alpha}w_{G}\left(v_{j}\right)^{ \epsilon}}{\left|a_{q}^{in}\right|}-\delta_{out}\left(v_{j},a_{q}\right)\frac{w _{I}\left(v_{j}\right)^{\alpha}w_{G}\left(v_{j}\right)^{\eta}}{\left|a_{q}^{ out}\right|}\right)f\left(v_{j}\right)\right|^{p-2}\] \[\sum_{v_{k}\in\mathcal{V}}\left(\delta_{in}\left(v_{k},a_{q} \right)\frac{w_{I}\left(v_{k}\right)^{\alpha}w_{G}\left(v_{k}\right)^{\epsilon }}{\left|a_{q}^{in}\right|}-\delta_{out}\left(v_{k},a_{q}\right)\frac{w_{I} \left(v_{k}\right)^{\alpha}w_{G}\left(v_{k}\right)^{\eta}}{\left|a_{q}^{out} \right|}\right)f\left(v_{k}\right). \tag{19}\] The following theorem states that the vertex \(p\)-Laplacian is well-defined. Theorem 2.2 (Connection vertex gradient \(\nabla_{v}\), vertex divergence \(\operatorname{div}_{v}\), and vertex \(p\)-Laplacian \(\Delta_{v}^{p}\)).: _For an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperarc weight functions \(W_{I}\) and \(W_{G}\), the vertex \(p\)-Laplacian \(\Delta_{v}^{p}\) fulfills the equality_ \[\Delta_{v}^{p}f=\operatorname{div}_{v}\left(\left|\nabla_{v}f\right|^{p-2} \nabla_{v}f\right) \tag{20}\] _for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\)._ Proof.: See [7] Theorem 10.13 (Connection vertex divergence \(\operatorname{div}_{v}\), vertex gradient \(\nabla_{v}\), and vertex \(p\)-Laplacian \(\Delta_{v}^{p}\)). Moreover, our vertex \(p\)-Laplacian definition is a valid generalization of the definition introduced in [10]. 
Remark 4 (Parameter choice for the vertex \(p\)-Laplacian operator).: The simplified definition of the vertex \(p\)-Laplacian introduced in [10] for any vertex function \(f\in\mathcal{H}\left(\mathcal{V}\right)\) and for any vertex \(v_{i}\in\mathcal{V}\) can be written in our notation as: \[\Delta_{p}f\left(v_{i}\right) =\frac{1}{\deg\left(v_{i}\right)}\sum_{\begin{subarray}{c}a_{q}\in\mathcal{A}_{H}:\;\delta_{out}\left(v_{i},a_{q}\right)=1\\ \text{ or }\delta_{in}\left(v_{i},a_{q}\right)=1\end{subarray}}\left|\sum_{v_{j}\in a_{q}^{in}}f\left(v_{j}\right)-\sum_{v_{j}\in a_{q}^{out}}f\left(v_{j}\right)\right|^{p-2}\] \[\left(\sum_{v_{k}\in\mathcal{V}}\left(\delta_{out}\left(v_{i},a_{q}\right)\delta_{out}\left(v_{k},a_{q}\right)+\delta_{in}\left(v_{i},a_{q}\right)\delta_{in}\left(v_{k},a_{q}\right)\right)f\left(v_{k}\right)-\right.\] \[\left.\sum_{v_{k}\in\mathcal{V}}\left(\delta_{out}\left(v_{i},a_{q}\right)\delta_{in}\left(v_{k},a_{q}\right)+\delta_{in}\left(v_{i},a_{q}\right)\delta_{out}\left(v_{k},a_{q}\right)\right)f\left(v_{k}\right)\right). \tag{21}\] The factor \(\left(\delta_{out}\left(v_{i},a_{q}\right)\delta_{out}\left(v_{k},a_{q}\right)+\delta_{in}\left(v_{i},a_{q}\right)\delta_{in}\left(v_{k},a_{q}\right)\right)\) is always equal to zero, unless \(v_{i},v_{k}\in a_{q}^{out}\) or \(v_{i},v_{k}\in a_{q}^{in}\), which means that the vertices \(v_{i}\) and \(v_{k}\) are co-oriented. Similarly, the factor \(\left(\delta_{out}\left(v_{i},a_{q}\right)\delta_{in}\left(v_{k},a_{q}\right)+\delta_{in}\left(v_{i},a_{q}\right)\delta_{out}\left(v_{k},a_{q}\right)\right)\) ensures that only vertices \(v_{k}\in\mathcal{V}\) are considered which are anti-oriented compared to vertex \(v_{i}\), and hence either \(v_{i}\in a_{q}^{out},v_{k}\in a_{q}^{in}\) or \(v_{i}\in a_{q}^{in},v_{k}\in a_{q}^{out}\). Thus, choosing the parameters of the vertex \(p\)-Laplacian \(\Delta_{v}^{p}\) as \(\alpha=0\), \(\beta=0\), \(\gamma=0\), \(\epsilon=0\) and \(\eta=0\) together with excluding the \(\frac{1}{\left|a_{q}^{out}\right|}\) and \(\frac{1}{\left|a_{q}^{in}\right|}\) multiplicative factors and including a new \(-\frac{1}{\deg\left(v_{i}\right)}\) factor in the vertex adjoint and the vertex divergence, results in the simplified vertex \(p\)-Laplacian introduced in [10]. Moreover, applying these parameter choices to the vertex gradient, the vertex adjoint and the vertex divergence leads to the following definitions for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\), all hyperarc functions \(F\in\mathcal{H}\left(\mathcal{A}_{H}\right)\), for all hyperarcs \(a_{q}\in\mathcal{A}_{H}\) and all vertices \(v_{i}\in\mathcal{V}\): \[\nabla_{v}f\left(a_{q}\right)=\sum_{v_{i}\in\mathcal{V}}\left(\delta_{in}\left(v_{i},a_{q}\right)-\delta_{out}\left(v_{i},a_{q}\right)\right)f\left(v_{i}\right)\] \[\nabla_{v}^{*}F\left(v_{i}\right)=-\frac{1}{\deg\left(v_{i}\right)}\sum_{a_{q}\in\mathcal{A}_{H}}\left(\delta_{in}\left(v_{i},a_{q}\right)-\delta_{out}\left(v_{i},a_{q}\right)\right)F\left(a_{q}\right)\] \[\operatorname{div}_{v}\left(F\right)\left(v_{i}\right)=-\frac{1}{\deg\left(v_{i}\right)}\sum_{a_{q}\in\mathcal{A}_{H}}\left(\delta_{out}\left(v_{i},a_{q}\right)-\delta_{in}\left(v_{i},a_{q}\right)\right)F\left(a_{q}\right)\] Proof: See [7] Theorem 10.12 (Parameter choice for the vertex \(p\)-Laplacian operator).
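Combining the two sketches above immediately gives the \(p\)-Laplacian via the relation \(\Delta_{v}^{p}f=\operatorname{div}_{v}\left(\left|\nabla_{v}f\right|^{p-2}\nabla_{v}f\right)\), together with an explicit Euler step of the induced scale-space flow \(\partial_{t}f=\Delta_{v}^{p}f\). The following sketch is ours; the step size is a heuristic choice, not prescribed by the theory:

```python
import numpy as np

def p_laplacian(f, A_H, vertices, p=2.0):
    """Vertex p-Laplacian via Delta_v^p f = div(|grad f|^(p-2) grad f),
    reusing vertex_gradient and vertex_divergence from above with all
    weight exponents at zero. (For p < 2, the flux would need a small
    regularization wherever the gradient vanishes.)"""
    g = vertex_gradient(f, A_H)
    flux = np.abs(g) ** (p - 2.0) * g        # |grad f|^(p-2) grad f, per hyperarc
    return vertex_divergence(flux, A_H, vertices)

def diffusion_step(f, A_H, vertices, dt=0.05, p=2.0):
    """One explicit Euler step of the scale-space flow df/dt = Delta_v^p f."""
    lap = p_laplacian(f, A_H, vertices, p=p)
    return {v: f[v] + dt * lap[v] for v in vertices}

# Example: diffuse an indicator function on the Example 2 hypergraph.
f = {v: 1.0 if v == 1 else 0.0 for v in V}
for _ in range(10):
    f = diffusion_step(f, A_H, V, dt=0.05, p=2.0)
```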
Based on our axiomatic definition of the \(p\)-Laplacian via a gradient and adjoint divergence, it is straightforward to verify its variational structure: Theorem 3.3 (\(p\)-Laplacian energy and derivatives): _For an oriented hypergraph \(OH=\left(\mathcal{V},\mathcal{A}_{H}\right)\), the negative hypergraph \(p\)-Laplacian for \(p\in\left(1,\infty\right)\) is the first variation of the associated \(p\)-Dirichlet energy_ \[E_{p}[f]:=\frac{1}{p}\sum_{a_{q}\in\mathcal{A}_{H}}W_{I}\left(a_{q}\right)^{\beta}\left|\nabla_{v}f\left(a_{q}\right)\right|^{p}, \tag{22}\] _i.e. for every vertex function \(f\in\mathcal{H}\left(\mathcal{V}\right)\) we have_ \[-\Delta_{v}^{p}f=E_{p}^{\prime}[f]. \tag{23}\]

### First-order differential operators for unoriented hypergraphs

In order to retrieve a meaningful definition of a gradient also in the case of an unoriented hypergraph, where vertices in a hyperedge cannot be separated into output and input vertices, we appoint for each hyperedge \(e_{q}\in\mathcal{E}_{H}\) a specific vertex \(v_{\tilde{q}}:=v_{i}\in e_{q}\), which all other vertices in the hyperedge \(v_{j}\in e_{q}\setminus\left\{v_{\tilde{q}}\right\}\) are compared to. _Remark 5_.: If it is not clear how to choose a suitable special vertex \(v_{\tilde{q}}\) for each hyperedge \(e_{q}\in\mathcal{E}_{H}\) based on the application, then it is also possible to include each hyperedge \(e_{q}\) exactly \(|e_{q}|\) times in the set of hyperedges \(\mathcal{E}_{H}\), where each version of the hyperedge has a different special vertex \(v_{i}\in e_{q}\) (however, note that this requires paying particular attention to notation as mentioned in Remark 2). This means that we are able to generate an oriented hypergraph out of an unoriented one as follows: For each hyperedge \(e_{q}\in\mathcal{E}_{H}\) in the unoriented hypergraph, we create \(|e_{q}|\) hyperarcs for the oriented hypergraph with the same vertices as \(e_{q}\). Each hyperarc has one output and \(|e_{q}|-1\) input vertices and each vertex \(v_{i}\in e_{q}\) is an output vertex in exactly one newly created hyperarc. The associated vertex gradient, adjoint, and \(p\)-Laplacian operators for the unoriented hypergraph then follow the respective definitions of the oriented hypergraph case. Before defining new differential operators for unoriented hypergraphs, it is necessary to introduce a new vertex-hyperedge characteristic function. Definition 11 (Vertex-hyperedge characteristic function \(\tilde{\delta}\)).: For an unoriented hypergraph \(UH=(\mathcal{V},\mathcal{E}_{H})\), we define the vertex-hyperedge characteristic function \(\tilde{\delta}\) as \[\tilde{\delta}:\ \mathcal{V}\times\mathcal{E}_{H}\longrightarrow\{0,1\}\qquad(v_{i},e_{q})\longmapsto\tilde{\delta}\left(v_{i},e_{q}\right)=\left\{\begin{array}{ll}1&\text{ $v_{i}=v_{\tilde{q}}$}\\ 0&\text{ otherwise}\end{array}\right. \tag{24}\] which indicates if vertex \(v_{i}\in\mathcal{V}\) is the special vertex \(v_{\tilde{q}}\) of hyperedge \(e_{q}\in\mathcal{E}_{H}\). Furthermore, the following connection to the vertex-hyperedge characteristic function \(\delta\) holds true for all vertices \(v_{i}\in\mathcal{V}\) and all hyperedges \(e_{q}\in\mathcal{E}_{H}\): \[\tilde{\delta}\left(v_{i},e_{q}\right)=1\quad\Longrightarrow\quad\delta\left(v_{i},e_{q}\right)=1. \tag{25}\] The vertex gradient operator for unoriented hypergraphs is defined with the same weight functions and parameters as the definition in the oriented case.
Definition 12 (Vertex gradient operator \(\nabla_{v}\)).: For an unoriented hypergraph \(UH=(\mathcal{V},\mathcal{E}_{H})\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperedge weight function \(W_{G}\), the vertex gradient operator \(\nabla_{v}\) with parameters \(\alpha,\gamma,\epsilon,\eta\in\mathbb{R}\) is given as: \[\nabla_{v}:\ \mathcal{H}\left(\mathcal{V}\right)\longrightarrow\mathcal{H}\left(\mathcal{E}_{H}\right)\quad f\longmapsto\nabla_{v}f\] \[\nabla_{v}f:\ \mathcal{E}_{H}\longrightarrow\mathbb{R}\quad e_{q}\longmapsto\nabla_{v}f\left(e_{q}\right)\] with \[\nabla_{v}f\left(e_{q}\right)=\] \[W_{G}\left(e_{q}\right)^{\gamma}\left(\sum_{v_{i}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)\left(w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}f\left(v_{i}\right)-w_{I}\left(v_{\tilde{q}}\right)^{\alpha}w_{G}\left(v_{\tilde{q}}\right)^{\eta}f\left(v_{\tilde{q}}\right)\right)\right).\] The above gradient can be rewritten as: \[\nabla_{v}f\left(e_{q}\right)=\] \[W_{G}\left(e_{q}\right)^{\gamma}\left(\left(\sum_{v_{i}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}f\left(v_{i}\right)\right)-|e_{q}|\,w_{I}\left(v_{\tilde{q}}\right)^{\alpha}w_{G}\left(v_{\tilde{q}}\right)^{\eta}f\left(v_{\tilde{q}}\right)\right). \tag{26}\] The vertex gradient for unoriented hypergraphs also fulfills the expected property of a constant vertex function \(f\in\mathcal{H}\left(\mathcal{V}\right)\) resulting in a vanishing gradient. Theorem 4.1 (Vertex gradient operator properties): _The vertex gradient \(\nabla_{v}\) of an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperedge weight function \(W_{G}\), satisfies the following property: If the vertex weights satisfy the condition_ \[w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}=w_{I}\left(v_{\tilde{q}}\right)^{\alpha}w_{G}\left(v_{\tilde{q}}\right)^{\eta}\] _for all vertex-hyperedge combinations \(v_{i}\in e_{q}\) and \(e_{q}\in\mathcal{E}_{H}\), then for every constant function \(f\in\mathcal{H}\left(\mathcal{V}\right)\), i.e. \(f\left(v_{i}\right)\equiv\overline{f}\) for all vertices \(v_{i}\in\mathcal{V}\), we get \(\nabla_{v}f\left(e_{q}\right)=0\) for all hyperedges \(e_{q}\in\mathcal{E}_{H}\)._ Proof: Let \(f\in\mathcal{H}\left(\mathcal{V}\right)\) be a constant vertex function on an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperedge weight function \(W_{G}\). The property \(w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}=w_{I}\left(v_{\tilde{q}}\right)^{\alpha}w_{G}\left(v_{\tilde{q}}\right)^{\eta}\) for all vertex-hyperedge combinations \(v_{i}\in e_{q}\) and \(e_{q}\in\mathcal{E}_{H}\) implies that for each hyperedge \(e_{q}\) there exists a constant \(w_{e_{q}}\in\mathbb{R}_{>0}\) such that \[w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}=w_{I}\left(v_{\tilde{q}}\right)^{\alpha}w_{G}\left(v_{\tilde{q}}\right)^{\eta}=:w_{e_{q}}\] for all vertices \(v_{i}\in e_{q}\).
Thus, together with the property \(f\left(v_{i}\right)\equiv\overline{f}\in\mathbb{R}\) for all vertices \(v_{i}\in\mathcal{V}\), this yields for every hyperedge \(e_{q}\in\mathcal{E}_{H}\): \[\nabla_{v}f\left(e_{q}\right) =W_{G}\left(e_{q}\right)^{\gamma}\left(\sum_{v_{i}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)\left(w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}\overline{f}-w_{I}\left(v_{\tilde{q}}\right)^{\alpha}w_{G}\left(v_{\tilde{q}}\right)^{\eta}\overline{f}\right)\right)\] \[=W_{G}\left(e_{q}\right)^{\gamma}\left(\sum_{v_{i}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)\left(w_{I}\left(v_{i}\right)^{\alpha}w_{G}\left(v_{i}\right)^{\epsilon}-w_{I}\left(v_{\tilde{q}}\right)^{\alpha}w_{G}\left(v_{\tilde{q}}\right)^{\eta}\right)\overline{f}\right)\] \[=W_{G}\left(e_{q}\right)^{\gamma}\left(\sum_{v_{i}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)\left(w_{e_{q}}-w_{e_{q}}\right)\overline{f}\right)\] \[=W_{G}\left(e_{q}\right)^{\gamma}\left|e_{q}\right|\left(w_{e_{q}}-w_{e_{q}}\right)\overline{f}\] \[=W_{G}\left(e_{q}\right)^{\gamma}\left|e_{q}\right|\cdot 0\cdot\overline{f}=0\] where the last equality holds because the hyperedge weight \(W_{G}\left(e_{q}\right)\) and the constant \(\overline{f}\) are real numbers and the number of vertices \(\left|e_{q}\right|\) in every hyperedge is finite. Based on the connections in the continuum setting \[\left\langle G,\nabla_{v}f\right\rangle_{\mathcal{H}\left(\mathcal{E}_{H}\right)}=\left\langle f,\nabla_{v}^{\ast}G\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)}\] \[\operatorname{div}_{v}F=-\nabla_{v}^{\ast}F\] for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\) and all hyperedge functions \(F,G\in\mathcal{H}\left(\mathcal{E}_{H}\right)\), we define the vertex adjoint and vertex divergence operators for unoriented hypergraphs. **Definition 13**: **(Vertex adjoint operator \(\nabla_{v}^{*}\)).** _For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight function \(w_{G}\), and hyperedge weight functions \(W_{I}\) and \(W_{G}\), the vertex adjoint operator \(\nabla_{v}^{*}\) with parameters \(\beta,\gamma,\epsilon,\eta\in\mathbb{R}\) is given by:_ \[\nabla_{v}^{*}:\ \mathcal{H}\left(\mathcal{E}_{H}\right)\longrightarrow\mathcal{H}\left(\mathcal{V}\right)\quad F\longmapsto\nabla_{v}^{*}F\] \[\nabla_{v}^{*}F:\ \mathcal{V}\longrightarrow\mathbb{R}\quad v_{i}\longmapsto\nabla_{v}^{*}F\left(v_{i}\right)=\] \[\sum_{e_{q}\in\mathcal{E}_{H}}\left(\delta\left(v_{i},e_{q}\right)w_{G}\left(v_{i}\right)^{\epsilon}-\tilde{\delta}\left(v_{i},e_{q}\right)\left|e_{q}\right|w_{G}\left(v_{i}\right)^{\eta}\right)W_{I}\left(e_{q}\right)^{\beta}W_{G}\left(e_{q}\right)^{\gamma}F\left(e_{q}\right).
\tag{27}\] **Theorem 5**: **(Connection vertex gradient \(\nabla_{v}\) and vertex adjoint \(\nabla_{v}^{*}\)).** _For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperedge weight functions \(W_{I}\) and \(W_{G}\), the vertex gradient \(\nabla_{v}\) and the vertex adjoint \(\nabla_{v}^{*}\) fulfill the equality_ \[\left\langle G,\nabla_{v}f\right\rangle_{\mathcal{H}\left(\mathcal{E}_{H}\right)}=\left\langle f,\nabla_{v}^{*}G\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)} \tag{28}\] _for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\) and all hyperedge functions \(G\in\mathcal{H}\left(\mathcal{E}_{H}\right)\)._ _Proof._ For the sake of readability, the proof is given in the appendix. As in the case of the oriented hypergraph, we define the vertex divergence operator based on the vertex adjoint operator. **Definition 14**: **(Vertex divergence operator \(\mathrm{div}_{v}\)).** _For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight function \(w_{G}\), and hyperedge weight functions \(W_{I}\) and \(W_{G}\), the vertex divergence operator \(\mathrm{div}_{v}\) with parameters \(\beta,\gamma,\epsilon,\eta\in\mathbb{R}\) is given by:_ \[\mathrm{div}_{v}:\ \mathcal{H}\left(\mathcal{E}_{H}\right)\longrightarrow\mathcal{H}\left(\mathcal{V}\right)\quad F\longmapsto\mathrm{div}_{v}F\] \[\mathrm{div}_{v}F:\ \mathcal{V}\longrightarrow\mathbb{R}\quad v_{i}\longmapsto\mathrm{div}_{v}F\left(v_{i}\right)=-\nabla_{v}^{*}F\left(v_{i}\right)=\] \[\sum_{e_{q}\in\mathcal{E}_{H}}\left(\tilde{\delta}\left(v_{i},e_{q}\right)\left|e_{q}\right|w_{G}\left(v_{i}\right)^{\eta}-\delta\left(v_{i},e_{q}\right)w_{G}\left(v_{i}\right)^{\epsilon}\right)W_{I}\left(e_{q}\right)^{\beta}W_{G}\left(e_{q}\right)^{\gamma}F\left(e_{q}\right). \tag{29}\] ### \(p\)-Laplacian operators for unoriented hypergraphs Analogously to the case of the oriented hypergraph, in this subsection we present a definition for the vertex \(p\)-Laplacian based on the vertex gradient and vertex divergence. The vertex Laplacian obtained from an averaging perspective can be found in the next subsection.
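To make Definitions 12-14 concrete before moving on, the following minimal Python sketch (an illustration of ours, not part of the operator derivation) implements the vertex gradient (26), the vertex adjoint (27) and the vertex divergence (29) on a toy hypergraph. Two ingredients are assumptions on our part, since they were fixed earlier in the text rather than restated here: the indicator \(\tilde{\delta}(v_{i},e_{q})\) is taken to select the special vertex \(v_{\bar{q}}\) of \(e_{q}\), and the inner products on \(\mathcal{H}(\mathcal{V})\) and \(\mathcal{H}(\mathcal{E}_{H})\) are taken with weights \(w_{I}(v_{i})^{\alpha}\) and \(W_{I}(e_{q})^{\beta}\). With these choices the sketch also checks the adjoint relation (28) of Theorem 5 numerically.

```python
import numpy as np

# Toy unoriented hypergraph: 4 vertices, 2 hyperedges.
# Each hyperedge is (set of member vertices, index of its special vertex v_qbar).
V = [0, 1, 2, 3]
E = [({0, 1, 2}, 1), ({1, 3}, 3)]

rng = np.random.default_rng(0)
w_I, w_G = rng.uniform(0.5, 2.0, len(V)), rng.uniform(0.5, 2.0, len(V))
W_I, W_G = rng.uniform(0.5, 2.0, len(E)), rng.uniform(0.5, 2.0, len(E))
alpha, beta, gamma, eps, eta = 1.0, 1.0, 1.0, 1.0, 1.0  # illustrative parameters

def grad(f):
    """Vertex gradient of Eq. (26)."""
    out = np.zeros(len(E))
    for q, (e, vq) in enumerate(E):
        s = sum(w_I[i] ** alpha * w_G[i] ** eps * f[i] for i in e)
        out[q] = W_G[q] ** gamma * (s - len(e) * w_I[vq] ** alpha * w_G[vq] ** eta * f[vq])
    return out

def adjoint(F):
    """Vertex adjoint of Eq. (27); delta-tilde selects the special vertex."""
    out = np.zeros(len(V))
    for q, (e, vq) in enumerate(E):
        c = W_I[q] ** beta * W_G[q] ** gamma * F[q]
        for i in e:
            out[i] += w_G[i] ** eps * c          # delta(v_i, e_q) term
        out[vq] -= len(e) * w_G[vq] ** eta * c   # delta-tilde(v_i, e_q) term
    return out

def divergence(F):
    """Vertex divergence of Eq. (29): div_v = -adjoint."""
    return -adjoint(F)

# Numerical check of the adjoint relation (28).
f, G = rng.normal(size=len(V)), rng.normal(size=len(E))
lhs = np.sum(W_I ** beta * G * grad(f))      # <G, grad f> on H(E_H)
rhs = np.sum(w_I ** alpha * f * adjoint(G))  # <f, adjoint G> on H(V)
assert np.isclose(lhs, rhs)
```

Composing these routines as in Theorem 6 below, \(\operatorname{div}_{v}(|\nabla_{v}f|^{p-2}\nabla_{v}f)\), yields the vertex \(p\)-Laplacian of the next definition.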
**Definition 15**: **(Vertex \(p\)-Laplacian operator \(\Delta_{v}^{p}\)).** _For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperedge weight functions \(W_{I}\) and \(W_{G}\), the vertex \(p\)-Laplacian operator \(\Delta_{v}^{p}\) with parameters \(\alpha,\beta,\gamma,\epsilon,\eta\in\mathbb{R}\) is given by:_ \[\Delta_{v}^{p}:\ \mathcal{H}\left(\mathcal{V}\right)\longrightarrow\mathcal{H}\left(\mathcal{V}\right)\quad f\longmapsto\Delta_{v}^{p}f\quad\quad\Delta_{v}^{p}f:\ \mathcal{V}\longrightarrow\mathbb{R}\quad v_{i}\longmapsto\Delta_{v}^{p}f\left(v_{i}\right)=\] \[\sum_{e_{q}\in\mathcal{E}_{H}}\left(\delta\left(v_{i},e_{q}\right)w_{G}\left(v_{i}\right)^{\epsilon}-\tilde{\delta}\left(v_{i},e_{q}\right)\left|e_{q}\right|w_{G}\left(v_{i}\right)^{\eta}\right)W_{I}\left(e_{q}\right)^{\beta}W_{G}\left(e_{q}\right)^{p\gamma}\] \[\left|\left(\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)w_{I}\left(v_{j}\right)^{\alpha}w_{G}\left(v_{j}\right)^{\epsilon}f\left(v_{j}\right)\right)-\left|e_{q}\right|w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right|^{p-2}\] \[\left(\left(\sum_{v_{k}\in\mathcal{V}}\delta\left(v_{k},e_{q}\right)w_{I}\left(v_{k}\right)^{\alpha}w_{G}\left(v_{k}\right)^{\epsilon}f\left(v_{k}\right)\right)-\left|e_{q}\right|w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right). \tag{30}\] **Theorem 6** (Connection vertex gradient \(\nabla_{v}\), vertex divergence \(\mathrm{div}_{v}\), and vertex \(p\)-Laplacian \(\Delta_{v}^{p}\)): _For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperedge weight functions \(W_{I}\) and \(W_{G}\), the presented vertex \(p\)-Laplacian \(\Delta_{v}^{p}\) fulfills the equality_ \[\Delta_{v}^{p}f=\mathrm{div}_{v}\left(\left|\nabla_{v}f\right|^{p-2}\nabla_{v}f\right) \tag{31}\] _for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\)._ _Proof._ Given an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with vertex weight functions \(w_{I}\) and \(w_{G}\), and hyperedge weight functions \(W_{I}\) and \(W_{G}\), and a vertex function \(f\in\mathcal{H}\left(\mathcal{V}\right)\), the definitions of the vertex divergence operator \(\mathrm{div}_{v}\) and the vertex gradient operator \(\nabla_{v}\) lead to the following for all vertices \(v_{i}\in\mathcal{V}\): \[\mathrm{div}_{v}\left(\left|\nabla_{v}f\right|^{p-2}\nabla_{v}f\right)\left(v_{i}\right)\] \[= \sum_{e_{q}\in\mathcal{E}_{H}}\left(\delta\left(v_{i},e_{q}\right)w_{G}\left(v_{i}\right)^{\epsilon}-\tilde{\delta}\left(v_{i},e_{q}\right)\left|e_{q}\right|w_{G}\left(v_{i}\right)^{\eta}\right)W_{I}\left(e_{q}\right)^{\beta}W_{G}\left(e_{q}\right)^{\gamma}\] \[\quad\left|\nabla_{v}f\left(e_{q}\right)\right|^{p-2}\nabla_{v}f\left(e_{q}\right)\] \[= \sum_{e_{q}\in\mathcal{E}_{H}}\left(\delta\left(v_{i},e_{q}\right)w_{G}\left(v_{i}\right)^{\epsilon}-\tilde{\delta}\left(v_{i},e_{q}\right)\left|e_{q}\right|w_{G}\left(v_{i}\right)^{\eta}\right)W_{I}\left(e_{q}\right)^{\beta}W_{G}\left(e_{q}\right)^{\gamma}\] \[\quad\left|W_{G}\left(e_{q}\right)^{\gamma}\left(\left(\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)w_{I}\left(v_{j}\right)^{\alpha}w_{G}\left(v_{j}\right)^{\epsilon}f\left(v_{j}\right)\right)-\left|e_{q}\right|
w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right)\right|^{p-2}\] \[\quad W_{G}\left(e_{q}\right)^{\gamma}\left(\left(\sum_{v_{k}\in\mathcal{V}}\delta\left(v_{k},e_{q}\right)w_{I}\left(v_{k}\right)^{\alpha}w_{G}\left(v_{k}\right)^{\epsilon}f\left(v_{k}\right)\right)-\left|e_{q}\right|w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right)\] Since the hyperedge weight function \(W_{G}\) maps to positive values, the following equality holds true and leads to the vertex \(p\)-Laplacian definition for unoriented hypergraphs: \[= \sum_{e_{q}\in\mathcal{E}_{H}}\left(\delta\left(v_{i},e_{q}\right)w_{G}\left(v_{i}\right)^{\epsilon}-\tilde{\delta}\left(v_{i},e_{q}\right)\left|e_{q}\right|w_{G}\left(v_{i}\right)^{\eta}\right)W_{I}\left(e_{q}\right)^{\beta}W_{G}\left(e_{q}\right)^{\gamma+\gamma\left(p-2\right)+\gamma}\] \[\left|\left(\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)w_{I}\left(v_{j}\right)^{\alpha}w_{G}\left(v_{j}\right)^{\epsilon}f\left(v_{j}\right)\right)-\left|e_{q}\right|w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right|^{p-2}\] \[\left(\left(\sum_{v_{k}\in\mathcal{V}}\delta\left(v_{k},e_{q}\right)w_{I}\left(v_{k}\right)^{\alpha}w_{G}\left(v_{k}\right)^{\epsilon}f\left(v_{k}\right)\right)-\left|e_{q}\right|w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right)\] \[= \sum_{e_{q}\in\mathcal{E}_{H}}\left(\delta\left(v_{i},e_{q}\right)w_{G}\left(v_{i}\right)^{\epsilon}-\tilde{\delta}\left(v_{i},e_{q}\right)\left|e_{q}\right|w_{G}\left(v_{i}\right)^{\eta}\right)W_{I}\left(e_{q}\right)^{\beta}W_{G}\left(e_{q}\right)^{p\gamma}\] \[\left|\left(\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)w_{I}\left(v_{j}\right)^{\alpha}w_{G}\left(v_{j}\right)^{\epsilon}f\left(v_{j}\right)\right)-\left|e_{q}\right|w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right|^{p-2}\] \[\left(\left(\sum_{v_{k}\in\mathcal{V}}\delta\left(v_{k},e_{q}\right)w_{I}\left(v_{k}\right)^{\alpha}w_{G}\left(v_{k}\right)^{\epsilon}f\left(v_{k}\right)\right)-\left|e_{q}\right|w_{I}\left(v_{\bar{q}}\right)^{\alpha}w_{G}\left(v_{\bar{q}}\right)^{\eta}f\left(v_{\bar{q}}\right)\right)\] \[= \Delta_{v}^{p}f\left(v_{i}\right)\] Thus, the previously introduced definitions for the vertex gradient \(\nabla_{v}\), the vertex divergence \(\operatorname{div}_{v}\), and the vertex \(p\)-Laplacian \(\Delta_{v}^{p}\) satisfy the equality \(\Delta_{v}^{p}f\left(v_{i}\right)=\operatorname{div}_{v}\left(|\nabla_{v}f|^{p-2}\,\nabla_{v}f\right)\left(v_{i}\right)\) for all vertices \(v_{i}\in\mathcal{V}\) and for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\). ### Averaging operators on unoriented hypergraphs Instead of starting with a gradient definition in order to retrieve a feasible Laplacian operator for unoriented hypergraphs, we now want to define a Laplacian operator based on intuitive averaging. For this definition a special vertex \(v_{\bar{q}}\) for every hyperedge \(e_{q}\in\mathcal{E}_{H}\) is not necessary anymore. Before introducing the vertex averaging operator, we need the definition of the number of incident hyperedges. Definition 16 (Number of incident hyperedges): For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\), the number of incident hyperedges of a given vertex \(v_{i}\in\mathcal{V}\) is defined as: \[\#\mathcal{E}_{H}\left(v_{i}\right)=\left|\left\{e_{q}\in\mathcal{E}_{H}\ |\ v_{i}\in e_{q}\right\}\right|.
\tag{32}\] Note: We call vertices \(v_{i}\in\mathcal{V}\) with \(\#\mathcal{E}_{H}\left(v_{i}\right)=0\) isolated, since they are not connected to any other vertex \(v_{j}\in\mathcal{V}\). The averaging operator below aims at defining for a given vertex \(v_{i}\) the average value of a vertex function \(f\in\mathcal{H}\left(\mathcal{V}\right)\) by considering all hyperedges \(e_{q}\in\mathcal{E}_{H}\), which \(v_{i}\) is a part of, and then averaging the vertex function over all vertices \(v_{j}\in e_{q}\). Definition 17 (Vertex averaging operator \(\overline{\Delta_{v}}\)): For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) without any isolated vertices \(v_{i}\in\mathcal{V}\), i.e. \(\#\mathcal{E}_{H}\left(v_{i}\right)>0\) for all vertices \(v_{i}\in\mathcal{V}\), we define the vertex averaging operator as: \[\overline{\Delta_{v}}:\ \mathcal{H}\left(\mathcal{V}\right)\longrightarrow\mathcal{H}\left(\mathcal{V}\right)\quad f\longmapsto\overline{\Delta_{v}}f\qquad\overline{\Delta_{v}}f:\ \mathcal{V}\longrightarrow\mathbb{R}\quad v_{i}\longmapsto\overline{\Delta_{v}}f\left(v_{i}\right)=\] \[\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)f\left(v_{j}\right). \tag{33}\] By using a simplified version of the inner product on the space of all vertex functions \(\mathcal{H}\left(\mathcal{V}\right)\) with \(w_{I}\equiv 1\), we obtain an energy-conserving adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\). Definition 18 (Adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\)): For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) without any isolated vertices and with the previously defined averaging operator \(\overline{\Delta_{v}}\), the adjoint vertex averaging operator is given by: \[\overline{\Delta_{v}}^{*}:\ \mathcal{H}\left(\mathcal{V}\right)\longrightarrow\mathcal{H}\left(\mathcal{V}\right)\quad f\longmapsto\overline{\Delta_{v}}^{*}f\qquad\overline{\Delta_{v}}^{*}f:\ \mathcal{V}\longrightarrow\mathbb{R}\quad v_{i}\longmapsto\overline{\Delta_{v}}^{*}f\left(v_{i}\right)=\] \[\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{j}\right)}f\left(v_{j}\right). \tag{34}\] Theorem 7 (Connection between vertex averaging operator \(\overline{\Delta_{v}}\) and adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\)): For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) without isolated vertices and with any two vertex functions \(f,g\in\mathcal{H}\left(\mathcal{V}\right)\), the vertex averaging operator \(\overline{\Delta_{v}}\) and the adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\) satisfy the following equality \[\left\langle g,\overline{\Delta_{v}}f\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)}:=\sum_{v_{i}\in\mathcal{V}}g\left(v_{i}\right)\overline{\Delta_{v}}f\left(v_{i}\right)=\sum_{v_{j}\in\mathcal{V}}f\left(v_{j}\right)\overline{\Delta_{v}}^{*}g\left(v_{j}\right)=:\left\langle f,\overline{\Delta_{v}}^{*}g\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)}, \tag{35}\] where the inner product on the space of all vertex functions \(\mathcal{H}\left(\mathcal{V}\right)\) has the weight \(w_{I}\equiv 1\).
Proof: Given an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) without any isolated vertices and two vertex functions \(f,g\in\mathcal{H}\left(\mathcal{V}\right)\), the definitions of the vertex averaging operator \(\overline{\Delta_{v}}\) and the adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\) yield the following: \[\left\langle g,\overline{\Delta_{v}}f\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)} =\sum_{v_{i}\in\mathcal{V}}g\left(v_{i}\right)\overline{\Delta_{v}}f\left(v_{i}\right)\] \[=\sum_{v_{i}\in\mathcal{V}}g\left(v_{i}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)f\left(v_{j}\right)\] \[=\sum_{v_{i}\in\mathcal{V}}\sum_{e_{q}\in\mathcal{E}_{H}}\sum_{v_{j}\in\mathcal{V}}g\left(v_{i}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\delta\left(v_{j},e_{q}\right)f\left(v_{j}\right)\] \[=\sum_{v_{j}\in\mathcal{V}}f\left(v_{j}\right)\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{j},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{i}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}g\left(v_{i}\right)\] \[=\sum_{v_{j}\in\mathcal{V}}f\left(v_{j}\right)\overline{\Delta_{v}}^{*}g\left(v_{j}\right)=\left\langle f,\overline{\Delta_{v}}^{*}g\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)}\] Therefore, with the definitions of the vertex averaging operator \(\overline{\Delta_{v}}\) and the adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\), the equality \(\left\langle g,\overline{\Delta_{v}}f\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)}=\left\langle f,\overline{\Delta_{v}}^{*}g\right\rangle_{\mathcal{H}\left(\mathcal{V}\right)}\) holds true for all vertex functions \(f,g\in\mathcal{H}\left(\mathcal{V}\right)\).
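The adjoint pair of Definitions 17 and 18 is equally easy to check by machine. The following short sketch (illustrative only; the toy hypergraph is our own choice) implements both operators and verifies the equality (35) numerically with \(w_{I}\equiv 1\):

```python
import numpy as np

# Unoriented hypergraph without isolated vertices.
V = [0, 1, 2, 3]
E = [{0, 1, 2}, {1, 3}]
deg = [sum(i in e for e in E) for i in V]  # number of incident hyperedges

def avg(f):
    """Vertex averaging operator, Eq. (33)."""
    out = np.zeros(len(V))
    for e in E:
        m = sum(f[j] for j in e) / len(e)  # mean of f over the hyperedge
        for i in e:
            out[i] += m / deg[i]
    return out

def avg_adj(f):
    """Adjoint vertex averaging operator, Eq. (34)."""
    out = np.zeros(len(V))
    for e in E:
        m = sum(f[j] / deg[j] for j in e) / len(e)
        for i in e:
            out[i] += m
    return out

rng = np.random.default_rng(1)
f, g = rng.normal(size=len(V)), rng.normal(size=len(V))
assert np.isclose(g @ avg(f), f @ avg_adj(g))  # Eq. (35) with w_I = 1
```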
Example 3 (Vertex averaging operator does not conserve mean values): Given a set of vertices \(\mathcal{V}=\left\{v_{1},v_{2},v_{3},v_{4}\right\}\) and a set of hyperedges \(\mathcal{E}_{H}=\left\{\left\{v_{1},v_{2},v_{3}\right\},\left\{v_{2},v_{4}\right\}\right\}\), then the vertex averaging operator \(\overline{\Delta_{v}}\) on the unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) does not conserve the mean value for general vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\): \[\overline{\Delta_{v}}f\left(v_{1}\right) =\tfrac{1}{1}\left(\tfrac{1}{3}\left(f\left(v_{1}\right)+f\left(v_{2}\right)+f\left(v_{3}\right)\right)\right)\] \[\overline{\Delta_{v}}f\left(v_{2}\right) =\tfrac{1}{2}\left(\tfrac{1}{3}\left(f\left(v_{1}\right)+f\left(v_{2}\right)+f\left(v_{3}\right)\right)+\tfrac{1}{2}\left(f\left(v_{2}\right)+f\left(v_{4}\right)\right)\right)\] \[\overline{\Delta_{v}}f\left(v_{3}\right) =\tfrac{1}{1}\left(\tfrac{1}{3}\left(f\left(v_{1}\right)+f\left(v_{2}\right)+f\left(v_{3}\right)\right)\right)\] \[\overline{\Delta_{v}}f\left(v_{4}\right) =\tfrac{1}{1}\left(\tfrac{1}{2}\left(f\left(v_{2}\right)+f\left(v_{4}\right)\right)\right)\] \[\overline{\Delta_{v}}f\left(v_{1}\right)+\overline{\Delta_{v}}f\left(v_{2}\right)+\overline{\Delta_{v}}f\left(v_{3}\right)+\overline{\Delta_{v}}f\left(v_{4}\right)=\tfrac{5}{6}f\left(v_{1}\right)+\tfrac{19}{12}f\left(v_{2}\right)+\tfrac{5}{6}f\left(v_{3}\right)+\tfrac{3}{4}f\left(v_{4}\right)\] \[\neq f\left(v_{1}\right)+f\left(v_{2}\right)+f\left(v_{3}\right)+f\left(v_{4}\right)\] In contrast to this, the adjoint vertex averaging operator conserves the mean value for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\). Theorem 8 (Adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\) conserves mean values): For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) without isolated vertices and with any vertex function \(f\in\mathcal{H}\left(\mathcal{V}\right)\), the adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\) conserves the mean value of \(f\), hence the following equality holds: \[\sum_{v_{i}\in\mathcal{V}}\overline{\Delta_{v}}^{*}f\left(v_{i}\right)=\sum_{v_{i}\in\mathcal{V}}f\left(v_{i}\right).
\tag{36}\] Proof: Given an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) without any isolated vertices and a vertex function \(f\in\mathcal{H}\left(\mathcal{V}\right)\), the following reformulations hold true: \[\sum_{v_{i}\in\mathcal{V}}\overline{\Delta_{v}}^{*}f\left(v_{i}\right) =\sum_{v_{i}\in\mathcal{V}}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{j}\right)}f\left(v_{j}\right)\] \[=\sum_{v_{i}\in\mathcal{V}}\sum_{e_{q}\in\mathcal{E}_{H}}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\delta\left(v_{j},e_{q}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{j}\right)}f\left(v_{j}\right)\] \[=\sum_{v_{j}\in\mathcal{V}}f\left(v_{j}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{j}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{j},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{i}\in\mathcal{V}}\delta\left(v_{i},e_{q}\right)\] \[=\sum_{v_{j}\in\mathcal{V}}f\left(v_{j}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{j}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{j},e_{q}\right)\] \[=\sum_{v_{j}\in\mathcal{V}}f\left(v_{j}\right)\frac{1}{\#\mathcal{E}_{H}\left(v_{j}\right)}\,\#\mathcal{E}_{H}\left(v_{j}\right)\] \[=\sum_{v_{j}\in\mathcal{V}}f\left(v_{j}\right)\] Hence, the presented adjoint vertex averaging operator \(\overline{\Delta_{v}}^{*}\) conserves the mean value on any given unoriented hypergraph \(UH\) for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\). Let us put together the knowledge gained from the above analysis. Since we used the simple Euclidean scalar product and showed that the adjoint operator conserves the mean value, we can look at \(\overline{\Delta_{v}}-I\) as a suitable operator for a scale space analysis, in effect introducing a non-selfadjoint version of the Laplacian. The mean-value conservation of the adjoint shows that indeed \(\overline{\Delta_{v}}\) has eigenvalue one with constant eigenfunction. Thus, the evolution equation with operator \(\overline{\Delta_{v}}-I\) is expected to converge to a constant state and yield a suitable scale space, which we will investigate further below. Indeed, the averaging operator is self-adjoint if \(\#\mathcal{E}_{H}\left(v_{i}\right)\) is constant on the set of vertices \(\mathcal{V}\), and in this case \(\overline{\Delta_{v}}-I\) has the structure of a normal graph Laplacian. Lemma 1: _Given an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with \(\#\mathcal{E}_{H}\left(v_{i}\right)=\#\mathcal{E}_{H}\left(v_{j}\right)\) for all vertices \(v_{i},v_{j}\in\mathcal{V}\).
Then for all vertex functions \(f\in\mathcal{H}\left(\mathcal{V}\right)\), the operator \(\overline{\Delta_{v}}f-f\) is equivalent to the graph Laplacian on a weighted oriented graph for an arc \(\left(v_{i},v_{j}\right)\) from vertex \(v_{i}\) to vertex \(v_{j}\) with the particular weight function_ \[w(v_{i},v_{j}):=\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\frac{1}{\left|e_{q}\right|}\delta\left(v_{i},e_{q}\right)\delta\left(v_{j},e_{q}\right).\] Proof: For an unoriented hypergraph \(UH=\left(\mathcal{V},\mathcal{E}_{H}\right)\) with \(\#\mathcal{E}_{H}\left(v_{i}\right)=\#\mathcal{E}_{H}\left(v_{j}\right)\) for all vertices \(v_{i},v_{j}\in\mathcal{V}\), the equivalence becomes apparent from a simple change of summation: \[\left(\overline{\Delta_{v}}f-f\right)\left(v_{i}\right) = \left(\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)f(v_{j})\right)-f(v_{i})\] \[= \left(\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)f(v_{j})\right)- \left(\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)\right)f(v_{i})\] \[= \frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)\left(f(v_{j})-f(v_{i})\right)\] \[= \sum_{e_{q}\in\mathcal{E}_{H}}\sum_{v_{j}\in\mathcal{V}}\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\delta\left(v_{j},e_{q}\right)\left(f(v_{j})-f(v_{i})\right)\] \[= \sum_{v_{j}\in\mathcal{V}}\sum_{e_{q}\in\mathcal{E}_{H}}\left(\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\delta\left(v_{j},e_{q}\right)\left(f(v_{j})-f(v_{i})\right)\right)\] \[= \sum_{v_{j}\in\mathcal{V}}w\left(v_{i},v_{j}\right)\left(f(v_{j})-f(v_{i})\right)\] where the second equality uses that \(\frac{1}{\#\mathcal{E}_{H}\left(v_{i}\right)}\sum_{e_{q}\in\mathcal{E}_{H}}\delta\left(v_{i},e_{q}\right)\frac{1}{\left|e_{q}\right|}\sum_{v_{j}\in\mathcal{V}}\delta\left(v_{j},e_{q}\right)=1\). The term in the last row is exactly the traditional graph Laplace operator of vertex \(v_{i}\) for an oriented normal graph with a vertex function \(f\) (see [7], Remark 7.12 (Parameter choice for the vertex \(p\)-Laplacian operator)), where for the arc weight \(w\) it holds true that \(w\left(v_{i},v_{j}\right)=0\) if the arc \(\left(v_{i},v_{j}\right)\) does not exist in the oriented normal graph. Let us mention that the weighted oriented normal graph we obtain above could be considered the easiest map from an unoriented hypergraph to a weighted graph, since the weights essentially count the number of hyperedges \(e_{q}\in\mathcal{E}_{H}\) two vertices \(v_{i},v_{j}\in\mathcal{V}\) have in common.
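The reduction of Lemma 1 can also be observed numerically. In the following sketch (an illustrative example of our own; every vertex lies in exactly two hyperedges, so the regularity assumption of the lemma holds) the weights \(w(v_{i},v_{j})\) are assembled explicitly and \(\overline{\Delta_{v}}f-f\) is compared against the resulting graph Laplacian:

```python
import numpy as np

# Regular toy example: every vertex is contained in exactly two hyperedges.
V = [0, 1, 2, 3]
E = [{0, 1, 2}, {0, 2, 3}, {1, 3}]
deg = [sum(i in e for e in E) for i in V]  # = [2, 2, 2, 2]

def w(i, j):
    """Arc weight of Lemma 1: shared hyperedges, weighted by their size."""
    return sum(1.0 / len(e) for e in E if i in e and j in e) / deg[i]

def avg(f):
    """Vertex averaging operator, Eq. (33)."""
    out = np.zeros(len(V))
    for e in E:
        m = sum(f[j] for j in e) / len(e)
        for i in e:
            out[i] += m / deg[i]
    return out

f = np.array([0.3, -1.2, 0.7, 2.0])
lap = np.array([sum(w(i, j) * (f[j] - f[i]) for j in V) for i in V])
assert np.allclose(avg(f) - f, lap)  # (avg - I) f equals the graph Laplacian
```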
## 4 Scale spaces based on hypergraph \(p\)-Laplacians In the following we discuss PDEs based on the family of \(p\)-Laplace and averaging operators on hypergraphs introduced in Section 3, which can be used for modeling information flow in social networks with oriented hypergraphs as well as performing image processing based on both oriented and unoriented hypergraphs. ### Modelling information flow using oriented hypergraphs For analyzing information flow on social networks with oriented hypergraphs, we consider two different PDE systems modelling diffusion processes. We start by investigating the scale space for the \(p\)-Laplacian operator, i.e. the gradient flow of the \(p\)-Laplacian energy: \[\frac{\partial f}{\partial t}(v_{i},t) = \Delta_{v}^{p}f(v_{i},t),\qquad v_{i}\in\mathcal{V},t\in(0,\infty) \tag{37}\] \[f(v_{i},0) = f_{0}(v_{i}),\qquad\qquad v_{i}\in\mathcal{V}.\] Solving (37) for every time step \(t\in(0,\infty)\) amounts to computing the information flow between vertices of the oriented hypergraph along the respective hyperarcs. Note that although there are no explicit boundaries in oriented hypergraphs, we can interpret the above problem as the homogeneous Neumann boundary problem. Due to the properties of the proposed family of hypergraph \(p\)-Laplace operators it is easy to see that the mean value of \(f\) is conserved in time and we can naturally interpret the evolution as a scale space towards coarser and coarser scales on the graph. Moreover, the general asymptotics of gradient flows for \(p\)-homogeneous energies (cf. [2]) yields that \(f\to\overline{f}\) as \(t\to\infty\), with \(\overline{f}\) being the mean value of \(f_{0}\). In addition, the rescaled quantity \(g=\frac{f-\overline{f}}{\|f-\overline{f}\|}\) converges to a multiple of a second eigenfunction for generic initial values. Similar to the Neumann boundary problem, we can also introduce a Dirichlet type problem, where the Dirichlet boundary \(\partial\mathcal{V}\subset\mathcal{V}\) denotes a subset of the vertex set \(\mathcal{V}\) of the oriented hypergraph, for which we introduce boundary values and keep them fixed over time. The corresponding stationary solution is not necessarily constant: \[\begin{split}\Delta_{v}^{p}f(v_{i})&=\ 0,\ \ \ \ \ \ \ \ v_{i}\in\hat{\mathcal{V}},\\ f(v_{j})&=\ F_{j},\ \ \ \ \ \ v_{j}\in\partial\mathcal{V}.\end{split} \tag{38}\] Then, we aim at solving the \(p\)-Laplace equation on the complementary vertex set \(\hat{\mathcal{V}}:=\mathcal{V}\setminus\partial\mathcal{V}\) of the oriented hypergraph. Instead of solving (38) directly, we solve the parabolic PDE model (37) on the vertex set \(\hat{\mathcal{V}}\), while keeping the vertex function \(f\in\mathcal{H}(\mathcal{V})\) fixed on the boundary set \(\partial\mathcal{V}\). The reason for this approach is that any stationary solution of (37) on \(\hat{\mathcal{V}}\) with fixed boundary values is also a solution to the \(p\)-Laplace equation in (38). To solve the two proposed PDE models discussed above, we have to numerically solve the initial value problem in (37). For this sake we employ a forward-Euler time discretization with fixed time step size \(\tau>0\) and use the renormalized variable \(g\) to observe convergence to a nontrivial eigenfunction. This leads to the following explicit iteration scheme: \[f_{n+1}(v_{i})\ =\ f_{n}(v_{i})+\tau\cdot\Delta_{v}^{p}f_{n}(v_{i}).
\tag{39}\] ### Image processing using unoriented hypergraphs To perform image processing for grayscale images defined on regular grids, we consider a PDE system that can be interpreted as an initial value problem for the vertex averaging operator introduced in Definition 17. In particular, we are interested in solving the following initial value problem \[\begin{split}\overline{\Delta_{v}}f\left(v_{i},t\right)-f\left(v _{i},t\right)+\lambda\cdot\left(f_{0}\left(v_{i}\right)-f\left(v_{i},t\right) \right)&=\ 0,\ \ \ \ \ \ \ \ \ \ \ \ v_{i}\in\mathcal{V},t\in(0,\infty)\\ f\left(v_{i},0\right)&=\ f_{0}\left(v_{i}\right),\ \ \ \ \ \ v_{i}\in\mathcal{V}.\end{split} \tag{40}\] Note that in contrast to the initial value problem modeling opinion formation in social networks in (37), here we introduce an additional data fidelity term that penalizes strong deviations from the noisy image, represented by the initial vertex function \(f_{0}\in\mathcal{H}(\mathcal{V})\). The influence of this data fidelity term can be controlled by a fixed parameter \(\lambda>0\) that allows to realize a trade-off between smoothing the perturbed image pixels via the hypergraph vertex averaging operator and staying close to the initial image. In classical variational regularization this would correspond to the gradient flow of the least-squares fidelity augmented with a regularization energy scaled with regularization parameter \(\frac{1}{\lambda}\). However, since the averaging operator is not self-adjoint the corresponding term in (40) cannot arise in the gradient flow of an associated energy functional. Nonetheless, the diffusive nature of \(\overline{\Delta_{v}}f-f\) induces an interpretation as regularization albeit in nonvariational setting, similar e.g. to the inpainting model in [3]. Once again we use a forward-Euler time discretization with fixed time step size \(\tau>0\), which is chosen small enough to fulfill the CFL stability conditions. Following this approach we derive the following iterative scheme for image processing using the hypergraph vertex averaging operator: \[f_{n+1}\left(v_{i}\right)=f_{n}\left(v_{i}\right)+\tau\cdot\left(\overline{ \Delta_{v}}f_{n}(v_{i})-f_{n}\left(v_{i}\right)+\lambda\cdot\left(f_{0}\left(v _{i}\right)-f_{n}\left(v_{i}\right)\right)\right). \tag{41}\] ## 5 Numerical experiments In this section we present the results of our numerical experiments when using the hypergraph operators introduced in Section 3 for two different applications. In particular, we first discuss how the oriented hypergraph \(p\)-Laplacian operator can be used to model opinion formation in social networks. Furthermore, we apply the vertex averaging operator of unoriented hypergraphs for the task of image processing, and we provide results that can be used, both for image denoising as well as segmentation tasks. ### Opinion formation in social networks In the following we present the results of our numerical experiments in which we solve the two PDEs (37) and (38) by using the explicit forward-Euler discretization scheme until the relative change between two iterations is smaller than \(\epsilon:=10^{-6}\). We choose \(\tau\) in (39) small enough to fulfill the CFL condition for numerical stability. This leads to very small time steps for the iteration scheme in the cases \(1\leq p<2\). For our numerical experiments we use the Twitter data set provided by Stanford University [11]. 
It consists of \(41,652,230\) vertices (users) and \(1,468,364,884\) arcs (oriented pairwise connections indicating that one person follows another). Due to the size of the data set, we restrict our numerical experiments to a comparatively small sub-network within the first \(1,000,000\) lines of the Twitter input data. We chose a sub-network of individuals such that all users are directly or indirectly linked to each other to avoid cliques of individuals, which are not connected to the rest of the sub-network and thus also not influenced by users outside their small circle. Moreover, we ensure that each sub-network includes an opinion leader with a large number of followers in the sub-network. Therefore, we can observe how one influential user impacts the opinion of the rest. In order to generate hyperarcs from the given arcs, we put one Twitter user as a singleton output vertex set and summarize all followers of this user as the set of input vertices. This especially allows highlighting the effect of opinion leaders, for instance famous people with a large group of followers on Twitter. We simulate the opinion of all individuals in the social network towards an imaginary hypothesis by a vertex function \(f\colon\mathcal{V}\times[0,\infty)\rightarrow[-1,1]\), which can be interpreted as follows: if an individual believes the hypothesis the corresponding value of the vertex function is positive (with \(1\) being the strongest level of trust), while for an individual that opposes the hypothesis the corresponding value of the vertex function is negative (with \(-1\) being the strongest level of distrust). For the **boundary value problem** (38) we initialize the opinion of all individuals in a social network by setting the vertex function \(f\) to zero, which can be interpreted as having no opinion towards an imaginary hypothesis. We now simulate information flow in the social network by giving two opinion leaders (i.e., vertices with many followers) two opposing opinions towards this hypothesis and setting the respective values of the vertex function to \(-1\) and \(1\). We keep these values fixed as a form of Dirichlet boundary conditions. By using the explicit forward-Euler discretization scheme to solve the boundary value problem for \(p=2\), the opinion of the two dedicated individuals is propagated in the social network as can be seen in Figure 1. We initialize the vertex function equally for the oriented normal graph (top row) and the oriented hypergraph (bottom row) and calculate the diffusion process until convergence. As can be seen, in both cases the opinion is propagated in the social network based on the underlying network topology and the final state is equivalent for both the normal graph and the hypergraph experiment. However, as can be observed, information within the hypergraph is distributed at a higher rate compared to the normal graph and thus converges faster. This is due to the fact that opinion leaders in a normal graph have a less direct impact on their followers compared to the hypergraph case, where the follower's belief \(f(v_{i})\) is scaled with \(\frac{1}{|a_{q}^{in}|}\), where \(|a_{q}^{in}|\) is the number of followers of the individual user. This can be seen in (19) since in our modeling for this application the parameter \(a_{q}^{out}\) is set to 1. For the **initial value problem** (37) we choose \(p=1\) and a sufficiently small time step size \(\tau>0\) to guarantee stability of the corresponding iteration scheme (39).
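As a compact reference for these experiments, the sketch below spells out the explicit iteration (39) together with the stopping rule based on the relative change between iterates. The oriented hypergraph \(p\)-Laplacian itself is passed in as a callable, since its assembly follows the operators of Section 3; the stand-in operator at the end (a plain cycle-graph Laplacian) and the step size are only there to make the snippet executable and are not our experimental configuration:

```python
import numpy as np

def evolve(f0, p_laplacian, tau, eps=1e-6, max_iter=20000):
    """Forward-Euler scheme (39): f_{n+1} = f_n + tau * Delta_v^p f_n.

    `tau` must be chosen small enough for CFL stability; iteration stops
    once the relative change between two iterates drops below `eps`.
    """
    f = f0.copy()
    for _ in range(max_iter):
        f_new = f + tau * p_laplacian(f)
        if np.linalg.norm(f_new - f) <= eps * np.linalg.norm(f):
            return f_new
        f = f_new
    return f

def toy_laplacian(f):
    # Stand-in operator (graph Laplacian of a cycle), for demonstration only.
    return np.roll(f, 1) - 2.0 * f + np.roll(f, -1)

# Random initial opinions in [-1, 1], shifted to mean zero and normalized,
# as described below.
rng = np.random.default_rng(42)
f0 = rng.uniform(-1.0, 1.0, size=100)
f0 = (f0 - f0.mean()) / np.linalg.norm(f0 - f0.mean())
f_final = evolve(f0, toy_laplacian, tau=0.1)
```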
We initialize each individual's opinion \(f_{0}\left(v_{i}\right)\) randomly with a uniform distribution in the interval \([-1,1]\). Additionally, we make sure that the vertex function \(f\) is initialized with average 0 and normalized. As can be observed in Figure 2, the information flow in the social network converges to a second eigenfunction of both the graph \(p\)-Laplacian (top row) and the hypergraph \(p\)-Laplacian (bottom row). For both cases we thresholded at 0 after 16,000 iterations to induce a spectral clustering of the opposing opinions and hence separate the social network into smaller communities based on the topology of the network (i.e., the relationship of following an individual). The resulting second eigenfunctions differ significantly with respect to the underlying topology of the oriented normal graph and the oriented hypergraph. This yields potential for further analysis and experiments in other applications, e.g., segmentation of images via spectral clustering. Figure 1: Solution of the boundary value problem of graph (top) and hypergraph (bottom) \(p\)-Laplace operator for \(p=2\). ### Local and nonlocal image processing In the following, we discuss how the proposed hypergraph differential operators can be applied to image processing tasks. By modeling pixels of an image with the help of normal graphs or hypergraphs instead of a regular grid, it is possible to not only represent local relationships of adjacent pixels, but also nonlocal relationships based on the image's content. For example, one could link image pixels that are relatively far from each other in the image, but share a similar image texture in their respective neighborhood. Given an image \(\tilde{I}\in\mathbb{R}^{n\times m}\) of height \(n\in\mathbb{N}\) and width \(m\in\mathbb{N}\) perturbed by a normally distributed noise signal \(\nu\sim\mathcal{N}(0,\sigma^{2})\), a typical task in image processing is to recover a noise-free image \(I\in\mathbb{R}^{n\times m}\) from \[\tilde{I}\ =\ I+\nu. \tag{42}\] This task can be interpreted as an inverse problem known as _image denoising_. In our numerical experiments we restrict ourselves to grayscale images for the sake of simplicity. To perform image denoising we first model the relationship between the image pixels with an unoriented hypergraph. In particular, we represent each image pixel as a vertex \(v_{i}\in\mathcal{V}\) of the unoriented hypergraph and interpret the pixel grayscale intensities as the values of the vertex function \(f\in\mathcal{H}(\mathcal{V})\) with \(f\colon\mathcal{V}\to[0,255]\). Here, \(0\) represents the lowest signal intensity (i.e., black pixels) and \(255\) represents the highest signal intensity (i.e., white pixels). We construct the hyperedges of the hypergraph as described in detail below. For our numerical experiments we chose a grayscale image \(I\) of size \(225\times 400\) pixels of a flower field that contains image features at different scales. We added random Gaussian noise \(\nu\) with mean \(\mu=0\) and variance \(\sigma^{2}=150\) to every image pixel to generate an artificially perturbed image \(\tilde{I}\). Both the unperturbed image \(I\) and the noisy variant \(\tilde{I}\) are illustrated in Figure 3. We construct an initial vertex function \(f_{0}\in\mathcal{H}(\mathcal{V})\) from the noisy image \(\tilde{I}\). Figure 2: Second eigenfunction of graph (top) and hypergraph (bottom) \(p\)-Laplace operator for \(p=1\) with thresholding at \(0\) after \(16,000\) iterations.
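Both denoising experiments below run the explicit scheme (41). For reference, a minimal sketch of this iteration (the averaging operator of Definition 17 enters as a callable; the stand-in operator and the default parameter values are placeholders, not the exact experimental configuration):

```python
import numpy as np

def denoise(f0, avg, lam=0.5, tau=0.1, n_iter=100):
    """Explicit scheme (41) for the fidelity-regularized flow (40)."""
    f = f0.copy()
    for _ in range(n_iter):
        f = f + tau * (avg(f) - f + lam * (f0 - f))
    return f

def global_mean(f):
    # Stand-in averaging operator: exact for the hypergraph whose single
    # hyperedge contains all vertices (every value becomes the mean).
    return np.full_like(f, f.mean())

print(denoise(np.array([1.0, 5.0, 1.0]), global_mean))
```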
We performed two different experiments for image denoising using the introduced unoriented hypergraph vertex averaging operator \(\overline{\Delta_{v}}\) from Definition 17. In the first experiment we perform **local image processing** by constructing a hyperedge of the unoriented hypergraph from the vertices that model the direct four pixel neighbors of any image pixel. This corresponds to traditional image processing methods on regular grids as performed, e.g., in [3]. For boundary pixels of the image we use an analogue of Neumann zero boundary conditions, i.e., we assume the image is extended constantly. This results in a total of \(225\cdot 400=90000\) hyperedges for the unoriented hypergraph, where each hyperedge \(e_{q}\in\mathcal{E}_{H}\) can be directly associated with the corresponding image pixel. In the first case, we compare the results of using the iterative scheme (41) with and without data fidelity term, performing \(100\) iterations each in our numerical experiments. To investigate the influence of the data fidelity term, we first fixed the time step size as \(\tau:=0.1\) and varied the regularization parameter \(\lambda>0\). The left column of Figure 4 shows that with decreasing value of \(\lambda>0\), the smoothing effect of the vertex averaging operator increases, leading to less noisy images. On the other hand, the edges of image features get more and more blurry as can be expected in this case. In another setting we remove the data fidelity term by setting \(\lambda:=0\) and hence we investigate the corresponding evolution equation of the vertex averaging operator. Here, we varied the time step size parameter \(\tau>0\) to compare different results for a fixed number of iterations. As can be seen in the right column of Figure 4, we recover the scale spaces of the operator \(\overline{\Delta_{v}}\). With increasing time step size \(\tau>0\), we observe more and more coarse image features induced by the local averaging effect of the operator. In the second numerical experiment we perform **nonlocal image processing** by constructing the hyperedges of the unoriented hypergraph not from the local neighborhood of an image pixel, but by considering the pixel intensities of the image. By this we are able to assemble vertices in hyperedges that correspond to image pixels which can be anywhere in the image and hence we gain a nonlocal vertex averaging operator. In particular, we construct for each vertex \(v_{i}\in\mathcal{V}\) a hyperedge containing all vertices for which the value of the vertex function \(f\) is similar. For this we choose a threshold \(\epsilon>0\) and add all vertices \(v_{j}\in\mathcal{V}\) to the hyperedge induced by the vertex \(v_{i}\in\mathcal{V}\) for which the distance is small enough, i.e., \[|f(v_{i})-f(v_{j})|<\epsilon. \tag{43}\] Figure 3: Illustration of the original grayscale image \(I\) (left) and the artificially perturbed image \(\tilde{I}\) (right) used for image denoising. We chose \(\epsilon:=40\) for the experiments presented in this setting. With this approach we nonlocally group image pixels with similar grayscale intensities. Since pixels with equal grayscale intensity lead to similar or even equal hyperedges, we treat hyperedges uniquely, i.e., without any duplicates in \(\mathcal{E}_{H}\). As in the case of local image processing described above, we compared the results of using the iterative scheme (41) with and without data fidelity term, for which we performed 100 iterations in our numerical experiments.
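The nonlocal hyperedge construction of (43), including the removal of duplicate hyperedges, can be sketched as follows (the four intensity values in the example are purely illustrative):

```python
import numpy as np

def nonlocal_hyperedges(f, eps=40.0):
    """One hyperedge per vertex via the intensity criterion (43); duplicate
    hyperedges are discarded by the set semantics of frozensets."""
    f = np.asarray(f, dtype=float)
    edges = {frozenset(np.flatnonzero(np.abs(f - f[i]) < eps).tolist())
             for i in range(len(f))}
    return [set(e) for e in edges]

# Four pixels with intensities 0, 10, 100 and 105 yield two hyperedges:
# {0, 1} and {2, 3}.
print(nonlocal_hyperedges([0, 10, 100, 105]))
```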
We started our experiments by fixing the time step size as \(\tau:=1\) and varying the regularization parameter \(\lambda>0\) in order to investigate the influence of the data fidelity term. The left column in Figure 5 shows that with decreasing value of \(\lambda>0\), the results deviate more and more from the initial image. Furthermore, one can observe that the amount of different pixel intensities decreases more and more until the resulting image shows only one grayscale intensity in the last row. At the same time, edges of image features stay sharp as there is no averaging operation across the boundaries of image regions with strongly varying grayscale intensities. Secondly, we again removed the influence of the data fidelity term entirely by setting \(\lambda:=0\). By varying the time step size parameter \(\tau>0\), we can compare different scales of the resulting scale space for a fixed number of iterations of the nonlocal hypergraph vertex averaging operator \(\overline{\Delta_{v}}\). As in the experiment with data fidelity, we observe that the amount of different pixel intensities decreases rapidly with increasing time step size \(\tau>0\), leading to a grouping of image pixels into image regions with similar grayscale intensities, until eventually all image pixels have the same grayscale value in the last row. This expected behaviour can be leveraged for other image processing tasks in which grouping of image pixels is needed, e.g., in _image compression_ or _segmentation_. ## 6 Conclusion In this paper we derived various variants of differential operators and a family of \(p\)-Laplacian operators on hypergraphs, which generalize known graph and hypergraph operators from literature. In particular, we considered gradient, adjoint and \(p\)-Laplacian definitions both in the case of oriented as well as unoriented hypergraphs. The resulting operators on oriented hypergraphs and the associated scale space flows can be employed for modelling information flows or performing spectral analysis in social networks, where we can directly incorporate group relationships via hyperarcs. Moreover, the proposed averaging operators and \(p\)-Laplacians for unoriented hypergraphs enable performing local and nonlocal image processing with results that can be used for segmentation tasks. Preliminary results indicate a great potential for future research. Interesting further questions, in addition to a more detailed study of spectral clustering, are e.g. the relation between hypergraph gradients and higher-order methods for partial differential equations or the definition of distance functions on hypergraphs via eigenfunctions of the infinity-Laplacian. Another promising direction is to investigate the influence of non-constant weight functions of the hypergraphs used in our numerical experiments. In particular, we propose to define the weights of hyperedges based on the variance of the vertex function values for the vertices included in the respective hyperedge. This should further improve the results of both local and nonlocal image processing. Due to the overarching success of learning-based methods in many areas of image processing and data analysis it seems obvious that further developments in this directions are in place for hypergraph structures. We believe that our work can provide a foundation for hypergraph neural networks generalizing the recently celebrated graph neural network structures with unforeseen opportunities.
2309.16110
Learning Effective NeRFs and SDFs Representations with 3D Generative Adversarial Networks for 3D Object Generation: Technical Report for ICCV 2023 OmniObject3D Challenge
In this technical report, we present a solution for 3D object generation of the ICCV 2023 OmniObject3D Challenge. In recent years, 3D object generation has made great progress and achieved promising results, but it remains a challenging task due to the difficulty of generating complex, textured and high-fidelity results. To resolve this problem, we study learning effective NeRFs and SDFs representations with 3D Generative Adversarial Networks (GANs) for 3D object generation. Specifically, inspired by recent works, we use the efficient geometry-aware 3D GANs as the backbone incorporating label embedding and color mapping, which enables training the model on different taxonomies simultaneously. Then, through a decoder, we aggregate the resulting features to generate Neural Radiance Fields (NeRFs) based representations for rendering high-fidelity synthetic images. Meanwhile, we optimize Signed Distance Functions (SDFs) to effectively represent objects with 3D meshes. Besides, we observe that this model can be effectively trained with only a few images of each object from a variety of classes, instead of using a great number of images per object or training one model per class. With this pipeline, we can optimize an effective model for 3D object generation. This solution is one of the final top-3-place solutions in the ICCV 2023 OmniObject3D Challenge.
Zheyuan Yang, Yibo Liu, Guile Wu, Tongtong Cao, Yuan Ren, Yang Liu, Bingbing Liu
2023-09-28T02:23:46Z
http://arxiv.org/abs/2309.16110v1
# Learning Effective NeRFs and SDFs Representations with 3D Generative Adversarial Networks for 3D Object Generation: Technical Report for ICCV 2023 OmniObject3D Challenge ###### Abstract In this technical report, we present a solution for 3D object generation of the ICCV 2023 OmniObject3D Challenge. In recent years, 3D object generation has made great progress and achieved promising results, but it remains a challenging task due to the difficulty of generating complex, textured and high-fidelity results. To resolve this problem, we study learning effective NeRFs and SDFs representations with 3D Generative Adversarial Networks (GANs) for 3D object generation. Specifically, inspired by recent works, we use the efficient geometry-aware 3D GANs as the backbone incorporating label embedding and color mapping, which enables training the model on different taxonomies simultaneously. Then, through a decoder, we aggregate the resulting features to generate Neural Radiance Fields (NeRFs) based representations for rendering high-fidelity synthetic images. Meanwhile, we optimize Signed Distance Functions (SDFs) to effectively represent objects with 3D meshes. Besides, we observe that this model can be effectively trained with only a few images of each object from a variety of classes, instead of using a great number of images per object or training one model per class. With this pipeline, we can optimize an effective model for 3D object generation. This solution is one of the final top-3-place solutions in the ICCV 2023 OmniObject3D Challenge. ## 1 Introduction 3D object generation aims at generating meaningful 3D surfaces and synthesized images of 3D objects given random inputs [23, 7]. Inspired by the great success of 2D object generation [9, 4], there have been many studies [12, 5] extending 2D methods to 3D generation. Recent 3D generation approaches focus more on effective 3D representation learning [18, 15] and advanced generation strategies [10, 2, 21]. Despite the great progress and promising results in recent years, 3D object generation is still a challenging task due to the difficulty of generating complex, textured and high-fidelity results. To devise a solution for 3D object generation in the ICCV 2023 OmniObject3D Challenge, we simultaneously consider effective 3D representation learning and advanced generation strategies. In terms of 3D representation learning, textured mesh [3] and Neural Radiance Fields (NeRFs) [14] hold great potential to effectively encode 3D object features. In particular, textured mesh representations can support individual texture map generation, which allows us to easily replace surface textures, while NeRFs-based representations excel at rendering high-fidelity images. Furthermore, when considering effective shape representation, there are a variety of options, ranging from explicit representations like point-based [20], mesh-based [1] and voxel-based [22] approaches to implicit ones such as Signed Distance Functions (SDFs) [17, 11] and occupancy functions [13]. In the light of these cutting-edge techniques, we mainly focus on exploring effective NeRFs representations for 3D object generation while still optimizing SDFs representations to effectively represent objects with 3D meshes. Moving on to advanced generation strategies, there are usually different options, such as latent generation [21] and direct generation [16].
In particular, latent generation methods aim to generate the representation of the desired output in the latent space and decode the latent code back to the 3D space, while direct generation focuses on directly generating the desired 3D representation without utilizing the latent space. Besides, from the perspective of the generation process, generation strategies can be categorized into Generative Adversarial Networks (GANs) [10, 2], Variational AutoEncoder (VAE) [19], flow-based [6], Diffusion Models (DMs) [21], _etc_. In our solution, we employ the efficient geometry-aware 3D GANs [2] as the backbone for 3D object generation. The overall framework of our solution is depicted in Fig. 1. Specifically, we aim to learn effective NeRFs and SDFs representations with 3D Generative Adversarial Networks (GANs) for 3D object generation. Inspired by the recent success of efficient geometry-aware 3D GANs, _i.e_., EG3D [2, 18], we employ it as the backbone and incorporate label embedding and color mapping [18] to enable model training on different taxonomies simultaneously. Next, we generate NeRFs-based representations for rendering high-fidelity synthetic images and SDFs to effectively represent objects with 3D meshes through a decoder. In addition, we empirically find that we can effectively train this model with only a few images per object from various classes, rather than training with a great number of images per object or optimizing one model per class. With this pipeline, we can optimize an effective model for 3D object generation. This solution is one of the final top-3-place solutions in the ICCV 2023 OmniObject3D Challenge. ## 2 Technical Solution As aforementioned, to realize 3D object generation for the ICCV 2023 OmniObject3D Challenge, we explore 3D GANs to learn effective NeRFs representations for synthetic images and SDFs representations for 3D shapes. Motivation of Using GANs instead of DMs. Although diffusion models have recently gained increasing popularity in 3D object generation, most existing works [16, 21] have only been verified to work under specific category conditions (e.g., chairs [16] or avatars [21]). Besides, since the OmniObject3D dataset [23] is a collection of objects from hundreds of taxonomies, following the pipeline of existing DMs would necessitate the use of hundreds of pre-trained models, which is impractical. Hence, we employ a method that enables simultaneous training on all the taxonomies in the OmniObject3D dataset. Moreover, given the success of StyleGAN2 [10] in representing NeRFs in latent space, we have chosen latent generation for our pipeline. Solution Pipeline. The framework of our solution is depicted in Fig. 1. Overall, our solution adopts a GAN-based generation process. Specifically, with EG3D [2] as the backbone, firstly, we generate a one-dimensional latent code, \(\mathbf{z}\) (\(1\times 512\)), which is obtained by randomly sampling 512 values from a standard normal distribution. Meanwhile, we randomly generate an integer, indicating the object label, within the range of the total category number of OmniObject3D. The integer is then transformed into a label embedding by an embedding layer. This enables the model to be trained on different taxonomies simultaneously. Next, we forward the label embedding and latent code \(\mathbf{z}\) into a mapping network and generate an intermediate latent code \(\mathbf{w}\) (\(1\times 15\times 512\)) with promoted dimensions. A shape-level sketch of this sampling step is given below.
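In the following sketch, the embedding table and the mapping network are random placeholders standing in for the trained modules, so only the tensor shapes (taken from the description above and from Fig. 1) are meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 216  # taxonomies in the OmniObject3D dataset

# Latent code z: 512 values sampled from a standard normal distribution.
z = rng.standard_normal((1, 512))

# Random object label, turned into a label embedding by an embedding layer
# (a random table stands in for the learned layer here).
label = rng.integers(num_classes)
embed_table = rng.standard_normal((num_classes, 512))
c = embed_table[label][None, :]

def mapping_network(z, c):
    # Placeholder for the trained mapping network; only the output shape matters.
    return rng.standard_normal((1, 15, 512))

w = mapping_network(z, c)              # intermediate latent code, 1 x 15 x 512
w1, w2 = w[:, :14, :], w[:, 14:, :]    # inputs to G1 (tri-plane) and G2 (color)
```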
Here, the first 14 dimensions of \(\mathbf{w}\) (\(\mathbf{w}_{1}\) in the figure) are used as the input to the object generator (\(\mathbf{G}_{1}\) in the figure) while the last dimension of \(\mathbf{w}\) (\(\mathbf{w}_{2}\) in the figure) is employed as the input to the color generator (\(\mathbf{G}_{2}\) in the figure). \(\mathbf{G}_{1}\) and \(\mathbf{G}_{2}\) generate the tri-plane representation (\(3\times 32\times 64\times 64\)) and the color tensor (\(3\times 10\)), respectively. Following [18], we use the color generator which allows convenient texture conversion. After that, we determine queries, \(\mathbf{Q}\), based on the randomly generated camera pose. The tri-plane representation is then taken as keys, \(\mathbf{K}\), and transmitted into the tri-plane decoder together with \(\mathbf{Q}\). Then, we use a decoder to generate the SDF of the object, which can be sampled to create a 3D mesh. This decoder also produces a feature as the input to a view direction mapper, from which the output is multiplied by the color tensor, \(\mathbf{V}\) (the values), in the attention module. Finally, the attention module produces the 2D synthesis image, and a discriminator [2], which is only active during training, is used for adversarial training. Figure 1: An overview of the framework of our solution for 3D object generation. Model Training. Since we are simultaneously optimizing NeRFs and SDFs representations, we ensure the generator can initially generate a unit sphere by initializing our model with SDF-pretraining as in [18]. Then, the generator is optimized using the Adam optimizer with a learning rate of 0.0025, while the discriminator [2] uses a learning rate of 0.002. The batch size is set to 32 and the model is trained for 300,000 iterations. Additionally, we also use adaptive discriminator augmentation [8] to improve model generalization. Overall, the model training objective is formulated as: \[\alpha_{0}\mathcal{L}_{path}+\alpha_{1}\mathcal{L}_{e}+\alpha_{2}\mathcal{L}_{v}+\alpha_{3}\mathcal{L}_{sdf}, \tag{1}\] where \(\mathcal{L}_{path}\) denotes the path length regularization loss [10], \(\mathcal{L}_{e}\) and \(\mathcal{L}_{v}\) denote the entropy loss and total variation loss [10], respectively, \(\mathcal{L}_{sdf}\) represents the SDF eikonal loss [18], and \(\alpha_{i}\) are weighting parameters. We set \(\alpha_{0}{=}2\), \(\alpha_{1}{=}0.05\), \(\alpha_{2}{=}0.5\) and \(\alpha_{3}{=}0.1\). ## 3 Experiments Dataset. We train our model from scratch for 3D object generation on the training dataset provided by the ICCV 2023 OmniObject3D Challenge. The complete OmniObject3D dataset [23] contains 216 classes from a wide range of daily categories and approximately 6k objects in total, where each object has 100 images in different views. Efficient Model Training. Instead of training one model for each class, we train a single model with various types of objects. The model takes each image along with its camera pose, focal length and object type. Besides, we empirically find that we can effectively train this model with only a few images per object from various classes, so we select the first 8 images of each object for model training. This significantly improves the model training efficiency. Results. Fig. 2 shows some visualization results of synthesis images generated by our solution on OmniObject3D. It can be seen that our approach is capable of generating high-fidelity images from different views for 3D object generation. Besides, Fig.
3 shows results of our solution without the additional label embedding. Comparing Fig. 2 with Fig. 3, we can see that without the additional label embedding, the quality of the synthesis images is significantly worse than that of the original solution. Furthermore, Table 1 shows that using the additional label embedding can improve the solution by 4.86 in terms of FID. This verifies that the additional label embedding is beneficial for improving model generalization. ## 4 Conclusion In this technical report, we present a solution for 3D object generation of the ICCV 2023 OmniObject3D Challenge. The key insight of our solution is to learn effective NeRFs and SDFs representations with 3D generative adversarial networks for 3D object generation. Overall, this solution is one of the final top-3-place solutions in the ICCV 2023 OmniObject3D Challenge.
2309.06303
Transfer learning from Hermitian to non-Hermitian quantum many-body physics
Identifying phase boundaries of interacting systems is one of the key steps to understanding quantum many-body models. The development of various numerical and analytical methods has allowed exploring the phase diagrams of many Hermitian interacting systems. However, numerical challenges and scarcity of analytical solutions hinder obtaining phase boundaries in non-Hermitian many-body models. Recent machine learning methods have emerged as a potential strategy to learn phase boundaries from various observables without having access to the full many-body wavefunction. Here, we show that a machine learning methodology trained solely on Hermitian correlation functions allows identifying phase boundaries of non-Hermitian interacting models. These results demonstrate that Hermitian machine learning algorithms can be redeployed to non-Hermitian models without requiring further training to reveal non-Hermitian phase diagrams. Our findings establish transfer learning as a versatile strategy to leverage Hermitian physics to machine learning non-Hermitian phenomena.
Sharareh Sayyad, Jose L. Lado
2023-09-12T15:12:15Z
http://arxiv.org/abs/2309.06303v1
# Transfer learning from Hermitian to non-Hermitian quantum many-body physics ###### Abstract Identifying phase boundaries of interacting systems is one of the key steps to understanding quantum many-body models. The development of various numerical and analytical methods has allowed exploring the phase diagrams of many Hermitian interacting systems. However, numerical challenges and scarcity of analytical solutions hinder obtaining phase boundaries in non-Hermitian many-body models. Recent machine learning methods have emerged as a potential strategy to learn phase boundaries from various observables without having access to the full many-body wavefunction. Here, we show that a machine learning methodology trained solely on Hermitian correlation functions allows identifying phase boundaries of non-Hermitian interacting models. These results demonstrate that Hermitian machine learning algorithms can be redeployed to non-Hermitian models without requiring further training to reveal non-Hermitian phase diagrams. Our findings establish transfer learning as a versatile strategy to leverage Hermitian physics to machine learning non-Hermitian phenomena. _Introduction._ The interplay between various degrees of freedom in many-body systems results in the emergence of novel phases of matter, including superconducting [1; 2; 3; 4; 5; 6], Mott insulating [7; 8; 9; 10; 11], nematic [12; 13; 14; 15; 16] and topological [17; 18; 19; 20; 21; 22] phases. Due to their inherent complexity, these systems are often studied computationally, using, e.g., quantum Monte Carlo methods [23; 24; 25] and tensor network approaches [26; 27; 28]. In recent years, machine learning methods [29; 30] have provided a complementary strategy to rationalize phases of matter, often in combination with conventional quantum many-body methods. Demonstrations of the roles played by machine learning methods in tackling many-body problems include characterizing different phases of matter [31; 32; 33; 34; 35; 36; 37; 38; 39; 40], deep learning of quantum dynamics [41; 42; 43; 44], obtaining many-body wave functions [45; 46; 47; 48; 49], and optimizing the performance of computational simulations [50]. Exploring correlated physics in open quantum systems attracts great interest mainly because of the systematic treatment of loss and gain in these systems, which quantitatively reproduces experimental observations [51; 52; 53; 54; 55]. In recent years, along with brute force studies of open quantum systems, understanding their effective descriptions based on non-Hermitian physics has gained momentum [56; 57; 58; 59; 60; 61]. The studies of non-Hermitian models have initially focused on single-particle models [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78], and their extension to the fully interacting realm has also gained attention recently [79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95]. Aside from these case studies, unraveling the physics of interacting non-Hermitian systems remains an open challenge due to the scarcity of exactly solvable models, and as conventional (Hermitian) many-body methods cannot be directly applied to the non-Hermitian limit. Specifically, obtaining the phase boundaries, understanding the stability of certain phases against non-Hermiticity, and characterizing exotic phases with no Hermitian counterparts remain in general open problems.
Similar to the realm of Hermitian physics, machine learning methods, and specifically supervised [96; 97; 98; 99], unsupervised [99; 100], and graph-informed methods [101], have allowed identifying various phases of non-Hermitian non-interacting systems. In these methodologies, the inputs to train learning models are collected from non-Hermitian noninteracting systems and are used to characterize non-Hermitian phase diagrams. As computational methods for Hermitian interacting models are numerically less demanding and more stable than their non-Hermitian counterparts, learning phase diagrams of non-Hermitian many-body systems from Hermitian correlated models would open up a promising strategy to leverage many-body methods developed for interacting Hermitian models. Figure 1: **Non-Hermitian transfer learning:** Schematic illustration of the transfer learning methodology from Hermitian models to non-Hermitian physics. As an input, for each point of the phase diagram of the Hermitian model, short-range two-point (solid lines) and four-point (dashed lines) correlation functions are computed (Eqs. (4) and (5)). The generated correlators for Hermitian systems are used to train a machine learning architecture, which in turn allows predicting the phase diagram from short-range correlators of the non-Hermitian model. The machine learning methodology allows extracting quasi-degeneracies and correlation entropies from the short-range correlators of the non-Hermitian model. In this manuscript, we show that machine learning methods purely trained on Hermitian many-body data can predict interacting regimes in non-Hermitian interacting models. For concreteness, we explore the different regimes of the non-Hermitian dimerized Kitaev-Hubbard chain using machine learning techniques schematically shown in Fig. 1. Here, we collect various correlation functions, orders of quasi-degeneracies, and correlation entropies at different parameter regimes of the Hermitian limit of our model. Using this input, we demonstrate that non-Hermitian regime crossovers can be identified using a machine-learning methodology trained on short-range Hermitian correlation functions. The outcomes of these supervised learning schemes are degrees of quasi-degeneracies and correlation entropies, which can characterize various regimes of the non-Hermitian model. Our findings reveal that employing correlation entropy as a classifier allows characterizing all regimes of the system. Our machine-learning approach reliably learns various regimes that share similarities with the Hermitian model. Our method also successfully delineates the regime crossovers even when the correlation effect in the non-Hermitian interacting model deforms the Hermitian phase diagram. Non-Hermitian interacting model.We focus on an interacting non-Hermitian model whose phase boundaries can be solved exactly in the thermodynamic limit [91]. The non-Hermitian dimerized Kitaev-Hubbard Hamiltonian on a chain with length \(L\) is given by \[\mathcal{H}=-\sum_{j=1}^{L-1}\left[t_{j}\left(c_{j}^{\dagger}c_{j+1}+c_{j+1}^{\dagger}c_{j}\right)+\Delta_{j}\left(c_{j}^{\dagger}c_{j+1}^{\dagger}+c_{j+1}c_{j}\right)\right]+\sum_{j=1}^{L-1}(U_{j}-\mathrm{i}\delta_{j})\left(2n_{j}-1\right)\left(2n_{j+1}-1\right), \tag{1}\] where \(c_{j}^{\dagger}\) (\(c_{j}\)) is a creation (annihilation) operator for a spinless fermion at site \(j\) associated with the fermion density \(n_{j}=c_{j}^{\dagger}c_{j}\).
Here \(t_{j}\), \(\Delta_{j}\), and \(U_{j}-\mathrm{i}\delta_{j}\) denote, respectively, the real-valued dimerized hopping amplitude, the superconducting pairing amplitude, and the complex-valued Hubbard interaction strength. Considering the site-independent parameter \(\mathcal{O}\in\{t,\Delta,U,\delta\}\), \(\mathcal{O}_{j}\in\{t_{j},\Delta_{j},U_{j},\delta_{j}\}\) for \(1\leq j\leq L\) reads \(\mathcal{O}_{j}=\mathcal{O}(1-\eta)\) if \(j\,\mathrm{mod}\,2=0\) and \(\mathcal{O}_{j}=\mathcal{O}(1+\eta)\) if \(j\,\mathrm{mod}\,2=1\), where \(\eta\) is the real-valued dimerization parameter. Figure 3: **Non-Hermitian interacting model**: The regimes of the non-Hermitian many-body model with \(L=16\) on the \(U/t-\eta\) plane at \(\delta=0.5\). The results in (a) and (b) are calculated by exact diagonalization. The regimes in (c,d) are obtained using architectures trained by two-point correlations, whereas (e,f) are trained on both two-point and four-point correlation functions. The quasi-degeneracy in (c,e) is treated as a discrete classifier for \([\chi]\), while it is treated as a regression problem in (d,f). The boundaries in the thermodynamic limit are shown by cyan dashed lines and black dashed-dotted lines. It is observed that while two-point correlators fail to predict the non-Hermitian regimes in (c), the inclusion of four-point correlators recovers accurate regime crossovers (e). Figure 2: **Hermitian interacting model:** The phase diagram of the Hermitian many-body model with \(L=16\) on the \(U/t-\eta\) plane at \(\delta=0.0\). The results in (a) and (b) are calculated by exact diagonalization. Panels (c,d) use a machine learning architecture that uses solely two-point correlation functions as input. In contrast, panels (e,f) use an architecture trained on both two-point and four-point correlation functions. The quasi-degeneracy in (c,e) is treated as a discrete classifier for \([\chi]\), while it is treated as a regression problem in (d,f). The boundaries in the thermodynamic limit are shown by cyan dashed lines. The Hamiltonian in Eq. (1) is exactly solvable when \(\Delta=t\). At this parameter regime, the interacting model can be mapped to a quadratic fermionic model upon two successive Jordan-Wigner transformations and a spin rotation [91]. Through this procedure, one can show that the spectrum of the effective quadratic Hamiltonian undergoes gap closure upon setting \(\frac{U}{t}=\pm\sqrt{\left|\frac{\delta^{2}}{t^{2}}-\frac{(1\pm\eta)^{2}}{(1\mp\eta)^{2}}\right|}\), and \(\frac{U}{t}=\pm\frac{1\pm\eta}{1\mp\eta}.\) These relations ensure the closure of the real-line gaps and the appearance of zero degeneracies in the imaginary part of the spectrum, respectively. Note that these two equations coincide when the non-Hermiticity parameter vanishes, i.e., \(\delta=0\). As \(\mathcal{H}\) respects the charge conjugation symmetry, eigenvalues come in pairs such that the set of all energies satisfies \(\{\varepsilon\}=\{\varepsilon^{*}\}\). This implies that degeneracies of phases can be obtained merely from the vanishing real parts of the spectrum. In a finite system, finite-size effects will give rise to small splittings between states that are degenerate in the thermodynamic limit. For finite models, it is thus convenient to define the quasi-degeneracy \(\chi\) given by \[\chi=\sum_{\alpha}e^{-\lambda|\varepsilon_{\alpha}-\varepsilon_{0}|} \tag{2}\] with \(\varepsilon_{\alpha}\) being the \(\alpha\)th eigenvalue, and \(\varepsilon_{0}\) the ground-state energy [102].
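As a concrete illustration (not the authors' code), the quasi-degeneracy of Eq. (2) can be evaluated from a list of eigenvalues along the following lines; the value of the resolution parameter \(\lambda\) (discussed next) and the reduction to the real part of the spectrum are assumptions made for this sketch.

```python
import numpy as np

def quasi_degeneracy(energies, lam=10.0):
    """Eq. (2): chi = sum_alpha exp(-lam * |eps_alpha - eps_0|).

    Following the text, degeneracies are read off from the real part of
    the (possibly complex) spectrum; `lam` is an illustrative choice.
    """
    eps = np.sort(np.real(np.asarray(energies)))
    return float(np.sum(np.exp(-lam * np.abs(eps - eps[0]))))

# Two nearly degenerate low-lying states yield chi close to 2.
print(quasi_degeneracy([0.0, 1e-3, 0.8, 1.2]))  # ~ 1.99
```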
The parameter \(\lambda\) controls the energy resolution of the quasi-degeneracy, which in the limiting case \(\lim_{\lambda\to\infty}\lim_{L\to\infty}\chi\) becomes the thermodynamic degeneracy of the ground state [103]. We will focus our analysis on system sizes with \(L=16\), which are large enough to show different transition regimes that would converge to the different phases of the model in the thermodynamic limit. In addition to the quasi-degeneracy \(\chi\), we can characterize the phase boundaries using the electronic correlation entropy given by [104, 105, 106, 107, 34] \[\mathcal{C}_{\rm corr}=-\frac{1}{L}\sum_{j=1}^{L}s_{j}\log(s_{j}), \tag{3}\] where \(0\leq s_{j}\leq 1\) is the \(j\)th eigenvalue of the correlation matrix. The elements of the correlation matrix \(C^{\rm mat}\) are two-point correlation functions that read \(C^{\rm mat}_{ij}=|{\rm det}[\sum_{ll^{\prime}}^{[\chi]}\rho^{ll^{\prime}}_{ij}]|\) with \(\rho^{ll^{\prime}}_{ij}=\langle\Psi_{l}|c_{i}^{\dagger}c_{j}|\Psi_{l^{\prime}}\rangle\), where \(\Psi_{l}\) is the \(l\)th eigenstate on the ground state manifold, and \([\chi]\) is the closest integer to \(\chi\). The correlation entropy \(\mathcal{C}_{\rm corr}\) measures many-body entanglement and vanishes in systems described by Hartree-Fock product states [108, 109, 110, 111]. It is worth noting that while superconducting states can be represented as a product state in the Nambu basis, the previous definition of correlation entropy yields a finite value for superconducting states. Large values of \(\mathcal{C}_{\rm corr}\) in certain regions of the phase diagram imply that the system cannot be represented by a Hartree-Fock product state. Machine learning methodology.We now present the machine learning methodology to learn the different regimes of the interacting models, taking as target functions \(\chi\) and \(\mathcal{C}_{\rm corr}\). The input of our machine-learning algorithm corresponds to short-range many-body correlators in the form of two-point and four-point correlation functions given by \[d_{ij}=\langle c_{i}^{\dagger}c_{j}\rangle_{[\chi]},\quad f_{ij}=\langle c_{i}^{\dagger}c_{j}^{\dagger}\rangle_{[\chi]}, \tag{4}\] \[k_{ij}=\langle\kappa_{ij}\kappa_{ij}^{\dagger}\rangle_{[\chi]},\quad p_{ij}=\langle n_{i}n_{j}\rangle_{[\chi]}, \tag{5}\] where \(\kappa_{ij}=c_{i}c_{j}\) and \(\langle\hat{A}\rangle_{[\chi]}\equiv|{\rm det}[\sum_{ll^{\prime}}^{[\chi]}A_{ll^{\prime}}]|\) with \(A_{ll^{\prime}}=\langle\Psi_{l}|\hat{A}|\Psi_{l^{\prime}}\rangle\). Here, \(i,j\) run over four neighboring sites in the middle of the chain so that the algorithm relies solely on short-range correlation functions. These correlation functions are used to predict the quasi-degeneracy \(\chi\) and the correlation entropy \(\mathcal{C}_{\rm corr}\). We collect 20000 different interacting realizations on the \((U/t,\eta)\) plane, taking the non-Hermiticity parameter as \(\delta\in\{0,0.5\}\). To predict the quasi-degeneracy, we explore two strategies: the first is based on transforming the task into a classification problem for \([\chi]\), and the second treats it as a regression problem for \(\chi\). The prediction of \(\mathcal{C}_{\rm corr}\) is treated as a regression problem. The details of our NN architecture for each of these cases are presented in the Supplemental Materials (SM) [112].
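To make Eq. (3) concrete, the following minimal sketch (not the authors' code) evaluates the correlation entropy from a given correlation matrix; the Hermiticity assumption and the numerical clipping of eigenvalues are implementation choices made here.

```python
import numpy as np

def correlation_entropy(corr_matrix):
    """Eq. (3): C_corr = -(1/L) * sum_j s_j log(s_j), with s_j the
    eigenvalues of the L x L correlation matrix (assumed Hermitian here).
    Eigenvalues are clipped to avoid log(0); this is a numerical choice."""
    s = np.clip(np.linalg.eigvalsh(corr_matrix), 1e-12, 1.0)
    return float(-np.sum(s * np.log(s)) / corr_matrix.shape[0])

# For a product state the correlation matrix is a projector with
# eigenvalues 0 or 1, so the correlation entropy vanishes.
print(correlation_entropy(np.diag([1.0, 1.0, 0.0, 0.0])))  # ~ 0.0
```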
Results.We now present the predictions of different regimes based on various correlators for our Hermitian and non-Hermitian limits. Figure 4: **Correlation entropy predictions**: The regimes of the non-Hermitian many-body model with \(L=16\) on the \(U/t-\eta\) plane at \(\delta=0.0\) (a,c,e), \(0.5t\) (b,d,f). The trained models are obtained using the Hermitian datasets with \(\delta=0.0\). The color bar denotes \(\mathcal{C}_{\rm corr}\). The regimes in panels (c,d) are obtained using the machine learning model trained by two-point correlation functions, whereas (e,f) are trained on both two-point and four-point correlation functions. The boundaries in the thermodynamic limit given in the main text are shown by cyan dashed lines and black dashed-dotted lines. We start with the Hermitian phase diagram shown in Fig. 2(a,b). These panels present the numerical regimes obtained with the exact diagonalization method [113]. The finite-size effect pushes the regime crossovers to smaller \(\eta\) values relative to the phase boundaries in the thermodynamic limit, a feature that can be systematically analyzed using finite-size scaling [91]. Performing this scaling gives rise to the thermodynamic phase boundaries shown as the cyan lines [91]. The associated predicted regime crossovers using \(\chi\) are displayed in Fig. 2(c,d,e,f). Here, we compare the true (Fig. 2(a,b)) and predicted (Fig. 2(c,d,e,f)) phase diagrams obtained from training the NN model using the two-point correlation functions (Fig. 2(c,d)) or the combination of both two-point and four-point correlation functions (Fig. 2(e,f)). The values of \([\chi]\) in Fig. 2(a,c,e) are discrete, and the predicted results belong to different classes of \([\chi]\). In Fig. 2(b,d,f), a regression architecture is used to predict \(\chi\), and the predictions in Fig. 2(d,f) are obtained accordingly. We now examine how the regimes of the non-Hermitian interacting model can be deduced from short-range correlators using a model trained by the Hermitian dataset with \(\delta=0.0\), as shown in Fig. 3. Fig. 3(c,d,e,f) shows the predicted phase crossovers obtained by the algorithm trained with Hermitian data, which should be compared with the true outputs of the non-Hermitian problem shown in Fig. 3(a,b). Interestingly, the predicted results based on two-point correlation functions and a classification architecture for \([\chi]\) (Fig. 3(c)) display a large discrepancy. Such inaccurate predictions are eliminated by incorporating four-point correlation functions into the considered observables, as shown in Fig. 3(e). We further note that if we phrase the task as a regression problem, as shown in Fig. 3(b,d,f), the predicted phase boundaries based on training with two-point correlation functions are more reliable, as shown in Fig. 3(d). These results show that the quasi-degeneracy of the non-Hermitian model can be extracted from a model trained purely on Hermitian data. Aside from \(\chi\), the different regimes can be characterized using the correlation entropy \(\mathcal{C}_{\text{corr}}\) both in Hermitian \(\delta=0\) and non-Hermitian \(\delta=0.5t\) systems, as respectively shown in Fig. 4(a,b). Finite-size effects are reflected in the deviations from the cyan lines, which are inherited from the changes of \([\chi]\) that enter the definition of the correlation entropy. Interestingly, \(\mathcal{C}_{\text{corr}}\) exhibits further transitions, quantitatively described by the analytic phase boundaries.
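Schematically, the transfer protocol just described amounts to fitting a model on Hermitian (\(\delta=0\)) correlators and reusing it unchanged on non-Hermitian inputs. The sketch below uses placeholder arrays and a generic scikit-learn regressor purely for illustration; the actual architecture is described in the SM.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Placeholders standing in for flattened short-range correlators
# (Eqs. (4)-(5)) and their targets (e.g. the correlation entropy);
# the feature dimension 32 is arbitrary.
X_hermitian = rng.random((20000, 32))    # delta = 0 training data
y_hermitian = rng.random(20000)
X_non_hermitian = rng.random((500, 32))  # delta = 0.5 inputs

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
model.fit(X_hermitian, y_hermitian)            # train on Hermitian data only
predictions = model.predict(X_non_hermitian)   # transfer, no retraining
```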
The absence of a finite size effect in different regions of the parameter space, delineated by the black dashed-dotted lines, signals the exponential convergence towards the ground state due to finite correlation gaps. Similar behavior is reported in Mott insulators [114, 34] and magnetic vortex liquids [115]. In Fig. 4, we present the various regimes for Hermitian (Fig. 4(a,c,e)) and non-Hermitian (Fig. 4(b,d,f)) systems using a model trained on Hermitian models with only two-point (Fig. 4(c,d)) or the combination of two-point and four-point correlation functions (Fig. 4(e,f)). Overall, all the thermodynamic phase boundaries are qualitatively signaled by the correlation entropy. In the non-Hermitian cases, we can identify some regions, mainly inside the black diamond-like phase boundaries, featuring differences from the true results. These differences are reduced when including four-point correlation functions in the training of the Hermitian model; see also the SM [112]. It is worth noting that the regions with the most discrepancies have a topological superconducting nature, suggesting that phases with topological and many-body effects require higher-point correlation functions to be inferred from short-range information. Our machine learning models trained only on Hermitian Hamiltonians can characterize the regimes of non-Hermitian interacting systems. It is interesting to note that, while we observe a general agreement, small discrepancies between the machine learning predicted regimes and the computationally exact ones can be observed. This is because non-Hermitian many-body systems can show richer ground states than their Hermitian analog due to the extent of their spectrum in the complex plane. As a result, many-body wavefunctions in non-Hermitian models are genuinely different from their Hermitian counterparts, as these wavefunctions can span different regions of the Hilbert space beyond the original Hermitian training. Interestingly, this discrepancy opens the possibility of using our machine learning algorithms to directly identify non-Hermitian phases that do not have a Hermitian counterpart. Conclusion.To summarize, we have demonstrated a transfer machine learning methodology whereby training on Hermitian many-body models allows us to predict different regimes of interacting non-Hermitian quantum many-body models. This opens the possibility of employing Hermitian many-body physics to understand the phase boundaries of non-Hermitian systems, leveraging solutions and methodologies currently only applicable to Hermitian quantum many-body models. Our findings reveal that the prediction of quasi-degeneracy or correlation entropy allows the identification of different regions in interacting systems. Interestingly, these two methodologies are affected in a qualitatively different manner by finite-size effects, with the correlation entropy showing the fastest convergence to the thermodynamic limit. Our machine-learning methodology relies on short-range correlation functions, which opens the possibility of deploying our technique in experimental setups. Our results establish transfer learning as a promising strategy to map regimes of non-Hermitian quantum many-body models and to identify regimes featuring phenomena not observable in Hermitian models. Acknowledgements: S.S. thanks F. Marquardt for the helpful discussions. J.L.L. acknowledges the computational resources provided by the Aalto Science-IT project, the financial support from the Academy of Finland Projects No.
331342, No. 336243 and No. 349696, and the Jane and Aatos Erkko Foundation.
2309.10561
A multimodal deep learning architecture for smoking detection with a small data approach
Introduction: Covert tobacco advertisements often raise regulatory measures. This paper presents that artificial intelligence, particularly deep learning, has great potential for detecting hidden advertising and allows unbiased, reproducible, and fair quantification of tobacco-related media content. Methods: We propose an integrated text and image processing model based on deep learning, generative methods, and human reinforcement, which can detect smoking cases in both textual and visual formats, even with little available training data. Results: Our model can achieve 74\% accuracy for images and 98\% for text. Furthermore, our system integrates the possibility of expert intervention in the form of human reinforcement. Conclusions: Using the pre-trained multimodal, image, and text processing models available through deep learning makes it possible to detect smoking in different media even with few training data.
Robert Lakatos, Peter Pollner, Andras Hajdu, Tamas Joo
2023-09-19T12:15:06Z
http://arxiv.org/abs/2309.10561v1
# A multimodal deep learning architecture for smoking detection with a small data approach ###### Abstract **Introduction:** Covert tobacco advertisements often raise regulatory measures. This paper presents that artificial intelligence, particularly deep learning, has great potential for detecting hidden advertising and allows unbiased, reproducible, and fair quantification of tobacco-related media content. **Methods:** We propose an integrated text and image processing model based on deep learning, generative methods, and human reinforcement, which can detect smoking cases in both textual and visual formats, even with little available training data. **Results:** Our model can achieve 74% accuracy for images and 98% for text. Furthermore, our system integrates the possibility of expert intervention in the form of human reinforcement. **Conclusions:** Using the pre-trained multimodal, image, and text processing models available through deep learning makes it possible to detect smoking in different media even with few training data. keywords: AI supported preventive healthcare, pre-training with generative AI, multimodal deep learning, automated assessment of covert advertisement, few shot learning, smoking detection ## 1 Introduction The WHO currently estimates that smoking causes around 8 million deaths a year. It is the leading cause of death from a wide range of diseases, for example heart attacks, obstructive pulmonary disease, respiratory diseases, and cancers. 15% of people aged 15 years and over smoke in the OECD countries and 17% in the European Union [1]. Moreover, of the 8 million annual deaths, 15% result from passive smoking [2]. The studies [3; 4] below highlight the influence of smoking portrayal in movies and the effectiveness of health communication models. However, quantifying media influence is complex. For internet media like social sites, precise ad statistics are unavailable. Furthermore, calculating incited and unmarked ads poses a significant difficulty as well. Therefore, accurate knowledge of the smoking-related content appearing in individual services can be an effective tool in reducing the popularity of smoking. Methods for identifying content include continuous monitoring of advertising intensity [5], structured data generated by questionnaires [6], and AI-based solutions that can effectively support these goals. The authors of the article "Machine learning applications in tobacco research" [7] point out in their review that artificial intelligence is a powerful tool that can advance tobacco control research and policy-making. Therefore, researchers are encouraged to explore further possibilities. Nonetheless, these methods are highly data-intensive. In the case of image processing, an excellent example of this is the popular ResNet [8] image processing network, which was trained on the ImageNet dataset [9] containing 14,197,122 images. Regarding text processing, we can mention the popular and pioneering BERT network [10], which was trained on the 4.5 GB Toronto BookCorpus [11]. Generative text processing models such as GPT [12] are even larger and were trained with significantly more data than BERT. For instance, the training set of GPT-3 was the CommonCrawl [13] dataset, which has a size of 570 GB. Effective tools for identifying the content of natural language texts are topic modeling [14], the embedding of words [15; 16; 17], tokens, sentences [18], or characters [19], and clustering [20].
For a more precise identification of the content elements of the texts, we can use named-entity recognition [21] techniques. In image processing, we can highlight classification and object detection to detect smoking. The most popular image processing models are VGG [22], ResNet [8], Xception [23], EfficientNet [24], Inception [25], and YOLO [26]. Moreover, there are architectures like CAMFFNet [27], which are specifically recommended for smoking detection. The development of multimodal models, which can use texts and images to solve tasks at the same time, is also gaining increasing focus [28; 29]. For movies, scene recognition is particularly challenging compared to images [30]. Scene recognition is also linked to the detection of sensitive events such as fires, smoke, or other disasters [31], but there are attempts to investigate point-of-sale and tobacco marketing practices [32] as well. We concluded that there is currently no publicly available specific smoking-related dataset that would be sufficient to train a complex model from scratch. Hence, we propose a multimodal architecture that uses pre-trained image and language models to detect smoking-related content in text and images. By combining image processing networks with multimodal architectures and language models, we leverage textual and image data simultaneously. This offers a data-efficient and robust solution that can be further improved with expert input. This paper demonstrates the remarkable potential of artificial intelligence, especially deep learning, for the detection of covert advertising, alongside its capacity to provide unbiased, replicable, and equitable quantification of tobacco-related media content. ## 2 Methods ### Model Architecture As illustrated in Figure 1 by a schematic flow diagram, our solution relies on pre-trained language and image processing models and can handle both textual and image data. Figure 1: Schematic flow diagram of the architecture. The first step of our pipeline is to define the incoming data format because we need to direct the data to the appropriate model for its format. The video recordings are analyzed with multimodal and image processing models, while the texts are analyzed with a large language model. In the case of video recordings, we applied the CLIP-ViT-B-32 multilingual [33; 34] model. The model has been developed for over 50 languages with a special training technique [33]. The model supports Hungarian, which was our target language. We use the CLIP-ViT-B-32 model as a filter. After filtering, to achieve more accurate results, we recommend using the pre-trained EfficientNet B5 model, which we fine-tuned with smoking images for the classification task. To process texts, we use named-entity recognition to identify smoking-related terms. For this purpose, we have integrated into our architecture an XLM-RoBERTa model [35] that is pre-trained, multilingual, and also supports the Hungarian language, which is important to us. ### Format check The first step in processing is deciding whether the model has to process video recordings or text data. Since there are many formats for videos and texts, we chose the simple solution of only supporting mp4 and txt file formats. mp4 is a popular video format, and practically all other video recording formats can be converted to mp4. We consider txt files to be UTF-8-encoded raw text files that are ideally free of metadata.
It is important to emphasize that here we ignore the text cleaning processes required to prepare raw text files. The reason is that we did not deal with faulty txt files or files requiring further cleaning during the trial. ### Processing of videos and images The next step in processing video footage is to break it down into frames by sampling every second. The ViT image encoder of the CLIP-ViT-B-32 model was trained by its creators for various image sizes. For this, they used the ImageNet [9] dataset, in which the images have an average size of 469\(\times\)387 pixels. The developers of CLIP-ViT-B-32 do not recommend an exact resolution for the image encoder. The model specification only specifies a minimum resolution of 224\(\times\)224. In the case of EfficientNet B5, the developers have optimized for an image size of 224\(\times\)224. For these reasons, we have taken this image size as a reference and transformed the images sampled from the video recordings to this image size. ### Multimodal filtering The images sampled from the video recordings were filtered using the CLIP-ViT-B-32 multilingual v1 model. The pre-trained CLIP-ViT-B-32 multilingual v1 model consists of two main components: a ViT [36] image processing model and a DistilBERT-based [37] multilingual language model. With CLIP-ViT-B-32, we convert the images and texts into 512-dimensional embedding vectors [16]. The embedding vectors of texts and images can be compared based on their semantic content by measuring cosine similarities between the vectors. Cosine similarity takes values in the interval [-1,1], and two vectors are more similar the closer their cosine similarity is to 1. Since we aimed to find smoking-related images, we defined a smoking-related term. We converted it to a vector and measured it against the embedding vectors generated from the video images. The term we chose was the word "smoking". We could use more complex expressions, but these would complicate the interpretation of the measurement results. Comparing each image embedding with the vector created from our "smoking" expression always yields a scalar cosine similarity value. However, the decision boundary between the similarity values produced by the CLIP-ViT-B-32 model is not always clear. Namely, even for images with meanings other than "smoking", we get values that are not too distant. We had to understand the distribution of the smoking images to eliminate this kind of blurring of the decision boundary. To this end, we examined the characteristics of the distribution of the images. It is clear from Figure 2 that because the images with a semantic meaning closer to smoking appear randomly in a video recording, it is difficult to grasp the series of images that can be useful for us. Figure 2 is effectively a function whose vertical axis shows the cosine similarity value of each image, while the horizontal axis shows the position of the images in the video. To solve this problem, we introduced the following procedure. If we put the cosine similarity values in ascending order, we get a function that describes the ordered evolution of the cosine similarity values. The ordered function generated from Figure 2 can be seen in Figure 3.
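The filtering step can be sketched as follows; this is a minimal illustration assuming the publicly available sentence-transformers checkpoints (the frame file names are hypothetical, and the mean-based cutting line is the one described in the next paragraph, with the correction constant left at its default 0), not the authors' exact code.

```python
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

# CLIP image encoder and the multilingual text encoder aligned with it.
img_model = SentenceTransformer("clip-ViT-B-32")
txt_model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

# Frames sampled once per second from the video (hypothetical paths).
frames = [Image.open(f"frame_{i:04d}.jpg") for i in range(64)]
img_emb = img_model.encode(frames)        # 512-dimensional image embeddings
query = txt_model.encode(["smoking"])[0]  # 512-dimensional text embedding

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = np.array([cos(e, query) for e in img_emb])
cutoff = sims.mean() + 0.0  # mean cutting line; correction constant = 0
kept = [f for f, s in zip(frames, sims) if s >= cutoff]
```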
As shown in Figures 2 and 3, we found that if we take the similarity values of the sampled images to the word "smoking", their average yields a cutting line that we can use as a filter. Figure 3: The images ordered by their cosine similarity values. Figure 2: The cosine similarity of the images obtained from the video recording in chronological order. Furthermore, considering the specifics of the video recordings, the average can be corrected with a constant value. In this sense, the constant can be treated as a hyperparameter of the model. We chose 0 as the default value for the correction constant to keep the measurements transparent, because the best constant value may differ depending on the recording type and could distort the measurement results. ### Fine-tuned image classification After filtering the image set with a multimodal model, we applied an image processing model to classify the remaining images further to improve accuracy. Among the publicly available datasets on smoking, we have used the "smoker and non-smoker" [38] for augmented [39] fine-tuning. We selected the following models for the task: EfficientNet, Inception, ResNet, VGG, and Xception. The EfficientNet B5 version was the best, with an accuracy of 93.75%. Table S1 of the supplemental material contains our detailed measurement results concerning all models. ### Processing of text In the case of detecting smoking terms in texts, we approached the problem as an NER task and focused on the Hungarian language. Since we could not find a dataset of annotated smoking phrases in Hungarian, we generated the annotated data using the generational capabilities of ChatGPT, the smoking-related words of the Hungarian synonyms and antonyms dictionary [40], and prompt engineering. Accordingly, we selected words related to smoking from the synonyms and antonyms dictionary and asked ChatGPT to suggest further smoking-related terms besides words from the Hungarian dictionary. Finally, we combined the synonyms and the expressions generated by ChatGPT into a single dictionary. We created blocks of a maximum of 5 elements from the words in our dictionary. Each block contained a random combination of a maximum of 5 words. The blocks are disjoint, so they do not contain the same words. This mixing step was done 10 times. This means that, in one iteration, we could form 8 disjoint random blocks of 5 elements from our 43-word dictionary. By repeating this 10 times, we produced 80 blocks, as sketched in the code below. However, due to the 10 repetitions, the 80 blocks were no longer mutually disjoint. In other words, stringing all the blocks together gives a list in which every synonym for smoking appears at most 10 times. We made a prompt template to which, by attaching each block, we instructed ChatGPT to generate texts containing the specified expressions. Since ChatGPT uses the Hungarian language well, the generated texts contained our selected words by the rules of the Hungarian language, with the correct conjugation. An example of our prompts is illustrated in Table 1. We did not specify how long texts should be generated by ChatGPT or that every word of a 5-element block should be included in the generated text. When we experimented with ChatGPT generating fixed-length texts, it failed. Therefore, we have removed the requirement for this.
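The block-building and prompting procedure can be sketched as follows; the dictionary entries are placeholders (the real dictionary is Hungarian), and the prompt mirrors the template of Table 1.

```python
import random

dictionary = [f"term_{i}" for i in range(43)]  # stands in for the 43 Hungarian words

def make_blocks(words, block_size=5, repetitions=10, seed=0):
    """Shuffle the dictionary `repetitions` times and cut each shuffle
    into disjoint blocks of `block_size` words; incomplete trailing
    blocks are dropped, matching the 8 blocks per iteration reported."""
    rng = random.Random(seed)
    blocks = []
    for _ in range(repetitions):
        shuffled = words[:]
        rng.shuffle(shuffled)
        blocks += [shuffled[i:i + block_size]
                   for i in range(0, len(shuffled) - block_size + 1, block_size)]
    return blocks

blocks = make_blocks(dictionary)  # 10 repetitions x 8 blocks = 80 blocks
prompt = ("Generate a short text about smoking. The text strictly contains "
          "the following words in different sentences: " + ", ".join(blocks[0]))
```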
Using this method, we created a smoking-related corpus consisting of 80 paragraphs, 49000 characters, and 7160 words. An English example of a generated text is presented in Table 2. To find the best model according to the possibilities of our computing environment and the support of the Hungarian language, we tested the following models: XLM RoBERTa base and large, DistilBERT base cased, huBERT base [41], BERT base multilingual [42], Sentence-BERT [43]. The best model was the XLM RoBERTa large one, which achieved 98% accuracy and 96% F1-score on the validation dataset and an F1-score of 91% with an accuracy of 98% on the test dataset. \begin{table} \begin{tabular}{|r|} \hline \multicolumn{2}{|c|}{Generate a short text about smoking.} \\ The text strictly contains the following words in the different sentences: \\ smoking, tobacco, cigar \\ \hline \end{tabular} \end{table} Table 1: A 3-element example prompt for ChatGPT. \begin{table} \begin{tabular}{|r|} \hline Smoking is a widespread and addictive habit that involves inhaling \\ and exhaling the smoke produced by burning tobacco. Whether it’s \\ a hand-rolled cigar or a manufactured cigarette, the act of \\ smoking revolves around the consumption of tobacco. Despite the well-known \\ health risks, many individuals continue to engage in smoking due \\ to its addictive nature. The allure of a cigar or \\ a cigarette can be strong, making it challenging for people \\ to quit smoking even when they are aware of its \\ detrimental effects. Education and support are crucial in helping individuals \\ break free from the cycle of smoking and its associated \\ harms. \\ \hline \end{tabular} \end{table} Table 2: An example paragraph generated from the prompt of Table 1. ### Human reinforcement In the architecture we have outlined, the last step in dealing with the lack of data is to ensure the system's continuous development capability. For this, we have integrated human confirmation into our pipeline. The essence is that our system's hyperparameters should be adjustable and optimizable during operation and that the data generated during detection can be fed back for further fine-tuning. The cutting line used in multimodal filtering is a hyperparameter of our model. As a result, a more accurate result can be achieved by using human confirmation during the operation. The tagged images and annotated texts from the processed video recordings and texts are transferred to permanent storage in the last step of the process. This dynamically growing dataset can be further validated with additional human support, and possible errors can be filtered, so false positives and false negatives can be fed back into the training datasets. ## 3 Results We collected video materials to test the image processing part of our architecture. The source of the video materials was the video-sharing site YouTube. Taking into account the legal rules regarding the usability of YouTube videos, we collected five short advertising films from the Marlboro and Philip Morris companies. We ensured not to download videos longer than 2 minutes because longer videos, such as movies, would have required a special approach and additional pre-processing. Furthermore, we downloaded the videos at 240p resolution and divided them into frames by sampling every second. Each frame was transformed to a resolution of 224\(\times\)224 pixels. We manually annotated all videos. The downloaded videos averaged 64 seconds and contained an average of 13 seconds of smoking.
With the multimodal filtering technique, we discarded the images that did not contain smoking. Multimodal filtering found 25 seconds of smoking on average in the recordings. The accuracy of the identified images was 62%. Multimodal filtering could filter out more than half of each video, which averaged 64 seconds. We also measured the performance of the fine-tuned EfficientNet B5 model by itself. The model detected an average of 28 seconds of smoking with 60% accuracy. We found that the predictions of the two constructions were sufficiently diverse to combine them using a boosting ensemble [44]. By connecting the two models, the average duration of detected smoking became 12 seconds, with an average error of 4 seconds and 74% accuracy. The ensemble solution was the best approach since the original videos contained an average of 13 seconds of smoking. We deleted the videos after the measurements and did not use them anywhere for any other purpose. We created training and validation datasets from Hungarian synonyms for smoking using ChatGPT. We trained our chosen large language models until their accuracy on the validation dataset did not increase for at least 10 epochs. The XLM-RoBERTa model achieved the best performance on the validation dataset with an F1-score of 96% and 98% accuracy. For the final measurement, we created test data from an online text related to smoking by manual annotation [45]. The text of the entire test data is included in Table S20 of the supplemental material. The fine-tuned XLM-RoBERTa model achieved 98% accuracy and an F1-score of 91% on the test dataset. ## 4 Conclusions Multimodal and image classification models are powerful for classification tasks. At the same time, however, they are complex and require substantial training data, which can reduce their explainability and usability. Our solution showed that pre-trained multimodal and image classification models exist that allow smoking detection even with limited data and in the case of low-resource languages, if we exploit the potential of human reinforcement, generative, and ensemble methods. In addition, we see further development opportunities if our approach is supplemented with an object detector, which can determine the time of occurrence of objects and their position. Moreover, with the expected optimization of the automatic generation of images in the future and the growth of the available computing power, our method used for texts could also work for images. ## Funding The project no. KDP-2021 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development, and Innovation Fund, financed under the C1774095 funding scheme. Also, this work was partly funded by the project GINOP-2.3.2-15-2016-00005 supported by the European Union, co-financed by the European Social Fund, and by the project TKP2021-NKTA-34, implemented with the support provided by the National Research, Development, and Innovation Fund of Hungary under the TKP2021-NKTA funding scheme. In addition, the study received further funding from the National Research, Development and Innovation Office of Hungary grant (RRF-2.3.1-21-2022-00006, Data-Driven Health Division of National Laboratory for Health Security).
2302.14461
Role-playing software architecture styles
Software Architecture, from definition to maintenance and evolution, is a complex aspect of software development and, consequently, a challenging subject when it comes to teaching it and learning it. Many research efforts have been devoted to designing teaching approaches, strategies and tools. Most of them, however, focus on the knowledge itself and the ways to convey it to students, rather than on the different learning styles of students themselves. Teaching methods which predominantly rely on verbal and written communication are very well aligned with some learning styles. However, students with learning styles that benefit more from physical activity or first-hand experience need to defer to cognitive processes that are less natural to them. In this work, we propose an innovative use of role-playing as a teaching strategy for architecture models of reference (i.e. layered, pipe and filter, client-server, etc.). This role-playing of different software architectures, in which students play the part of specific components in the system, intends to complement other classical teaching materials, such as in-person or recorded lectures, lab assignments, or development projects. Addressing all learning styles within a classroom is key to ensure that we favour and foster the students' different learning processes, and give everyone an even playing field in which to best develop their capabilities as Software Architects.
Laura M. Castro
2023-02-28T10:08:57Z
http://arxiv.org/abs/2302.14461v1
# Role-playing software architecture styles ###### Abstract Software Architecture, from definition to maintenance and evolution, is a complex aspect of software development and, consequently, a challenging subject when it comes to teaching it and learning it. Many research efforts have been devoted to designing teaching approaches, strategies and tools. Most of them, however, focus on the knowledge itself and the ways to convey it to students, rather than on the different learning styles of students themselves. Teaching methods which predominantly rely on verbal and written communication are very well aligned with some learning styles. However, students with learning styles that benefit more from physical activity or first-hand experience need to defer to cognitive processes that are less natural to them. In this work, we propose an innovative use of role-playing as a teaching strategy for architecture models of reference (i.e. layered, pipe & filter, client-server, etc.). This role-playing of different software architectures, in which students play the part of specific components in the system, intends to complement other classical teaching materials, such as in-person or recorded lectures, lab assignments, or development projects. Addressing all learning styles within a classroom is key to ensure that we favour and foster the students' different learning processes, and give everyone an even playing field in which to best develop their capabilities as Software Architects. learning styles, role-playing, software architecture models + Footnote †: Supported by the Centro de Investigación de Galicia “CITIC”, funded by Xunta de Galicia and the European Regional Development Fund (grant ED431G 2019/01). ## I Introduction Teaching Software Architecture (SA) in a meaningful and effective way is difficult. Many reasons have been posed to explain this reality, although there seems to be some consensus around the idea that SA exposes students to concepts that have significantly greater scale than handled before, and which are not easy to reproduce within the classroom-and-academic-term context [13]. These concepts require the development of correspondingly increased abilities of abstraction, being also often the first time issues like performance or security are addressed [4]. Last but not least, there are non-technical skills at play, such as communication [7] and decision-making [2], that need to be fostered, too [3]. In this paper, we propose a SA teaching approach driven by the co-existence of different learning profiles, as defined by David Kolb [6, 8], in the classroom. Taking into account that traditional activities such as lectures and development projects are more aligned with some of those learning profiles, we contribute a set of role-playing activities designed to improve students' understanding of different software architectures' characteristics, strengths and weaknesses, targeting those learning profiles which are usually forgotten. ## II Role-playing as a teaching tool The use of participatory teaching methods features a wide range of options which are applicable in the context of computer science [5]: brainstorming, directed dialogues, small discussion groups, debates, panel discussions, or role-playing, amongst others. From this range of choices, for this work we have decided to use role-playing as a teaching tool.
Role-playing, sometimes referred to as _role-play simulation_ in educational settings, is an experiential learning method in which learners are involved in a proposed scenario by representing an interacting part in it. The scenario is outlined by the teacher or professional, and while it must allow improvisation, it represents a safe and supportive environment where students will develop their own meaningful first-person experience. Role-playing is widely acknowledged as a powerful technique across multiple avenues of training and education. There is previous experience in software engineering featuring role-playing, most often to simulate the conditions of an industrial environment [10], of software product lines [16], or for requirements engineering [15]. In these cases, students are asked to play the part of stakeholders, from clients to the different technical roles within the development team. However, the way we have chosen to use role-playing to teach SA is not by assigning human roles to students, but by assigning them the roles of software components, as we will discuss in Section III. While "traditional" (i.e. human-focused) role-playing has been used before in the context of SA teaching [9] (as a trade-off analysis method), we have only found one previous similar approach to ours (i.e. software-focused): specifically, in the context of a programming course where students play the part of objects in order to grasp the concepts of object-orientation [1]. ## III Role-playing architecture models SA at University of A Coruna is a 6-ECTS (_European Credit Transfer and Accumulation System_ [12]) course for Software Engineering students. This translates in practice into 150 hours of work, typically spread amongst the 15 weeks of a term. Of the 10 weekly hours, 3 correspond to class and lab hours, and 7 to student autonomous work. In this course, students are exposed to architectural concepts and practices: from architectural patterns to non-functional requirements' analysis, the impact of the latter in the former, architectural representation and modelling, etc. The goal is to offer students a learning environment in which they can acquire the skills that allow them to carry out architectural tasks in the context of software development: component identification and characterisation, assignment of responsibilities, motivated election of communication and integration alternatives, and architectural evolution and maintenance. The teaching methodologies that were being used in the SA course at UDC were clearly aligned with some of Kolb's learning styles (cf. Fig. 1). Lectures, and the materials provided (bibliographic references, articles, videos, assignments), work well for people whose more prominent learning traits are watching and thinking, i.e. those students that are reflective and take the most out of observation and conceptualisation. Complementarily, practical work during lab sessions gave additional coverage to those most prominently guided by active experimentation. However, those with _accommodating_ and _diverging_ learning styles, in which experimentation or observation would most benefit from combination with concrete experience of the subject at hand, are entirely left to self-manage their needs in their autonomous work.
Our proposal to address this situation was to role-play different architectures of reference, meaning that **scenarios were designed to provide students with a concrete experience of the structure, advantages and disadvantages of different architectural models**. ### _Role-playing a layered architecture_ The instructions given for the first role-playing scenario are shown in Table I. Students were made to form systems with different numbers of layers, but the same functionality. The expected outcomes are first-hand experiences of: * Layer reuse from one system to the next, increased when responsibilities are specific rather than broad. * Limited impact of changes, affecting single layers (or adjacent, at most). * Request processing in two different moments at each layer (on the way _in_, on the way _out_, on both), and consequences (i.e., security). * Structural limitations to performance, related to the number of layers. ### _Role-playing a pipe & filter architecture_ The instructions given for role-playing a pipeline are shown in Table II. Groups of students were asked to define their own filters, and design their own pipeline. Expected outcomes were first-hand experience of: * Filter reuse from one pipeline to the next, increased when responsibilities are specific rather than broad. * Limited impact of changes, affecting single filters (or adjacent, at most). * Advantage to performance given by structural parallelism, increased by the possibility of duplicating filter stages when one is a bottleneck. * Differentiated input and output points in the system. * One-way path for requests and consequences (i.e. error handling). ### _Role-playing a client-server architecture_ Table III shows the instructions given for the client-server scenario, the first of the distributed architectural models. Students showcased the creation and evolution of several systems. The first-hand experience outcomes were: * Independence of services, in terms of development, availability, etc. * Robustness: to the ability to have a working system as soon as the first service and the directory are operational, service independence adds the ability to keep operating even if one or several services are down. * Directory component as single point of failure, which can be mitigated by having different directories (opens the door for offering tailored service catalogues to distinct client profiles). ### _Role-playing a leader-follower architecture_ The instructions given for role-playing a leader-follower architecture are shown in Table IV. Students played the dynamic evolution during operation of several systems. Expected outcomes were first-hand experience of: * Independence of workers in terms of availability. * Robustness: ability to stay operational even if some workers go down. * Scalability: ability to elastically react to demand by starting more workers. * Resource consumption optimization: ability to elastically react to demand by stopping idle workers. * Leader as single point of failure. Fig. 1: SA course at UDC: link between methodologies/learning styles ### _Role-playing a peer-to-peer architecture_ The instructions given for the last role-playing scenario are shown in Table V. Part of the students were made to form one network of peers, while the rest were to perform as clients.
Expected experiences-as-outcomes were: * Need to embed both management logic and business logic into each peer (as opposed to the leader-follower architecture, in which these responsibilities are assigned to distinct components). * Need to manage the life of requests, to ensure clients get some reply even when the network is under heavy load. * Difficulties of implementing security measures to differentiate malicious from faulty or unavailable peers. * Absence of coordination (impact on scalability, self-load balance). * No single point of failure: maximum availability. ## IV Discussion Literature shows that multi-role projects are an effective teaching strategy [14]. It also reveals that role-playing affects three areas that support student learning and engagement, namely: identified personalized learning, deepened content understandings, and enhanced collaboration skills [11]. However, empirical evaluation also reports that while many students feel that the infusion of role-playing aspects into the courses supported their learning and engagement, some other students do not [11]. This is consistent with the fact that certain teaching activities resonate better with certain learning profiles, and may also explain our quantitative evaluation. In any case, we consider this does not pose a threat to the validity of our efforts, since our role-playing activities aim precisely to provide an advantage to those learning profiles that benefit the least from the rest of the teaching methodologies already present. There are a couple of factors that may influence how impactful this is when evaluating an initiative such as ours in the context of a whole class or course: * On the one hand, given that experiential learning experiences that address all learning styles are not commonplace, it makes sense to assume that students, especially by the time they reach the university level, have already adapted to the lack of methods and activities that best resonate with them, in the case of the less-often addressed learning profiles. * On the other hand, given that we do not know the distribution of different learning profiles amongst students in general, nor amongst our CS students at UDC, it may also be the case that their numbers are not significant when considered as part of the whole class. ## V Conclusions In this paper, we have faced the challenge of extending the teaching methodologies that were being applied in an undergraduate SA course. The goal was to take into account the different learning profiles that students have. In doing so, we aim to provide a fairer learning environment, where every individual has opportunities to connect with the learning experiences in the most meaningful and effective way for them. Given that the teaching methods that were in place were found to be misaligned when it came to students with _accommodating_ or _diverging_ learning styles (following Kolb's nomenclature), the way we have addressed this challenge has been by incorporating role-playing in an innovative way. We have designed a set of scenarios where students play the part of components in different architectural models. Such a role-playing game provides concrete experiences of the structure, advantages, and disadvantages of different architectural models.
2309.07009
OYXOY: A Modern NLP Test Suite for Modern Greek
This paper serves as a foundational step towards the development of a linguistically motivated and technically relevant evaluation suite for Greek NLP. We initiate this endeavor by introducing four expert-verified evaluation tasks, specifically targeted at natural language inference, word sense disambiguation (through example comparison or sense selection) and metaphor detection. More than language-adapted replicas of existing tasks, we contribute two innovations which will resonate with the broader resource and evaluation community. Firstly, our inference dataset is the first of its kind, marking not just \textit{one}, but rather \textit{all} possible inference labels, accounting for possible shifts due to e.g. ambiguity or polysemy. Secondly, we demonstrate a cost-efficient method to obtain datasets for under-resourced languages. Using ChatGPT as a language-neutral parser, we transform the Dictionary of Standard Modern Greek into a structured format, from which we derive the other three tasks through simple projections. Alongside each task, we conduct experiments using currently available state of the art machinery. Our experimental baselines affirm the challenging nature of our tasks and highlight the need for expedited progress in order for the Greek NLP ecosystem to keep pace with contemporary mainstream research.
Konstantinos Kogkalidis, Stergios Chatzikyriakidis, Eirini Chrysovalantou Giannikouri, Vassiliki Katsouli, Christina Klironomou, Christina Koula, Dimitris Papadakis, Thelka Pasparaki, Erofili Psaltaki, Efthymia Sakellariou, Hara Soupiona
2023-09-13T15:00:56Z
http://arxiv.org/abs/2309.07009v2
# OYXOY: A Modern NLP Test Suite for Modern Greek

###### Abstract

This paper serves as a foundational step towards the development of a linguistically motivated and technically relevant evaluation suite for Greek NLP. We initiate this endeavor by introducing four expert-verified evaluation tasks, specifically targeted at natural language inference, word sense disambiguation (through example comparison or sense selection) and metaphor detection. More than language-adapted replicas of existing tasks, we contribute two innovations which will resonate with the broader resource and evaluation community. Firstly, our inference dataset is the first of its kind, marking not just _one_, but rather _all_ possible inference labels, accounting for possible shifts due to e.g. ambiguity or polysemy. Secondly, we demonstrate a cost-efficient method to obtain datasets for under-resourced languages. Using ChatGPT as a language-neutral parser, we transform the Dictionary of Standard Modern Greek into a structured format, from which we derive the other three tasks through simple projections. Alongside each task, we conduct experiments using currently available state of the art machinery. Our experimental baselines affirm the challenging nature of our tasks and highlight the need for expedited progress in order for the Greek NLP ecosystem to keep pace with contemporary mainstream research.

## 1 Introduction

It is a well-known fact that the natural language processing world is running at multiple speeds. A select few languages claim the lion's share in the literature, boasting a plethora of models and a constant stream of results, while others are struggling to keep up with last year's state of the art. Meanwhile, multilingual models, despite being heralded as the end-all solution to the issue, often fall short of expectations (Wu and Dredze, 2020; Ogueji et al., 2021; Pfeiffer et al., 2021; España-Bonet and Barrón-Cedeño, 2022; Havaldar et al., 2023; Papadimitriou et al., 2023, _inter alia_). The assumption that one-size-fits-all multilingual models can effectively bridge the language gap is hard to either refute or validate, given the disproportionate distribution of training and evaluation resources among languages (Joshi et al., 2020; Yu et al., 2022; Kreutzer et al., 2022). Further muddying the waters is the dubious quality of the increasingly trending multi- and mono-lingual resources generated through minimally supervised machine translations from English (Artetxe et al., 2020; Wang and Hershcovich, 2023). While such endeavors can certainly make for good first steps, they are neither sufficient nor without risks. The wide adoption of the practice threatens resource plurality, as more and more "new" datasets are in fact old in all but language. Furthermore, it condones the accumulation of academic authority to a select few, namely the authors of the originals, promoting the unhindered perpetuation of their biases and oversights as universal across languages. Worse yet, it outsources linguistic expertise to machine labor, as we are now entrusting our automated processes with capturing the nuances of under-represented languages; exactly _those_ languages that require opinionated and targeted expert attention the most. And while a discussion on the structural causes behind the problem and the ways to incentivize change is long overdue, here we set our aims towards something more actionable.
Noting the striking absence of evaluation benchmarks for modern Greek, and the language's limited presence in multi-lingual resources, we set out to develop a linguistically motivated and technically relevant suite of evaluation tasks. This paper aims to kickstart this endeavor, while serving as an open invitation to interested parties. Concretely, we set the pace with four evaluation tasks:

1. a handcrafted dataset for inference, consisting of 1 762 sentence pairs, each pair adorned with a linguistic characterization in the form of tags _a la_ SuperGLUE and labeled with a subset (rather than an element) of {Neutral, Entailment, Contradiction}, aiming to account for all possible inference relations between premise and hypothesis
2. a structured translation of the Dictionary of Standard Modern Greek, from which we project into three tasks:
   1. a word sense disambiguation task _a la_ Words-in-Context, consisting of 117 662 phrase pairs that correspond to two usage examples for a single word, where the system is tasked with telling whether the two occurrences have the same meaning or not
   2. a more compact & linguistically informed version of the same task, consisting of 14 416 unique phrases containing polysemous words, each word associated with a number of senses and their periphrastic definitions, where the system is tasked with telling which word sense is associated with each usage example
   3. a metaphor detection task, associating each of the previous phrases with a boolean label indicating whether the word in focus is used metaphorically or not

To facilitate research with these tasks, we supply accessible entry points to the raw data in the form of Python interfaces. For each task, we conduct experiments using the currently available state of the art machinery and establish baseline scores for comparisons.1

Footnote 1: Data, interfaces and the code necessary to replicate our experiments are available at github.com/StergiosCha/OYXOY.

## 2 OYXOY

Inspired by GLUE and SuperGLUE (Wang et al., 2018, 2019), our goal is to develop a language-adapted suite that selects and extends a few key aspects of the original. Our project, which we lightly dub OYXOY (pronounced /ˈuxu/), is not primarily focused on offering general diagnostics, but rather on highlighting the semantic, syntactic, and morphological attributes of the Greek language, and quantifying their impact on NLP systems. To that end, we present four high-level tasks that require varying degrees of lexical & sentential meaning comprehension.

### Natural Language Inference

Our first task is a staple of computational semantics that has endured the test of time: natural language inference (NLI). In their most common form, NLI tasks present the system with an ordered pair of sentences (called a premise and a hypothesis), and request one of three inference relations that must hold from premise to hypothesis: Entailment, Contradiction and Neutral/Unknown. Despite its apparent simplicity and the heaps of progress in modern NLP, the conquest of NLI has proven challenging to this day. Neural systems show a tendency to abuse spurious data patterns over actually performing the (often complicated) reasoning required to solve the problem, resulting in limited generalization capacity across datasets. For our dataset, we follow Wang et al. (2018, 2019) in establishing a hierarchy of rudimentary but descriptive linguistic tags that encompass an array of phenomena that can influence the direction of inference.
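As a minimal sketch (ours, not the released interface), a sample of this kind can be represented as a pair of sentences with a tag set and a set-valued label, the latter arising as the image of the product space of premise and hypothesis readings, a construction detailed in the next paragraphs:

```python
from dataclasses import dataclass
from itertools import product

# The three inference relations used throughout the dataset.
LABELS = ("Neutral", "Entailment", "Contradiction")

@dataclass
class NLISample:
    premise: str
    hypothesis: str
    tags: set      # linguistic tags, e.g. {"Ambiguity", "Anaphora/Coreference"}
    labels: set    # non-empty subset of LABELS

def label_set(readings_p, readings_h, relation):
    """Build the label set from all pairwise interactions of readings:
    `relation` maps one (premise reading, hypothesis reading) pair to a
    single inference label; the sample is annotated with the image of
    the product space of readings under that map."""
    return {relation(p, h) for p, h in product(readings_p, readings_h)}
```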
For a glimpse at the full hierarchy of tags used, refer to Table 2. These tags are intended to find use outside the model's input/output pipeline, providing a guide for categorizing results and drawing finer-grained quantitative evaluations. Where our dataset diverges from established practices is in providing an explicit account of inference-level ambiguities not only through the tagging but also through the labeling scheme. Rather than annotating each example pair with any _one_ inference label, we instead specify _all_ possible labels that may hold. To do so, we implicitly consider the product space of all possible readings of both premise and hypothesis, and construct the label set arising out of all pairwise interactions; Figure 1 shows two concrete examples under different settings.

To create the collection of samples that make up the dataset, we follow a three-stage process. At the first stage, each author independently wrote a number of sentence pairs together with a suggested set of tags and labels. Afterwards, each author was given a collection of sentence pairs from other authors with the tags and labels hidden, and was tasked with assigning the tags and labels they deemed most appropriate. This way, we end up with four unique tag and label sets for each pair. Finally, we perform an aggregation of the proposed annotations and jointly go through any and all examples that contain at least one tag or label that does not reach a majority (i.e., counts fewer than three votes). We resolve disagreements by adding or removing annotations, thus ensuring internal consistency within the dataset. At the end of the process, we end up with 1 049 samples, of which 110 contain more than a single label. The dataset as a whole contains 454 Neutral, 414 Entailment and 292 Contradiction assignments. In parallel to the above, we re-annotate the Greek version of FraCaS (Amanaki et al., 2022) according to our format specifications, skipping directly to the third stage of the pipeline described earlier. The derived dataset contains an additional 713 examples, revealing 30 of them as multi-labeled, with a label distribution of 264 Neutral, 345 Entailment and 134 Contradiction. We serve the two datasets independently, but as a single resource.

### Repurposing the Lexicon

Transitioning to our next objective, a resource targeting lexical semantics, we immediately run into a roadblock. The construction of a sufficiently large dataset centered on the _word_ requires a prohibitive investment of time and effort. Facing the very same challenge, contemporary contributions have established the practice of turning to either machine translation or crowd-sourced labor, with hired workers being overlooked by applied practitioners (at best, if at all). Albeit pragmatic, this approach compromises the quality of the generated resources, dismissing domain expertise in the pursuit of improved cost efficiency (a prerequisite, in turn, for quantity). As an alternative, we redirect our focus towards a frequently-overlooked traditional resource: the _lexicon_. Reputable lexica offer a rare mixture of linguistic rigor and extensive coverage virtually for free, making them a prime candidate for adaptation and repurposing into modern applications. In what follows, we showcase how this insight can be put into practice, enacting a sensible and effective way forward for under-resourced languages.
We begin by procuring a copy of the Dictionary of Standard Modern Greek (Triantafyllides, 1998).2 The dictionary is provided in the form of a minimally structured SQL database, associating each lemma with its lexical entry, a raw text field containing a periphrastic definition and a few usage examples for each of its senses. Unfortunately, senses and examples are not structurally differentiated by the database, but are rather presented in the same field, further intertwined with supplementary details such as usage conditions, morphological information, etc. Instead, the database relies on a combination of formatting strategies, including enumeration and styling, to differentiate between definitions and examples. However, these strategies are not consistently applied across the lexicon. To make matters worse, definitions and examples are often woven together (that is, they materialize as non-contiguous strings), and can at times follow ad-hoc hierarchical arrangements. Consequently, even though the textual content effectively conveys information visually, parsing this content with traditional methods proves nigh impossible. As a workaround, and considering that parsing unstructured data is a staple task for large language models, we employ ChatGPT (Brown et al., 2020) for the problem at hand.

Footnote 2: Hosted online at www.greek-language.gr/greekLang/modern_greek/tools/lexica/triantafyllides.

Our pipeline is as follows. We first utilize the existing database fields to filter the lexical entries that seem to contain at least one example. This results in a collection of 28 831 unique lemmata, each mapped to its lexical entry. We randomly sample 100 of them, which we then manually convert into a succinct and minimally structured JSON format, specifying (i) the _lemma_ and (ii) a list of _senses_, each sense structured as a _definition_ and a list of _examples_. We put extra effort into disentangling hierarchical senses, repeating the elided parts of non-contiguous definitions and examples and removing enumeration identifiers. The yield of this process then serves as the training set for a quick one-shot tuning of ChatGPT, the input being the raw text (stripped of HTML tags for token economy) and the target being the structured JSON representation. We pass all remaining entries through the trained model. From the model output, we filter out senses that contain no examples and entries that contain fewer than two senses, and end up with 16 079 examples spread over 7 677 senses and 2 512 entries. Finally, we manually check each and every example and entry, throwing away the occasional parsing error, homogenizing the presentation and fixing the JSON formatting as needed. The result is 14 416 examples spread over 6 896 senses and 2 326 entries, from which we derive the three evaluation tasks described in the subsections to follow.

**The Role of ChatGPT.** Our decision to incorporate a large language model into our data preparation process does not entail any of the epistemological risks commonly associated with generative models and/or data augmentation. In our use case, the model does not need a deep understanding of the Greek language, the expertise of a trained linguist, or the creativity required of a human annotator, as it's neither generating new examples nor annotating existing ones per se. Rather, it suffices for it to recognize the inconsistent yet intuitive hierarchical enumeration patterns present in the data, and to convert them into recurring structures with consistent formatting.
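For concreteness, a hedged sketch of the target format and of the filtering step just described (field names are our own illustration; the released interfaces may differ):

```python
# Hypothetical field names mirroring the structure described above:
# a lemma plus a list of senses, each pairing a periphrastic
# definition with its usage examples.
entry = {
    "lemma": "...",
    "senses": [
        {"definition": "...", "examples": ["...", "..."]},
        {"definition": "...", "examples": ["..."]},
    ],
}

def clean(entry):
    """Apply the filtering described above: drop senses without
    examples, then drop entries left with fewer than two senses."""
    senses = [s for s in entry["senses"] if s["examples"]]
    return {**entry, "senses": senses} if len(senses) >= 2 else None
```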
Large language models' attested proficiency in this scenario aligns them perfectly with our needs, allowing us to utilize the authoritative resource of the lexicon while minimizing tedious human labor and cost expenditure. Indeed, our inspection of the model's output shows a generally high-quality translation, strictly faithful to the original input, with only a few minor occasional inconsistencies.4

Footnote 4: The model is sometimes overeager, extending the output specification with additional fields, in what seems like an attempt to capture all the information provided in the raw input.

Figure 1: NLI examples 761 and 879, showcasing multiple inferences. In the first example, φιλώ [/fiˈlo/] _(to kiss)_ can be a unidirectional or a reciprocal action (i.e., _to give a kiss to_ vs. _to exchange kisses with_). In the second example, pro-drop allows for two possible readings, where either Giorgos or Maria can be the subject of the embedded clause.

#### 2.2.1 Words-in-Context

The first task is essentially a replica of the Words-in-Context (WiC) part of SuperGLUE. It is formulated as a binary classification problem, where the system is presented with two sentences containing the same (potentially polysemous) word, and is tasked with telling whether the two occurrences correspond to the same meaning or not. In order to successfully resolve the task, the system needs a dynamic embedding strategy, capable of disambiguating words depending on their surrounding context. As such, it serves as a primitive test suite for the lexical semantic capacities of bidirectional transformers. Obtaining the task from our dataset is trivial; it suffices to consider the sum of the product space of examples for each lexical entry (with the diagonals removed), zipped with a boolean sign indicating whether the two examples stem from the same sense. Doing so yields 117 662 data points (i.e., one order of magnitude larger than the corresponding fragment of SuperGLUE), with a label ratio of 1 positive to about 6 negative.

#### 2.2.2 Sense Selection

The above formulation is straightforward, and directly compatible with the standard sequence classification pipeline commonly employed by NLP architectures. As such, it makes for an accessible entry point for evaluation. However, it represents a dramatic simplification of the disambiguation problem, requiring two usages in juxtaposition and providing little information on _what_ the sense of each usage is. Our source dataset allows us to do better. Given that we have periphrastic definitions for all5 the possible meanings of each word, we can reframe the task as sense selection. Given a word, the set of its possible meanings and a usage context, we can prompt a model to predict the meaning most likely employed in the given context. Using periphrastic definitions as a proxy for meaning induces a better informed and more realistic evaluation task, requiring and benefiting from high-quality contextual representations both at the lexical and the sentential level (since the word under scrutiny will now need to be contrasted to the full set of "meanings"). It is also more faithful to the source dataset, since the count of data points is now in alignment with the number of distinct usage examples (as duplication is no longer necessary). Each of the 14 416 points is associated with 3.8 candidate definitions, on average.

Footnote 5: Excluding the ones removed by the filtering process.
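Under the entry format sketched earlier, the Words-in-Context projection amounts to a few lines; the following is our illustration, not the released code:

```python
def wic_pairs(entry):
    """Project one structured entry onto the WiC task: the product
    space of its usage examples with the diagonal removed, each pair
    signed by whether the two occurrences stem from the same sense."""
    # Flatten the entry into (example, sense index) pairs.
    tagged = [(ex, i) for i, sense in enumerate(entry["senses"])
              for ex in sense["examples"]]
    return [((a, b), si == sj)
            for k, (a, si) in enumerate(tagged)
            for l, (b, sj) in enumerate(tagged)
            if k != l]
```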
#### 2.2.3 Metaphor Detection

Our projection of the raw textual entries into structured JSON entries has done away with most fields irrelevant to word disambiguation. However, we have consciously kept markers of metaphoric usage, and homogenized their presentation.6 This enables us to filter senses (and by extension, usage examples) that are used metaphorically, providing the means for another kind of task altogether: metaphor detection. Making the simplifying assumption that metaphor is only present in those examples where the word defined is used in a metaphoric sense, we end up with 1 017 examples of metaphor (7% of the total of all examples) concentrated around 571 senses and associated with 499 entries, yielding a heavily imbalanced dataset for metaphor detection.

Footnote 6: They are indicated with (μτφ.) in the periphrastic definition.

## 3 Experimental Baselines

To quantitatively evaluate the difficulty of the tasks described in the previous section, and in order to facilitate future research in this direction, we set up some experimental baselines using the current state-of-the-art machinery available for modern Greek. All our experiments rest on the tried and tested fine-tuning process for BERT-like models (Kenton and Toutanova, 2019), using Greek BERT as our universal core model (Koutsikakis et al., 2020).

### Natural Language Inference

Despite our efforts to create a comprehensive evaluation suite for natural language inference, the practical use of our dataset presents several challenges. First and foremost, its comparatively small size renders it unsuitable for fine-tuning purposes. This becomes especially problematic considering the lack of NLI datasets tailored specifically for Greek. Compounding these challenges is the fact that our dataset utilizes a multi-label setup, which complicates direct cross-dataset evaluations. To address these challenges, we have chosen to leverage XNLI (Conneau et al., 2018), a cross-lingual dataset for language inference of substantial size; while XNLI was not initially designed for training purposes, it presents a viable solution considering the constraints we face. We employ an iterative approach when splitting our dataset, aiming for a 30/70 division and taking care to keep the ratio consistent for each of the linguistic tags used. We then fine-tune BERT, training on the joined test set of XNLI and the smaller of the two splits, evaluating on the dev set of XNLI, and testing on the larger split. This setup accounts for domain adaptation, while allowing us to frame the problem as multi-label classification (where the XNLI problems are "coincidentally" single-label). Concretely, we independently contextualize the premise and hypothesis sentences, concatenate their [CLS] tokens and project them into three independent logits via an intermediate feed-forward layer of dimensionality 64, gated by the GELU activation function (Hendrycks and Gimpel, 2016). We train using AdamW (Loshchilov and Hutter, 2018) with a batch size of 32 and a learning rate of \(10^{-5}\). Despite heavy regularization (weight decay of 0.1, dropout of 0.33 and early stopping), the model is quick to overfit the training set, with development set performance lagging significantly behind (despite the matching domain). Since accuracy is no longer a suitable performance metric, owing to the multi-label setup we have adopted, we report per-class precision, recall and F1 scores over the test set instead, averaged over three repetitions.
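A minimal PyTorch sketch of this head, assuming a HuggingFace-style encoder exposing `last_hidden_state` (the paper's exact training code is not reproduced here):

```python
import torch
from torch import nn

class MultiLabelNLIHead(nn.Module):
    """Sketch of the head described above; `encoder` is assumed to be
    a BERT-like module with the [CLS] vector at position 0, and the
    hidden dimensionality is that of Greek BERT."""

    def __init__(self, encoder, dim=768):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Linear(2 * dim, 64),
            nn.GELU(),
            nn.Dropout(0.33),
            nn.Linear(64, 3),  # three independent logits: N / E / C
        )

    def forward(self, premise, hypothesis):
        # Contextualize premise and hypothesis independently, keep [CLS].
        cls_p = self.encoder(**premise).last_hidden_state[:, 0]
        cls_h = self.encoder(**hypothesis).last_hidden_state[:, 0]
        return self.head(torch.cat([cls_p, cls_h], dim=-1))

# Multi-label training: one sigmoid/binary cross-entropy term per label.
# loss = nn.BCEWithLogitsLoss()(logits, target_sets.float())
# optim = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.1)
```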
The results, presented in Table 1, are largely underwhelming, indicative of the difficulty of the dataset and confirming the inadequacy of (the Greek fragment of) XNLI as a training and evaluation resource - a fact also noted by Evdaimon et al. (2023) and consistent with the comparatively low scores of Amanaki et al. (2022).

| **Label** | **Prec.** | **Rec.** | **F1** |
| --- | --- | --- | --- |
| Unkn. | 0.32±4.9% | 0.41±1.0% | 0.35±3.7% |
| Ent. | 0.52±2.8% | 0.46±2.7% | 0.48±1.1% |
| Contr. | 0.20±0.7% | 0.26±7.6% | 0.23±0.6% |

Table 1: Per-label test metrics for NLI.

To gain a better understanding of the trained model's behavior across different linguistic phenomena, we group samples according to their linguistic tags, and measure the average Jaccard similarity coefficient between predicted and true labels (i.e., the length of the intersection over the length of the union between the two sets). As Table 2 suggests, performance is consistently low across the board. The model seems to especially struggle with recognizing the effect of embedded clauses (regardless of whether they are restrictive or not), focus associating operators, non-intersective adjectives, hyponymy and hypernymy, antonymy and negation.

| **Tag** | **Jaccard** |
| --- | --- |
| **Logic** | |
| Disjunction | 0.32±3.2% |
| Conjunction | 0.41±1.6% |
| Negation: Single | 0.30±1.6% |
| Negation: Multiple | 0.46±5.6% |
| Negative Concord | 0.32±0.4% |
| Comparatives | 0.42±3.5% |
| Quantification: Existential | 0.43±1.0% |
| Quantification: Universal | 0.36±1.3% |
| Quantification: Non-Standard | 0.37±2.8% |
| Temporal | 0.32±1.1% |
| Conditionals | 0.32±3.2% |
| **Lexical Entailment** | |
| Redundancy | 0.33±1.1% |
| Factivity: Factive | 0.41±2.2% |
| Factivity: Non-Factive | 0.32±4.0% |
| Intersectivity: Intersective | 0.38±4.2% |
| Intersectivity: Non-Intersective | 0.29±7.4% |
| Restrictivity: Restrictive | 0.28±2.9% |
| Restrictivity: Non-Restrictive | 0.27±4.0% |
| **Lexical Semantics** | |
| Synonymy | 0.46±2.9% |
| Hyponymy | 0.47±1.8% |
| Hypernymy | 0.29±5.6% |
| Antonymy | 0.30±3.2% |
| Meronymy | 0.50±2.5% |
| Morph. Modification | 0.33±1.8% |
| FAO | 0.28±1.3% |
| Symmetry/Collectivity | 0.44±4.1% |
| **Predicate-Argument Structure** | |
| Alternations | 0.38±2.0% |
| Ambiguity | 0.40±2.9% |
| Anaphora/Coreference | 0.39±0.1% |
| Ellipsis | 0.44±1.7% |
| Core Arguments | 0.55±5.0% |
| **Common Sense/Knowledge** | 0.36±0.3% |

Table 2: Per-tag test metrics for NLI. The tag hierarchy follows along Wang et al. (2019), with few divergences. For Logic, we replace Double Negation with Multiple Negations and differentiate it from Negative Concord. We add a tag for Non-Standard Quantification, and drop the Numeral/Interval tag. For Lexical Entailment, we substitute Morphological Negation with the (more general) Morphological Modification. We subcategorize Lexical Semantics, specifying left-to-right or premise-to-hypothesis (directional) lexical relations. Finally, we merge Common Sense and World Knowledge into a single meta-tag.

### Sense Disambiguation

For both variants of the sense disambiguation task, we split the dataset's examples into three subsets: a 60% training set, a 20% development set, and a 20% test set. Additionally, we designate 10% of the total lexical entries as test-only, and move the associated examples from the training set to the test set. This will allow us to evaluate the model's performance separately on in- and out-of-vocabulary examples (IV and OOV, respectively), i.e. involving words that have or have not been encountered during training. To find the relevant word within each example, we lemmatize examples using SpaCy (Honnibal et al., 2020, model el_core_news_sm) and identify the element within each sequence that corresponds to the source entry's lemma, falling back to the element with the minimal edit distance if no absolute match can be found. Following tokenization, this permits us to create a boolean mask for each example, selecting only those tokens that are associated with the word/lemma of interest.

**Words-in-Context.** For the WiC variant, we gather minibatches consisting of all examples that belong to the same lexical entry. We contextualize examples independently, and extract the representations of the words of interest by mean pooling the last layer representations of the tokens selected by each example's mask. We then compute pairwise similarity scores between pairs in the cartesian
product of examples by applying the dot-product operator on the extracted representations, scaling the results by the inverse of the square root of the model's dimensionality. These similarity scores serve as logits for binary cross-entropy training, predicting whether the two occurrences of the word share the same sense across the two examples.

**Sense Selection.** For the sense selection variant, we create batches by (i) sampling over training examples and (ii) constructing the set union of all related (candidate) definitions, together with a binary boolean relation specifying whether an example and a definition belong to the same entry. We then independently contextualize all examples and definitions, extracting contextual word representations for each example as before, and taking each definition's [CLS] token representation as a proxy for the sense's meaning. We compare each word (in the context of a single example) to each meaning using the same scaled dot-product mechanism as before, masking out invalid pairs according to the example-to-definition relation mentioned earlier. We finally obtain softmax scores for each example yielding a probability distribution over candidate meanings, which serves as the model output for standard negative log-likelihood training.
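The shared scoring mechanism of the two variants can be sketched as follows (our illustration, assuming pre-pooled representations):

```python
import math
import torch

def sense_logits(word_vecs, sense_vecs, valid):
    """Scaled dot-product comparison described above.

    word_vecs  -- (E, d) pooled token representations of the word in
                  focus, one row per usage example in the batch
    sense_vecs -- (S, d) [CLS] representations of candidate definitions
    valid      -- (E, S) boolean relation: does this definition belong
                  to the example's own lexical entry?
    """
    scores = word_vecs @ sense_vecs.T / math.sqrt(word_vecs.shape[-1])
    # Mask out senses of foreign entries before the softmax.
    return scores.masked_fill(~valid, float("-inf"))

# Negative log-likelihood training over the per-example distributions
# (gold = index of the correct sense per example):
# loss = torch.nn.functional.cross_entropy(sense_logits(w, s, m), gold)
```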
We train on either task using AdamW with a learning rate of \(10^{-5}\), a weight decay of \(10^{-2}\) and a 25% dropout applied at the dot-product indices, and perform model selection on the basis of development set accuracy; once more, development and training set performances quickly diverge after a few epochs. At this point, we note that both tasks use the same notion of sense agreement and both our models approximate it by means of the same vector operation; their difference lies in the fact that one compares a word occurrence to a word occurrence (or: an example to an example), whereas the other compares a word occurrence to a set of "meanings" (or: an example to all candidate definitions) (Hauer and Kondrak, 2022). Intuitively, it would make sense that a model that has acquired the sense selection task should be able to perform adequately on the WiC task without further training; indeed, if two word occurrences select the same meaning (i.e., maximize their similarity to the same vector), they must also be similar to one another. To test this hypothesis, we simply apply the model obtained by fine-tuning on the sense selection task, except now recasting the test set in the form of the WiC task. We report repetition-averaged aggregates in Table 3. Performance is not astonishing, but remains well above the random baselines for both tasks (25% for sense selection and 16.7% for WiC), indicating that the core model has some capacity for learning and generalization. Sense selection may initially appear as the more challenging of the two tasks, seeing as it involves selecting one target out of multiple options. Nonetheless, the model achieves a consistently higher absolute accuracy there; evidently, comparing one example to a fixed set of senses is easier than comparing two ad-hoc usage examples. To our surprise, we find that the task transfer setup works straight out of the box, to the point where the transfer model in fact outperforms the in-domain model without so much as recalibrating the sigmoid classification threshold. One might hypothesize that this is due to the model memorizing a fixed set of senses and their representations. However, this is not entirely the case: interestingly, accuracy now improves instead of declining in the OOV fragment of the test set. We interpret this as evidence that the sense selection formulation produces a higher quality error signal, which induces a better informed disambiguation prior during fine-tuning, allowing the (more rudimentary) WiC task to be captured without additional effort.

### Metaphor Detection

The last task, metaphor detection, is also the simplest one, being essentially a case of sequence classification. We start by filtering all entries that have at least one metaphoric sense, so as to alleviate the severe class imbalance of the full dataset. From the 499 filtered entries, we reserve 5% for use as an OOV test set. We extract all examples from all entries, and assign to each example a boolean label, indicating whether the sense the example is associated with is metaphoric or not. This produces 3 015 examples (2 856 IV and 159 OOV), with a class distribution of about 1 positive to 2 negative. We proceed with training using once more a 60/20/20 split on the IV set. We attach a feedforward classifier to the contextualized [CLS] token and train using binary cross entropy, optimizing with the same hyper-parameter setup as before.
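A sketch of such a classifier, with the layer widths assumed by analogy to the NLI head rather than taken from the paper:

```python
import torch
from torch import nn

class MetaphorClassifier(nn.Module):
    """Binary sequence classifier over the contextualized [CLS] token,
    as described above; `encoder` is again an assumed Greek BERT."""

    def __init__(self, encoder, dim=768):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Sequential(
            nn.Linear(dim, 64), nn.GELU(), nn.Dropout(0.33),
            nn.Linear(64, 1),  # single logit: metaphoric or literal
        )

    def forward(self, batch):
        cls = self.encoder(**batch).last_hidden_state[:, 0]
        return self.classifier(cls).squeeze(-1)

# loss = nn.BCEWithLogitsLoss()(model(batch), labels.float())
```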
Our results, presented in Table 4, showcase a good ability to recognize metaphoric senses in the words trained on, and a decent generalization potential to unseen words. Unlike prior experiments, we detect a high variability in the results between repetitions; one model instance has a moderate performance that does not differ between the two subsets of the test set, whereas another achieves a near-perfect score on the IV subset while being barely above the random baseline in the OOV subset.

## 4 Related Work

NLI is widely considered one of the core problems towards natural language understanding, with a plethora of evaluation suites (Bowman et al., 2015; Conneau et al., 2018; Wang et al., 2018, 2019; Nie et al., 2020) which continue to pose a significant challenge for current state-of-the-art models (Glockner et al., 2018; Talman and Chatzikyriakidis, 2019; Belinkov et al., 2019; McCoy et al., 2019; Richardson et al., 2020, _inter alia_). Like GLUE and SuperGLUE, our inference examples come packed with linguistic tags to facilitate diagnostic analysis. Unlike other datasets, our examples may specify more than one inference label, accounting for all possible sentence readings. At the time of writing, other than a fragment of XNLI (produced by automatic translation), the only NLI dataset for Greek we are aware of is by Amanaki et al. (2022) (which we adapt here to our format). Sense repositories, i.e., mappings between words and sets of meanings, are often framed as dictionary-like structures (Fellbaum, 1998; Navigli and Ponzetto, 2012). Our dataset stands out in providing both a definition and a collection of examples for each sense, allowing the incorporation of either or both into various possible tasks and model pipelines; we show three concrete examples of how this can be accomplished. The tasks obtained, namely words-in-context, sense selection and metaphor detection, are of prime importance for the experimental validation of the lexical semantic capacities of language processing systems (Ma et al., 2021; Zhang and Liu, 2023; Choi et al., 2021; Sengupta et al., 2022; Luo et al., 2023). To the best of our knowledge, this is the first dataset of its kind, and among the first lexical resources for Greek in general.

## 5 Conclusions and Future Work

Our vision is that of an open-source, community-owned, dynamically adapted, gold-standard suite that enables the linguistically conscious evaluation of the capacities of Greek language models. We have presented four novel tasks and corresponding baselines towards that goal. While our results aren't directly comparable to existing benchmarks, they do highlight the significant challenge our tasks present. This underscores the urgency for accelerated progress within the Greek NLP ecosystem to stay aligned with contemporary mainstream research. Pending community feedback, we hope to enrich the existing datasets by scaling them up, correcting possible artifacts and extending the language domain with regional and dialectal variations. Possible tasks that we would like the project to eventually incorporate include gender bias detection, paraphrase identification, and natural language inference with explanations, among others. We are curious to continue experimenting with ways to utilize traditional resources, and exploring their potential as dataset generators for under-resourced languages in conjunction with large language models.
| **Subset** | **# examples** | **Accuracy** | **# pairs** | **Accuracy¹** | **Accuracy²** |
| --- | --- | --- | --- | --- | --- |
| IV | 2 494 | 0.63±0.20% | 8 274 | 0.50±0.41% | 0.51±1.7% |
| OOV | 1 289 | 0.64±0.41% | 9 954 | 0.48±1.77% | 0.54±0.2% |
| Total | 3 784 | 0.63±0.29% | 18 678 | 0.49±1.09% | 0.53±0.86% |

Table 3: Test set sizes and performance metrics for the two sense disambiguation tasks. The first two data columns refer to Sense Selection, the remaining three to Words-in-Context. ¹In-domain evaluation of the words-in-context model. ²Transfer evaluation of the sense selection model.

| **Subset** | **# Examples** | **Accuracy** |
| --- | --- | --- |
| IV | 572 | 0.84±6.29% |
| OOV | 159 | 0.71±2.94% |
| Total | 731 | 0.82±4.29% |

Table 4: Test set performance on the metaphor detection task.

### Limitations

The NLI dataset's limited size renders it inadequate as a comprehensive resource for training and evaluating NLI systems from scratch. Furthermore, the examples were crafted by the authors of this paper, who belong to a distinct demographic, unavoidably introducing our own cultural, sociopolitical, and linguistic biases. The focus is exclusively on standard modern Greek, omitting examples of regional or dialectal language use. Finally, while the tag set employed may provide valuable information, it offers only a coarse and incomplete summary of the full range of linguistic phenomena observed in the wild. The lexical dataset, conversely, is not indicative of our opinions as authors; the source dictionary may contain language use that is outmoded or socially exclusive. The dataset structure is sufficient for us to extract the three tasks we have presented, but might prove lacking for more complex tasks (like tasks requiring hierarchical or clustered sense arrangements, for instance). Despite efforts to ensure semantic accuracy in every entry, sense, and example, occasional mistakes may have gone unnoticed. Users should approach the resource critically, keeping this in mind. Regarding our baselines, we have experimented with only a single model. While we acknowledge this might entangle the effects of dataset difficulty and model robustness, we justify ourselves in refraining from experimenting with more models, since this is neither the prime concern of this paper, nor a practice that we necessarily agree with.

## Acknowledgements

We would like to acknowledge the Centre for the Greek Language for allowing us access to the digitized version of the dictionary. The first author would like to thank Savvas Papadopoulos for sharing his technical expertise on using ChatGPT effectively. The project has benefited from a grant from the Special Account for Research Funding of the Technical University of Crete (grant number: 11218).
2301.00144
MHD simulation of Solar Eruption from Active Region 11429 Driven by Photospheric Velocity Field
Data-driven simulation is becoming an important approach for realistically characterizing the configuration and evolution of solar active regions, revealing the onset mechanism of solar eruption events and hopefully achieving the goal of accurate space weather forecast, which is beyond the scope of any existing theoretical modelling. Here we performed a full 3D MHD simulation using the data-driven approach and followed the whole evolution process from quasi-static phase to eruption successfully for solar active region NOAA 11429. The MHD system was driven at the bottom boundary by photospheric velocity field, which is derived by the DAVE4VM method from the observed vector magnetograms. The simulation shows that a magnetic flux rope was generated by persistent photospheric flow before the flare onset and then triggered to erupt by torus instability. Our simulation demonstrates a high degree of consistency with observations in the pre-eruption magnetic structure, the time scale of quasi-static stage, the pattern of flare ribbons as well as the time evolution of magnetic energy injection and total unsigned magnetic flux. We further found that an eruption can also be initiated in the simulation as driven by only the horizontal components of photospheric flow, but a comparison of the different simulations indicates that the vertical flow at the bottom boundary is necessary in reproducing more realistically these observed features, emphasizing the importance of flux emergence during the development of this AR.
Xinyi Wang, Chaowei Jiang, Xueshang Feng
2022-12-31T07:46:28Z
http://arxiv.org/abs/2301.00144v1
# MHD simulation of Solar Eruption from Active Region 11429 Driven by Photospheric Velocity Field

###### Abstract

Data-driven simulation is becoming an important approach for realistically characterizing the configuration and evolution of solar active regions, revealing the onset mechanism of solar eruption events and hopefully achieving the goal of accurate space weather forecast, which is beyond the scope of any existing theoretical modelling. Here we performed a full 3D MHD simulation using the data-driven approach and followed the whole evolution process from quasi-static phase to eruption successfully for solar active region NOAA 11429. The MHD system was driven at the bottom boundary by photospheric velocity field, which is derived by the DAVE4VM method from the observed vector magnetograms. The simulation shows that a magnetic flux rope was generated by persistent photospheric flow before the flare onset and then triggered to erupt by torus instability. Our simulation demonstrates a high degree of consistency with observations in the pre-eruption magnetic structure, the time scale of quasi-static stage, the pattern of flare ribbons as well as the time evolution of magnetic energy injection and total unsigned magnetic flux. We further found that an eruption can also be initiated in the simulation as driven by only the horizontal components of photospheric flow, but a comparison of the different simulations indicates that the vertical flow at the bottom boundary is necessary in reproducing more realistically these observed features, emphasizing the importance of flux emergence during the development of this AR.

Magnetohydrodynamic (MHD) -- Sun: corona -- Methods: numerical -- Sun: magnetic fields

## 1 Introduction

As driven by solar eruptions, the solar-terrestrial environment often experiences variations, which are known as space weather, and forecasting the space weather precisely is not only an important scientific topic but can also avoid damage to the sensitive on-ground and space-based critical infrastructures. Though many theoretical models have been proposed and significant progress has been made in understanding the triggering mechanism of solar eruptions (Forbes, 2000; Chen, 2011; Schmieder et al., 2013; Priest, 2014), reproducing the whole life-span from quasi-static stage to eruption using numerical models constrained and driven by observed vector magnetograms possesses unprecedented capabilities in revealing the onset mechanism of real eruption events, and can potentially be used for accurate space weather forecasting (Jiang et al., 2022; Jiang, 2022). A previous study reproduced the whole process of energy accumulation and release successfully (Jiang et al., 2016), showing that an MHD system can be driven to erupt by inputting a time series of vector magnetograms at the bottom boundary (B-driven). There are also other data-driven models, in which the evolution of the MHD system is driven by the electric field (E-driven, e.g., Cheung and DeRosa, 2012; Hayashi et al., 2018; Pomoell et al., 2019; Price et al., 2019) or the velocity field (V-driven, e.g., Hayashi et al., 2019; Guo et al., 2019; Liu et al., 2019; He et al., 2020; Zhong et al., 2021) on the photosphere (bottom boundary). Though the B-driven method can fully match the magnetogram, it will introduce considerable errors of magnetic divergence from the bottom boundary.
This shortcoming vanishes in the E-driven model; however, deriving both the induction and potential components of the electric field on the photosphere is not an easy task (Fisher et al., 2010, 2012, 2015, 2020), and the photospheric flow also needs to be properly set to follow Ohm's law. In most of the theoretical models of solar eruption, the key structure in favor of eruption is assumed to be formed through the movement of the footpoints of magnetic field lines (Moore and Labonte, 1980; Moore and Roumeliotis, 1992; Antiochos et al., 1999; Lin and Forbes, 2000; Jiang et al., 2021), which is driven by the horizontal flow, while the vertical component of the photospheric flow is responsible for the flux emergence process. Therefore, with the photospheric velocity field determined, the photospheric magnetic field can be generated self-consistently. Furthermore, in the V-driven approach, there is no need to solve the complex momentum equation at the bottom boundary (which is the most time-consuming part in solving the MHD equations). Due to these advantages, the V-driven method has attracted many previous studies to focus on this topic. For example, with the velocity field derived by the DAVE4VM method (Schuck, 2008), Hayashi et al. (2019) used the projected normal characteristics method to update the physical variables other than velocity at the bottom boundary. However, the total magnetic energy kept almost the same level as that of the initial state, without obvious magnetic energy injection. Jiang et al. (2021) updated the magnetic field by directly solving the magnetic induction equation at the bottom boundary, and their model can inject magnetic energy from the bottom boundary successfully. He et al. (2020) drove the magnetic evolution by inputting the DAVE4VM velocity field and vector magnetograms simultaneously. The formation process of a magnetic flux rope (MFR) was obtained in their model. Unfortunately, these simulations didn't drive the system to erupt. The only work we know of that obtained an eruption driven by a velocity field derived from observation is Kaneko et al. (2021). The velocity field was derived from the electric field on the photosphere using Ohm's law and then input at the bottom boundary of their zero-beta model to drive the system to erupt. The two eruptions they produced were identified from the evolution curves of the magnetic and kinetic energies; however, the kinetic energy showed an overall increase without obvious quasi-static evolution during the pre-eruption stage. Before the first eruption, the kinetic energy was already comparable with its peak during the first eruption, and the amount of magnetic energy released was also too small, which does not show the feature of a typical eruption event, i.e., the impulsive conversion of a large amount of magnetic energy into kinetic energy. As described above, the whole energy accumulation process from quasi-static evolution (of typically tens of hours) to impulsive eruption has not been realized in a self-consistent way using a V-driven MHD model, and this is one of the motivations of this work. In this Letter, we applied a V-driven model to investigate the evolution and eruption of a well-studied active region (AR), NOAA 11429. Previous studies found persistent shearing flow and flux cancellation near the main polarity inversion line (PIL) of this AR (Shimizu et al., 2014; Zheng et al., 2017), which were suggested to be responsible for the eruptions on 2012 March 7, 9 and 10 (Dhakal et al., 2018, 2020).
An analysis of the MFR reconstructed from vector magnetograms using the nonlinear force-free field (NLFFF) model suggests that the homologous eruptions were triggered by torus instability (TI) of the MFR (Chintzoglou et al., 2015). Zhang et al. (2021) suggested the helical kink instability may also take effect. Nevertheless, whether these mechanisms were at work requires further study using dynamic modeling of this AR's evolution and eruption, which is absent in all the previous studies and is the other motivation of this work. In our simulation, the dynamic process from the beginning of 2012 March 4 to the eruption on March 5 in this AR was reproduced self-consistently as driven by the photospheric velocity derived from the vector magnetograms using the DAVE4VM method. Our simulation shows that an MFR was generated near the main PIL before the flare onset and that the initiation of this eruption event depended mainly on TI of the preformed MFR.

## 2 Data and Model

AR 11429 showed a complex \(\beta\gamma\delta\) configuration and was very flare-productive, producing 3 X-class flares from 2012 March 5 to 7. It first appeared on the eastern solar limb on March 4 and was located on the eastern part of the solar disk before March 8. During this period, the AR kept developing, as characterized by the increasing total unsigned magnetic flux, indicating obvious flux emergence by vertical flow on the photosphere. Since it was the first X-class flare of this AR, here we focus on the initiation process of the X1.1 flare (which is accompanied by a halo CME moving at a speed of 1531 km s\({}^{-1}\)) around 04:00 UT on March 5, as shown in the white box in Figure 1A. Before the flare onset, there was persistent shearing flow near the main PIL (Figure 1B) and, as a result, the horizontal magnetic field there was highly sheared (Figure 1C), which indicates that a large amount of free magnetic energy was stored, ready for an eruption. A hot loop first erupted away, as shown in AIA 94 Å (Lemen et al., 2012) at around 03:31 UT (Figure 1D and E), and after that a pair of hook-shaped flare ribbons appeared near the main PIL in AIA 1600 Å at 03:36 UT (Figure 1F), i.e., the flare event started. To understand the formation of the pre-eruptive coronal magnetic field and the triggering mechanism of this eruption, we used the DARE-MHD model (Jiang et al., 2016) to study the dynamic evolution of this AR. To save computational time, the strength of the magnetic field from the magnetograms was reduced by a factor of 25 before being input into our code. The initial plasma density was set as a hydrostatic isothermal model with a fixed temperature of \(T=1\times 10^{6}\) K and a modified solar gravity (Jiang et al., 2021) to get a plasma background that mimics the real environment in the solar corona, based on two key parameters, the plasma \(\beta\) and the Alfvén speed (in particular, the minimum \(\beta\) is \(6.3\times 10^{-4}\) and the maximum Alfvén speed \(V_{\rm A}\sim 4800\) km s\({}^{-1}\) in the final equilibrium we obtained below). We chose to use the magnetograms of the HMI SHARP data set (Schou et al., 2012; Bobra et al., 2014) at 00:00 UT on March 4 to reconstruct the initial magnetic field, since there was no obvious MFR at that time. This magnetogram was first smoothed using Gaussian smoothing with a FWHM of 6 pixels, and an NLFFF model was extrapolated from this smoothed magnetogram by our CESE-NLFFF-MHD code (Jiang and Feng, 2012, 2013).
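The two parameters characterizing the background plasma have their standard definitions; as a minimal sketch in SI units (our illustration, not part of the DARE-MHD code):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def plasma_beta(p, B):
    """Ratio of gas to magnetic pressure, beta = 2 mu0 p / B^2."""
    return 2 * MU0 * p / B**2

def alfven_speed(B, rho):
    """Alfven speed V_A = B / sqrt(mu0 rho)."""
    return B / np.sqrt(MU0 * rho)
```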
Since the CESE-NLFFF-MHD code (like many other NLFFF codes) does not give a perfect force-free solution but one with residual Lorentz forces, we input the extrapolated field into the DARE-MHD model, along with the initial background plasma, and let the MHD system relax until the kinetic and magnetic energies were almost unchanged, i.e., an MHD equilibrium was obtained and the initial state was ready. At the bottom boundary, we solve the magnetic induction equation with the velocity field derived by the DAVE4VM method to update the magnetic field on the photosphere. The DAVE4VM velocity was strengthened by a factor of 13.7 (determined by the ratio of the time-series magnetograms' original time cadence of 720 seconds to the time cadence in our simulation of \(0.5\times 105\) seconds, i.e., \(\frac{720}{0.5\times 105}\)) to speed up our simulation, and thus the time scale of the quasi-static evolution prior to the eruption onset is shortened by the same factor. At the side and top boundaries, all the variables are extrapolated from the neighboring inner points with zero gradient along the normal direction of the boundary surface, and the normal component of the magnetic field is further modified by the divergence-free condition. The Powell source terms and diffusion control terms were used to deal with the divergence error of the magnetic field, as described in Jiang et al. (2010). We set the computational domain sufficiently large, as \([-368,368]\) Mm in both the \(x\) and \(y\) directions and \([0,736]\) Mm in the \(z\) direction, with grid resolution varying from \(1^{\prime\prime}\) to \(8^{\prime\prime}\) using adaptive mesh refinement. The highest resolution is used mainly for the regions with strong magnetic gradients and current density, in particular, the current sheets (CSs). No explicit value of magnetic resistivity was used in our simulation; the magnetic reconnection was controlled by the resistivity of the numerical method only, which better mimics a low-resistivity plasma. As the total unsigned flux of this AR kept increasing before the flare onset, we energized the MHD system by the full 3D DAVE4VM velocity field \((v_{x}^{D},v_{y}^{D},v_{z}^{D})\) (referred to as the V3D simulation) and by its horizontal component \((v_{x}^{D},v_{y}^{D},0)\) (V2D), respectively, and a comparison of the results of these two simulations will show the importance of flux emergence through the vertical flow on the photosphere.

## 3 Results

### Overall Process

The evolution curves of the total magnetic and kinetic energies in the computational domain, as well as the total unsigned magnetic flux at the bottom boundary, are shown in Figure 2. For the 'V3D' simulation, as driven by the time-series velocity field (V3D) for a time duration of 150 minutes, the total magnetic energy in our simulation model first experienced an overall increase and then a rapid decrease, which is associated with an eruption event. The eruption can be identified from the energy evolution, with onset time \(t_{\rm E,V3D}=120\) minutes. At the very beginning, from \(t=0\) to \(t=22\) minutes, the magnetic energy injection curve (black dashed line, computed by the time integration of the total Poynting flux of the 'V3D' simulation at the bottom surface) matches well with the solid 'V3D' curve (the magnetic energy increase of the 'V3D' simulation) and the 'OB' curve (the magnetic energy injection computed from the DAVE4VM velocity and the magnetograms) in Figure 2A.
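An energy injection of the kind shown by the 'OB' curve can be estimated directly from the magnetograms and the DAVE4VM velocities; a hedged sketch of the standard vertical Poynting-flux formula (our illustration, assuming the DAVE4VM velocity is used as-is):

```python
import numpy as np

def poynting_flux_z(vx, vy, vz, bx, by, bz):
    """Vertical Poynting flux of the ideal-MHD electric field through
    the photosphere (Gaussian units),
        S_z = [ (Bx^2 + By^2) vz - (vx Bx + vy By) Bz ] / (4 pi).
    The first term is the emergence contribution, the second the
    shearing contribution."""
    return ((bx**2 + by**2) * vz - (vx * bx + vy * by) * bz) / (4 * np.pi)

# Cumulative injection from a time series of frames, each holding the
# velocities and magnetogram on a pixel grid of cell area dA:
# E_inj = np.cumsum([poynting_flux_z(*f).sum() * dA for f in frames]) * dt
```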
However, in the time duration of \(t\in[97,122]\) minutes, the total magnetic energy (blue solid line) is higher than the magnetic energy injection. Such an unphysical mismatch between the energy input through the boundary and the cumulative energy in the volume is likely due to insufficient resolution at the bottom boundary, since the magnetic field there has accumulated a very large gradient in this phase (see Discussions). The kinetic energy keeps a very low value of around \(10^{-3}E_{\rm po}\) (where \(E_{\rm po}\) is the potential field energy corresponding to the magnetic field at \(t=0\)) before the major eruption begins at \(t_{\rm E,V3D}=120\) minutes, when the magnetic energy reaches about 1.85 \(E_{\rm po}\). The magnetic energy decreases to about 1.7 \(E_{\rm po}\) at the peak of the total kinetic energy (i.e., \(E_{\rm k}=5.6\times 10^{-2}E_{\rm po}\)), and keeps decreasing to 1.45 \(E_{\rm po}\) in total through this eruption (0.4 \(E_{\rm po}\) of free energy loss). That is, about one third of the magnetic energy loss has been converted to kinetic energy in 10 minutes. When multiplied by the factor of 13.7 determined by the rate of speeding up in our velocity-driven simulation, the quasi-static evolution time of the 'V3D' run is 27.4 hours. This time scale is very close to the observed one, which is 27.6 hours. We have also driven the simulation by the horizontal velocity (V2D) and found that it can also produce an eruption with a rather similar onset time. However, comparing the different curves of magnetic energy evolution in Figure 2A and the curves of total unsigned magnetic flux in Figure 2B, the blue solid lines labeled 'V2D' are obviously lower than those from the 'V3D' simulation. This clearly shows that though the horizontal velocity can also drive the field to erupt (with an onset time delayed by about 6 minutes compared with 'V3D'), the vertical velocity \(v_{z}^{D}\) is necessary to account for the larger increase of the total magnetic energy and total unsigned flux shown in observations, therefore leading to a stronger eruption. The evolution of the simulated magnetic energy, the total unsigned flux and the time scale of the 'V3D' run before the major eruption are more consistent with observations, showing the importance of the vertical photospheric plasma flow in the numerical modeling of solar eruptions. Our simulated magnetic structure (the first panel in Figure 3E) is reasonably consistent with observations in the pre-eruption image of AIA 171 Å (the second panel in Figure 3E), and the synthetic image of coronal emission from the current density of our simulation (the last panel in Figure 3E) is reasonably consistent with the image of AIA 131 Å (the third panel in Figure 3E). Also, the quasi-separatrix layers (QSLs) at the bottom boundary (Figure 4F), where magnetic reconnection is most likely to take place and which thus represent the positions of the flare ribbons (Titov et al., 2002; Liu et al., 2016), have approximately the same pattern as the flare ribbons in AIA 1600 Å (Figure 4E). These results confirm the validity of our V-driven DARE-MHD model as well as the DAVE4VM method.

### Eruption Initiation Mechanism

Since the actual velocity field must contain \(v_{z}\), here we analyzed the 'V3D' run to study the onset mechanism of this eruption. As we can see, a group of twisted field lines (represented by the blue solid lines in Figure 3A and B) formed and was embedded in the surrounding sheared arcades, which is similar to an MFR in morphology.
In addition, the ejection of the hot loop (also called a hot channel) observed in AIA 94 Å (white arrows in Figure 1D and E) also suggested the existence of an MFR (Cheng et al., 2017) before the flare onset. To identify the formation of the MFR before the eruption, we calculated the QSLs and the twist number (Berger and Prior, 2006) of our simulated magnetic fields, which are given in Figure 3D and in Figure 5A and B, respectively. As shown in the third panel of Figure 3D, a strong QSL appeared near the core field and grew into a QSL ring in the last panel, which separated the MFR from the background magnetic field, thus establishing the existence of an MFR in topology. The QSL ring intersects itself below the MFR, forming a typical hyperbolic flux tube (HFT), where the CS developed and magnetic reconnection subsequently took place to further drive the eruption (Jiang et al., 2018). The isosurfaces of \(T_{w}=-1\), which represent the position and shape of the MFR, are shown in Figure 5A and B, respectively. The MFR became larger and higher, illustrating that more magnetic flux was twisted by the persistent photospheric flow. However, the isosurface of \(T_{w}=-2\), which is the lower threshold of kink instability (KI) according to the statistics of Duan et al. (2019), is barely visible, so KI likely played little role. Among the different triggering mechanisms, the torus instability (TI) is considered an efficient way of initiating an MFR eruption. In Figure 5D, we plot the key controlling parameter that determines the onset of TI, i.e., the decay index of the strapping field. The variation of the decay index \(n\) was calculated along the \(z\) direction of the overlying field at \(t=119\) minutes, before the eruption onset. Since the potential field is not always a good approximation of the strapping field (especially when the strapping field is highly sheared), we calculated the decay index of our simulated field instead, and plotted the decay index of the corresponding potential field at the same time for comparison. The critical height (above which \(n>1.5\)) of the simulated field is located at 60 Mm, as labeled by the vertical black dashed line in Figure 5D. When the MFR had just formed, it was small and its apex was at a low height (about 40 Mm in Figure 5A). About 14 minutes later, it had grown into a huge twisted structure and entered the unstable zone (above 60 Mm, as shown in Figure 5B), after which it erupted violently. The formation and evolution of the MFR, i.e., the fact that there was a preformed MFR and that it erupted after entering the unstable zone, strongly suggest that TI is the triggering mechanism of this eruption. To further test this assumption, we used the V3D-driven data at \(t=100\) minutes as the initial condition and ran our code without bottom velocity driving (setting all three velocity components to zero at the bottom boundary; referred to as 'V0D') to see whether the magnetic field would erupt. The evolution of the energies of this 'V0D' run is shown by the red solid lines in Figure 2A. During the interval \(t\in[100,124]\) minutes, the magnetic energy remained almost unchanged, while the toroidal flux of the MFR (defined as \(\int_{s}B_{z}ds\), where \(s\) denotes the region with \(T_{w}<-1\) and \(B_{z}<0\) at the bottom boundary) kept increasing, as shown in Figure 5C.
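Both eruption diagnostics used above reduce to simple operations on the simulation output. Below is a minimal sketch (hypothetical array names, uniform grid assumed) of the toroidal-flux integral just defined and of the decay index \(n=-d\ln B/d\ln z\) of the strapping field, whose critical value \(n\approx 1.5\) marks the torus-unstable zone.

```python
import numpy as np

def toroidal_flux(bz, tw, dx, dy):
    """Toroidal flux of the MFR: integral of Bz over the bottom-boundary
    region where Tw < -1 and Bz < 0, as defined in the text."""
    mask = (tw < -1.0) & (bz < 0.0)
    return np.sum(bz[mask]) * dx * dy

def decay_index(b_strap, z):
    """Decay index n = -dln(B)/dln(z) of the strapping field sampled
    along height z; torus instability is expected where n > ~1.5."""
    return -np.gradient(np.log(b_strap), np.log(z))
```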
In the 'V3D' simulation, the MFR (the isosurface of \(T_{w}=-1\)) became larger and rose above the critical height before the eruption onset (Figure 5A and B) as the toroidal flux increased (Figure 5C), after which TI took effect and finally led to a strong eruption. Similarly, the toroidal flux of the 'V0D' simulation increased before and decreased after the eruption (Figure 5C), showing that slow reconnection can take place spontaneously, without velocity driving, and enlarge the MFR until the eruption started at \(t_{\rm E,V0D}=124\) minutes, as identified from the energy evolution curve. Since the current density was weaker and the current layer thicker (third panel in Figure 3C) than in a true CS (first panel in Figure 4C), the magnetic gradient was lower in the current layer than in the CS, and thus the diffusivity was relatively uniform, which allowed only slow reconnection (i.e., a slow dissipation of the current) to take place, without impulsive energy release (i.e., not a Petschek-type reconnection; Yokoyama & Shibata, 1994). The eruption of this 'V0D' run has a similar onset time and strength to those of the 'V3D' run, which further indicates that after \(t=100\) minutes in the 'V3D' run the velocity field at the bottom boundary is no longer necessary and the instability alone is sufficient to trigger the eruption. The event follows the basic development of TI, i.e., the rising MFR stretched the overlying field and consequently the flare CS formed below the MFR. Since we did not use any explicit value of magnetic resistivity, and since the CS formed in a dynamic way, the width of the CS could become thin enough to trigger fast reconnection easily, as pointed out by Jiang et al. (2021), which further drives the eruption. Based on the analysis of our simulation and the supplementary numerical experiment (the 'V0D' simulation), along with the consistency between our simulation results and the observations shown above, we conclude that TI is the initiation mechanism of the X1.1 flare in AR 11429 on 2012 March 5.

## 4 Conclusions and Discussions

We have carried out a full 3D MHD simulation of an X-class flare eruption event on 2012 March 5 in AR 11429 using the V-driven DARE-MHD model. An MHD equilibrium was obtained by relaxing the NLFFF reconstructed with the CESE-MHD-NLFFF code and was set as the initial condition. The initial state was then driven to evolve by the DAVE4VM velocity field on the bottom boundary of our simulation box. The analysis of the quasi-static evolution stage before the eruption onset shows the gradual formation of an MFR above the main PIL. When the MFR first appeared, it was relatively low; it then grew into the torus-unstable region, after which TI was triggered. The energy conversion process was then accomplished by reconnection in the CS that formed due to the stretching effect of the erupting MFR. The images of the SDO observations and the important physical quantities computed from the magnetograms are reasonably consistent with our results in the pre-flare magnetic structure, the morphology of the flare ribbons, the evolution curves of the magnetic and kinetic energies, and the total unsigned magnetic flux. The duration of the quasi-static evolution before the simulated eruption is also very close to the actual time scale before the flare onset. Nevertheless, our simulation does not accurately reproduce the evolution of the magnetic field as shown in the observed magnetograms.
As can be seen in the last panel of Figure 3A, part of the magnetic flux was transported to and concentrated at the edge of the AR, while this pileup was not observed in the HMI magnetograms. One likely reason for this flux pileup is that the DAVE4VM method solves only the normal component of the magnetic induction equation, in the least-squares sense (Lumme et al., 2019), and so may not reproduce the velocity precisely at every point; it can only be used to approximate the overall distribution and evolution of the magnetic flux in the AR. In addition, in reality the magnetic flux on the photosphere should be dispersed and dissipated by granular and supergranular convection (Wang et al., 1989) as well as by small-scale turbulent diffusion. Therefore, without a proper treatment of these effects, the flux pileup will be more pronounced than in the observations, and as a result the magnetic energy injection rate of the 'V3D' run is overall higher than the 'OB' one, as shown in Figure 2A (compare the black dashed line and the black solid line). This unrealistic flux pileup may also contribute to the overshoot of the magnetic energy increase (Figure 2A, the mismatch between the black dashed line and the green line before the eruption), because it results in a very large magnetic gradient at the bottom boundary, so that even the finest spatial resolution in our simulation was inadequate to capture such a high gradient during the interval close to the eruption onset (i.e., \(t\in[97,122]\) minutes). The insufficient grid resolution led to considerable numerical error there, which makes the magnetic energy increase higher than the magnetic energy injection in the 'V3D' simulation. Proper settings of numerical diffusion and grid resolution, along with improved methods for deriving the photospheric flow, will need to be considered in future works for more accurate data-driven simulations of solar eruptions. To summarize, our simulation shows that, besides inputting a time series of vector magnetograms, a numerical model of the solar corona can be driven to erupt by inputting a time series of velocity fields at the bottom boundary, where the driver, i.e., the bottom velocity field, is derived from the time series of vector magnetograms. The numerical model established here shows the possibility of driving the evolution of the solar corona using different physical variables, which offers a straightforward way to reveal the eruption mechanisms in real events. Such a model thus has great potential for forecasting the onset time and strength of solar eruptions and for evaluating their quantitative impact on space weather. This work is jointly supported by the National Natural Science Foundation of China (NSFC 41731067, 42030204, 42174200), the Fundamental Research Funds for the Central Universities (grant No. HIT.OCEF.2021033), and the Shenzhen Science and Technology Program (grant Nos. RCJC20210609104422048 and JCYJ20190806142609035). The computational work was carried out on TianHe-1(A), National Supercomputer Center in Tianjin, China, and we thank Jun Chen for his informative and helpful discussions.
2309.04500
$C^{\ast}$-algebraic approach to the principal symbol. III
We treat the notion of principal symbol mapping on a compact smooth manifold as a $\ast$-homomorphism of $C^{\ast}$-algebras. The principal symbol mapping is built from the ground up, without referring to the pseudodifferential calculus on the manifold. Our concrete approach allows us to extend Connes Trace Theorem for compact Riemannian manifolds.
Y. Kordyukov, F. Sukochev, D. Zanin
2023-09-08T02:16:58Z
http://arxiv.org/abs/2309.04500v1
# \(C^{*}\)-algebraic approach to the principal symbol. III

###### Abstract.

We treat the notion of principal symbol mapping on a compact smooth manifold as a \(*\)-homomorphism of \(C^{*}\)-algebras. The principal symbol mapping is built from the ground up, without referring to the pseudodifferential calculus on the manifold. Our concrete approach allows us to extend Connes Trace Theorem for compact Riemannian manifolds.

Key words and phrases: principal symbol, smooth manifold, singular traces

2020 Mathematics Subject Classification: 47L20, 47L80

## 1. Introduction

This paper is motivated by the theory of pseudodifferential operators. A central notion of that theory is that of a principal symbol, which is roughly a homomorphism from the algebra of pseudodifferential operators into an algebra of functions, [14, Lemma 5.1], [13, Theorem 5.5], [32, pp. 54-55]. Usually, it is defined in a manner inhospitable for operator theorists. However, in [30], a new approach to a principal symbol mapping on a certain \(C^{*}\)-subalgebra \(\Pi\) in \(B(L_{2}(\mathbb{R}^{d}))\) is proposed; this mapping turns out to be a \(*\)-homomorphism from \(\Pi\) into a commutative \(C^{*}\)-algebra. The \(C^{*}\)-algebra \(\Pi\) contains all classical compactly based pseudodifferential operators. This provides a very simple and algebraic approach to the theory. While our approach is more elementary than the classical one, the \(C^{*}\)-algebra \(\Pi\) introduced in [30] (see also [20]) is much wider than the class of classical compactly based pseudodifferential operators of order \(0\) on \(\mathbb{R}^{d}.\) The aim of this paper is to extend this \(C^{*}\)-algebraic approach to the setting of smooth compact manifolds.

The \(C^{*}\)-algebra \(\Pi\) in Definition 1.1 below is the closure (in the uniform norm) of the \(*\)-algebra of all compactly supported _classical_ pseudodifferential operators of order \(0.\) However, we use an elementary definition of \(\Pi\) which does not involve pseudodifferential operators. The idea to consider this closure may be discerned already in [3] (see Proposition 5.2 on p. 512). For the recent development of this idea we refer to [30, 20].

Let \(D_{k}=\frac{1}{i}\frac{\partial}{\partial t_{k}}\) be the \(k\)-th partial derivative operator on \(\mathbb{R}^{d}\) (these are unbounded self-adjoint operators on \(L_{2}(\mathbb{R}^{d})\)). In what follows, \(\nabla=(D_{1},\cdots,D_{d})\) and \(\Delta=\sum_{k=1}^{d}\frac{\partial^{2}}{\partial t_{k}^{2}}=-\sum_{k=1}^{d}D_{k}^{2}.\) Let the \(d\)-dimensional vector \(\frac{\nabla}{(-\Delta)^{\frac{1}{2}}}\) be defined by the functional calculus.
Let \(M_{f}\) be the multiplication operator by the function \(f.\)

**Definition 1.1**.: _Let \(\pi_{1}:L_{\infty}(\mathbb{R}^{d})\to B(L_{2}(\mathbb{R}^{d})),\)\(\pi_{2}:L_{\infty}(\mathbb{S}^{d-1})\to B(L_{2}(\mathbb{R}^{d}))\) be defined by setting_

\[\pi_{1}(f)=M_{f},\quad\pi_{2}(g)=g(\frac{\nabla}{\sqrt{-\Delta}}),\quad f\in L_{\infty}(\mathbb{R}^{d}),\quad g\in L_{\infty}(\mathbb{S}^{d-1}).\]

_Let \(\mathcal{A}_{1}=\mathbb{C}+C_{0}(\mathbb{R}^{d})\) and \(\mathcal{A}_{2}=C(\mathbb{S}^{d-1}).\) Let \(\Pi\) be the \(C^{*}\)-subalgebra in \(B(L_{2}(\mathbb{R}^{d}))\) generated by the algebras \(\pi_{1}(\mathcal{A}_{1})\) and \(\pi_{2}(\mathcal{A}_{2}).\)_

According to [30], there exists a \(*\)-homomorphism

\[\operatorname{sym}:\Pi\to\mathcal{A}_{1}\otimes_{\min}\mathcal{A}_{2}\simeq C(\mathbb{S}^{d-1},\mathbb{C}+C_{0}(\mathbb{R}^{d})) \tag{1.1}\]

such that

\[\operatorname{sym}(\pi_{1}(f))=f\otimes 1,\quad\operatorname{sym}(\pi_{2}(g))=1\otimes g.\]

Here, \(\mathcal{A}_{1}\otimes_{\min}\mathcal{A}_{2}\) is the minimal tensor product of the \(C^{*}\)-algebras \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) (see Propositions 1.22.2 and 1.22.3 in [25]). Elements of \(\mathcal{A}_{1}\otimes_{\min}\mathcal{A}_{2}\) are identified with continuous functions on \(\mathbb{R}^{d}\times\mathbb{S}^{d-1}.\) This \(*\)-homomorphism is called the principal symbol mapping. It properly extends the notion of the principal symbol of a classical pseudodifferential operator.

It is natural to ask whether the \(C^{*}\)-algebraic approach works in the general setting of smooth compact manifolds. It makes sense to de-manifoldize the question and reformulate it in a purely Euclidean fashion. We begin with a natural question on the properties of the \(C^{*}\)-algebra \(\Pi.\)

**Question 1.2**.: _The natural unitary action of the group of diffeomorphisms on \(\mathbb{R}^{d}\) is defined as follows. Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism. Let \(U_{\Phi}\in B(L_{2}(\mathbb{R}^{d}))\) be the unitary operator given by setting_

\[U_{\Phi}\xi=|\mathrm{det}(J_{\Phi})|^{\frac{1}{2}}\cdot(\xi\circ\Phi),\quad\xi\in L_{2}(\mathbb{R}^{d}).\]

_Here, \(J_{\Phi}\) is the Jacobian matrix of \(\Phi.\)_

_Is the \(C^{*}\)-algebra \(\Pi\) invariant under the action \(T\to U_{\Phi}^{-1}TU_{\Phi}\)? Does the \(*\)-homomorphism \(\operatorname{sym}\) behave equivariantly under this action?_

Theorem 3.5 provides a positive answer to Question 1.2 (under the additional requirement that \(\Phi\) is affine outside of some ball). This additional assumption yields, in particular, that \(\Phi\) extends to a diffeomorphism of the projective space \(P^{d}(\mathbb{R}).\) We emphasise that Question 1.2 in full generality remains open. Furthermore, Theorem 3.11 proves the invariance of \(\Pi\) and the equivariance of \(\operatorname{sym}\) under local diffeomorphisms. The resolution of Question 1.2 has opened an avenue for the definition of the \(C^{*}\)-algebra \(\Pi_{X}\) associated with an arbitrary compact smooth manifold \(X.\) This \(C^{*}\)-algebra has a remarkable property: it admits a \(*\)-homomorphism \(\operatorname{sym}_{X}:\Pi_{X}\to C(S^{*}X),\) where \(S^{*}X\) is the cosphere bundle of \(X\) (see Subsection 2.7). If \(X=\mathbb{R}^{d},\) then \(\operatorname{sym}_{X}\) coincides with the mapping \(\operatorname{sym}\) above.
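For instance, since \(\operatorname{sym}\) is a \(*\)-homomorphism, its values on products of generators are computed immediately: for \(f,h\in\mathcal{A}_{1}\) and \(g\in\mathcal{A}_{2},\)

\[\operatorname{sym}\big{(}\pi_{1}(f)\pi_{2}(g)\pi_{1}(h)\big{)}=(f\otimes 1)(1\otimes g)(h\otimes 1)=fh\otimes g,\]

that is, the function \((t,s)\mapsto f(t)h(t)g(s)\) on \(\mathbb{R}^{d}\times\mathbb{S}^{d-1}.\) The operators \(\pi_{1}(f)\) and \(\pi_{2}(g)\) do not commute, but their commutator lies in the kernel of \(\operatorname{sym}\) and is in fact compact (see Lemma 5.2 below).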
Every _classical_ pseudodifferential operator \(T\) of order \(0\) on \(X\) belongs to \(\Pi_{X}\) and its principal symbol in the sense of pseudodifferential operators equals \(\operatorname{sym}_{X}(T).\) On the other hand, not every element of \(\Pi_{X}\) is pseudodifferential (e.g. because the principal symbol of a pseudodifferential operator is necessarily smooth, while that of an element of \(\Pi_{X}\) is only continuous). An approach to pseudodifferential calculi based on the theory of \(C^{*}\)-algebras was first suggested by H.O. Cordes [8] (see [21] for the case of a closed manifold).

Below, we briefly describe the construction of \(\Pi_{X}\) via a patching process (see the more precise description in Subsection 7.2). Let \(X\) be a compact smooth manifold with an atlas \((\mathcal{U}_{i},h_{i})_{i\in\mathbb{I}}.\) We fix a sufficiently good measure \(\nu\) on \(X,\) given by a continuous positive density (see Definition 2.20). If \(T\in B(L_{2}(X,\nu))\) is compactly supported in some chart \((\mathcal{U}_{i},h_{i})\) (i.e., there exists \(\phi\in C_{c}^{\infty}(\mathcal{U}_{i})\) such that \(T=TM_{\phi}=M_{\phi}T\)), then, by composing with \(h_{i},\) we can transfer \(T\) to an operator on \(L_{2}(\mathbb{R}^{d}).\)

**Definition 1.3**.: _Let \(X\) be a compact smooth manifold equipped with a continuous positive density \(\nu\) and let \(T\in B(L_{2}(X,\nu)).\) We say that \(T\in\Pi_{X}\) if_

1. _for every_ \(i\in\mathbb{I}\) _and for every_ \(\phi\in C_{c}(\mathcal{U}_{i}),\) _the operator_ \(M_{\phi}TM_{\phi}\) _transferred to an operator on_ \(L_{2}(\mathbb{R}^{d})\) _belongs to_ \(\Pi;\)
2. _for every_ \(\psi\in C(X),\) _the operator_ \([T,M_{\psi}]\) _is compact._

**Theorem 1.4**.: _If \(X\) is a smooth compact manifold and if \(\nu\) is a continuous positive density on \(X,\) then \(\Pi_{X}\) is a \(C^{*}\)-algebra and there exists (see Definition 7.8) a surjective \(*\)-homomorphism_

\[\operatorname{sym}_{X}:\Pi_{X}\to C(S^{*}X)\]

_such that_

\[\ker(\operatorname{sym}_{X})=\mathcal{K}(L_{2}(X,\nu)).\]

In other words, we have a short exact sequence

\[0\to\mathcal{K}(L_{2}(X,\nu))\stackrel{{\operatorname{id}}}{{\to}}\Pi_{X}\stackrel{{\operatorname{sym}_{X}}}{{\to}}C(S^{*}X)\to 0.\]

This short exact sequence first appeared in [3] (see Proposition 5.2 on p. 512) and plays an important role in index theory (see, for instance, [5, Section 24.1.8] or [4, Section 2]). It is essentially equivalent to the fact that, for any operator \(T\in\Pi_{X}\) with principal symbol \(a\in C(S^{*}X),\)

\[\inf\{\|T+K\|_{\infty}:\ K\in\mathcal{K}(L_{2}(X,\nu))\}=\|a\|_{C(S^{*}X)}.\]

Indeed, since \(\operatorname{sym}_{X}\) is a surjective \(*\)-homomorphism with kernel \(\mathcal{K}(L_{2}(X,\nu)),\) it induces a \(*\)-isomorphism of \(\Pi_{X}/\mathcal{K}(L_{2}(X,\nu))\) onto \(C(S^{*}X);\) injective \(*\)-homomorphisms of \(C^{*}\)-algebras are isometric, and the left-hand side is exactly the quotient norm of \(T.\) For singular integral operators this result was proved by Gohberg [10] and Seeley [26]. Proofs in the language of pseudodifferential operators have been given in [12, 14]. It should be noted that the definition given in [3] is somewhat imprecise (see [21], in particular the discussion on p. 329).

As a corollary of Theorem 1.4, we provide a version of Connes Trace Theorem (see Theorem 1.5 below). As stated, it extends Theorem 1 in [7]. Connes Trace Theorem is ubiquitous in Non-commutative Geometry. It serves as a ground for defining a general notion of the non-commutative integral and the non-commutative Yang-Mills action (that is, Theorem 14 in [7] is taken as a definition in the non-commutative setting). We now compare our Theorem 1.5 with the various versions of Connes Trace Theorem available in the literature. The original proof of Connes was, according to [11], "somewhat telegraphic".
For example, it was not mentioned in [7] that the manifold is Riemannian and that the pseudodifferential operator featuring in Theorem 1 in [7] is classical. Two proofs are given in [11] (Theorem 7.18 on p. 293) and both of them rely on the assumption of ellipticity of the underlying pseudodifferential operator (this assumption is redundant, as demonstrated by our approach). Despite their critique of Connes' exposition, the authors of [11] also do not mention the classicality of their pseudodifferential operator. Another two proofs are given in [2]. As the authors of [2] admit, their proofs are quite sketchy; however, they provide a correct statement. The advantage of our approach is threefold: (a) we consider a strictly larger class of operators; (b) we consider a strictly larger class of traces; (c) we work in a convenient category of \(C^{*}\)-algebras (i.e., non-commutative topological spaces) and not in the category of classical pseudodifferential operators, which does not have a natural counterpart in Non-commutative Geometry.

**Theorem 1.5**.: _Let \(\varphi\) be a normalised continuous trace on \(\mathcal{L}_{1,\infty}.\) Let \((X,G)\) be a compact Riemannian manifold and let \(\nu\) be the Riemannian volume. If \(T\in\Pi_{X},\) then_

\[\varphi(T(1-\Delta_{G})^{-\frac{d}{2}})=c_{d}\int_{T^{*}X}\operatorname{sym}_{X}(T)e^{-q_{X}}d\lambda,\]

_where \(\lambda\) is the Liouville measure on \(T^{*}X\) and \(e^{-q_{X}}\) is the canonical weight of the Riemannian manifold (as defined in Subsection 2.8)._

When \(T\) is a classical pseudodifferential operator, the right-hand side coincides with the Wodzicki residue of \(T(1-\Delta_{G})^{-\frac{d}{2}}.\) We refer the reader to the extensive discussion of this matter in [18]. One should note a sharp contrast between the setting of Theorem 1.5 and that of Theorem 1.4. Indeed, in the latter theorem the (smooth compact) manifold is rather arbitrary, while in the former it is Riemannian. The Riemannian structure of \(X\) in Theorem 1.5 is needed in two places: (a) there is no natural measure on the cosphere bundle of an arbitrary smooth manifold (but such a measure arises naturally if the manifold is Riemannian); (b) the Riemannian structure provides us with a natural second-order differential operator (i.e., the Laplace-Beltrami operator). In the setting of a general smooth manifold, the second issue can be circumvented by replacing \(\Delta_{G}\) with an arbitrary elliptic second-order differential operator (whose resolvent falls into the ideal \(\mathcal{L}_{d,\infty}\)). However, the lack of a natural measure on \(S^{*}X\) prevents us from stating Theorem 1.5 in that generality.

We now briefly describe the structure of the paper. Section 2 collects known facts used further in the text. Theorems 3.5 and 3.11 in Section 3 assert the equivariant behavior of the principal symbol mapping in the Euclidean setting under the action of diffeomorphisms. Theorem 3.5 is proved in Section 5. Theorem 3.11 is proved in Section 6. Our main result, Theorem 1.4, is proved in Section 7 with the help of the Globalisation Theorem from Subsection 7.1 (proved in Appendix A). Finally, Connes Trace Theorem on compact Riemannian manifolds (that is, Theorem 1.5) is proved in Section 8.

## 2. Preliminaries and notations
As usual, \(B(H)\) denotes the \(*\)-algebra of all bounded operators on the Hilbert space \(H,\) and \(\mathcal{K}(H)\) denotes the ideal of all compact operators in \(B(H).\) As usual, the Euclidean length of a vector \(t\in\mathbb{R}^{d}\) is denoted by \(|t|.\) We frequently use the equality

\[[D_{k},M_{f}]=M_{D_{k}f},\quad f\in C^{\infty}(\mathbb{R}^{d}). \tag{2.1}\]

### Principal ideals in \(B(H)\)

It is well known that every ideal in \(B(H)\) consists of compact operators. Undoubtedly, the most important ideals are the principal ones. Among them, a special role is played by the ideal \(\mathcal{L}_{p,\infty},\) the principal ideal generated by the operator \(\operatorname{diag}(((k+1)^{-\frac{1}{p}})_{k\geq 0}).\) We frequently use the following property (related to the Hölder inequality) of this scale of ideals:

\[\mathcal{L}_{p,\infty}\cdot\mathcal{L}_{q,\infty}=\mathcal{L}_{r,\infty},\quad\frac{1}{r}=\frac{1}{p}+\frac{1}{q}.\]

We mention in passing that \(\mathcal{L}_{p,\infty}\) is quasi-Banach for every \(p>0\) (however, we do not need the quasi-norms in this text).

### Traces on \(\mathcal{L}_{1,\infty}\)

**Definition 2.1**.: _If \(\mathcal{I}\) is an ideal in \(B(H),\) then a unitarily invariant linear functional \(\varphi:\mathcal{I}\to\mathbb{C}\) is said to be a trace._

Since \(U^{-1}TU-T=[U^{-1},TU]\) for all \(T\in\mathcal{I}\) and for all unitaries \(U\in B(H),\) and since the unitaries span \(B(H),\) it follows that traces are precisely the linear functionals on \(\mathcal{I}\) satisfying the condition

\[\varphi(TS)=\varphi(ST),\quad T\in\mathcal{I},\quad S\in B(H).\]

The latter may be reinterpreted as the vanishing of the linear functional \(\varphi\) on the commutator subspace, which is denoted by \([\mathcal{I},B(H)]\) and defined to be the linear span of all commutators \([T,S],\)\(T\in\mathcal{I},\)\(S\in B(H).\) Note that \(\varphi(T_{1})=\varphi(T_{2})\) whenever \(0\leq T_{1},T_{2}\in\mathcal{I}\) are such that the singular value sequences \(\mu(T_{1})\) and \(\mu(T_{2})\) coincide. For \(p>1,\) the ideal \(\mathcal{L}_{p,\infty}\) does not admit a non-zero trace, while for \(p=1\) there exists a plethora of traces on \(\mathcal{L}_{1,\infty}\) (see e.g. [19]). An example of a trace on \(\mathcal{L}_{1,\infty}\) is the Dixmier trace introduced in [9], which we now explain.

**Example 2.2**.: _Let \(\omega\) be an extended limit. Then the functional \(\mathrm{Tr}_{\omega}:\mathcal{L}_{1,\infty}^{+}\to\mathbb{R}_{+}\) defined by setting_

\[\mathrm{Tr}_{\omega}(A)=\omega\Big{(}\Big{\{}\frac{1}{\log(2+n)}\sum_{k=0}^{n}\mu(k,A)\Big{\}}_{n\geq 0}\Big{)},\quad 0\leq A\in\mathcal{L}_{1,\infty},\]

_is additive and, therefore, extends to a trace on \(\mathcal{L}_{1,\infty}.\) We call such traces Dixmier traces. These traces clearly depend on the choice of the functional \(\omega\) on \(l_{\infty}.\)_

An extensive discussion of traces, and more recent developments in the theory, may be found in [19], including a discussion of the following facts.

1. All Dixmier traces on \(\mathcal{L}_{1,\infty}\) are positive.
2. All positive traces on \(\mathcal{L}_{1,\infty}\) are continuous in the quasi-norm topology.
3. There exist positive traces on \(\mathcal{L}_{1,\infty}\) which are not Dixmier traces.
4. There exist traces on \(\mathcal{L}_{1,\infty}\) which fail to be continuous.
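As a sanity check of Example 2.2, take \(0\leq A\in\mathcal{L}_{1,\infty}\) with \(\mu(k,A)=\frac{1}{k+1},\)\(k\geq 0.\) Since \(\sum_{k=0}^{n}\frac{1}{k+1}=\log(n)+O(1),\) we get

\[\frac{1}{\log(2+n)}\sum_{k=0}^{n}\mu(k,A)\to 1,\quad n\to\infty,\]

so that \(\mathrm{Tr}_{\omega}(A)=1\) for every extended limit \(\omega.\) In the terminology introduced below, every Dixmier trace is normalised.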
We are mostly interested in _normalised traces_ \(\varphi:\mathcal{L}_{1,\infty}\to\mathbb{C},\) that is, those satisfying \(\varphi(T)=1\) whenever \(0\leq T\) is such that \(\mu(k,T)=\frac{1}{k+1}\) for all \(k\geq 0.\) Traces on \(\mathcal{L}_{1,\infty}\) play a fundamental role in Non-commutative Geometry. For example, they allow one to write the Connes Character Formula (we refer the reader to Section 5.3 in [18] and the references therein).

### Sobolev spaces

The Sobolev space \(W^{m,2}(\mathbb{R}^{d}),\)\(m\in\mathbb{Z}_{+},\) consists of all distributions \(f\in L_{2}(\mathbb{R}^{d})\) such that every distributional derivative \(D^{\alpha}f,\)\(\alpha\in\mathbb{Z}_{+}^{d},\) of order \(|\alpha|_{1}=\sum_{k=1}^{d}\alpha_{k}\leq m\) also belongs to \(L_{2}(\mathbb{R}^{d}).\) The importance of Sobolev spaces in the theory of differential operators can be seen, e.g., from the fact that \(W^{1,2}(\mathbb{R}^{d})\) is the domain of the self-adjoint tuple \(\nabla\). Also, \(W^{2,2}(\mathbb{R}^{d})\) is the domain of the self-adjoint positive operator \(-\Delta.\) We refer the reader to the books [1], [31] for further information on Sobolev spaces. Further, we need the following standard result (see e.g. p. 322 in [31]).

**Theorem 2.3**.: _The Sobolev space \(W^{m,2}(\mathbb{R}^{d}),\)\(m\in\mathbb{Z}_{+},\) is invariant under diffeomorphisms which are affine outside of some ball._

### Pseudodifferential operators

If \(p\in C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) is a bounded smooth function whose derivatives are also bounded, then, by the Calderon-Vaillancourt theorem (see e.g. the unnumbered proposition on p. 282 in [28]), the operator \(\operatorname{Op}(p)\) defined by the formula (here \(\mathcal{F}\) is the Fourier transform on \(\mathbb{R}^{d}\))

\[(\operatorname{Op}(p)\xi)(t)=(2\pi)^{-\frac{d}{2}}\int_{\mathbb{R}^{d}}e^{i\langle t,s\rangle}p(t,s)(\mathcal{F}\xi)(s)ds,\quad\xi\in L_{2}(\mathbb{R}^{d}), \tag{2.2}\]

is bounded on \(L_{2}(\mathbb{R}^{d}).\) If \(m\in\mathbb{Z},\)\(m\leq 0,\) is such that

\[\sup_{t,s\in\mathbb{R}^{d}}(1+|s|^{2})^{\frac{|\beta|_{1}-m}{2}}|D_{t}^{\alpha}D_{s}^{\beta}p(t,s)|<\infty,\quad\alpha,\beta\in\mathbb{Z}_{+}^{d}, \tag{2.3}\]

then we say that \(\operatorname{Op}(p)\in\Psi^{m}(\mathbb{R}^{d}).\) For \(m>0,\) the class \(\Psi^{m}(\mathbb{R}^{d})\) is defined by the same formula. The difference is that, for \(m>0,\) operators in \(\Psi^{m}(\mathbb{R}^{d})\) are no longer bounded as operators from \(L_{2}(\mathbb{R}^{d})\) to \(L_{2}(\mathbb{R}^{d});\) instead, they are bounded operators from \(W^{m,2}(\mathbb{R}^{d})\) to \(L_{2}(\mathbb{R}^{d}).\) The key property is that

\[\Psi^{m_{1}}(\mathbb{R}^{d})\cdot\Psi^{m_{2}}(\mathbb{R}^{d})\subset\Psi^{m_{1}+m_{2}}(\mathbb{R}^{d}),\quad m_{1},m_{2}\in\mathbb{Z}. \tag{2.4}\]

Moreover, by Theorem 2.5.1 in [24], we have

\[\operatorname{Op}(p_{1})\cdot\operatorname{Op}(p_{2})\in\operatorname{Op}(p_{1}p_{2})+\Psi^{m_{1}+m_{2}-1}(\mathbb{R}^{d}), \tag{2.5}\]

whenever \(\operatorname{Op}(p_{1})\in\Psi^{m_{1}}(\mathbb{R}^{d})\) and \(\operatorname{Op}(p_{2})\in\Psi^{m_{2}}(\mathbb{R}^{d}).\) The next lemma follows immediately from (2.4) and (2.5).
**Lemma 2.4**.: _If \(m_{1},m_{2}\in\mathbb{Z},\)_

\[T_{l}\in\operatorname{Op}(p_{l})+\Psi^{m_{l}-1}(\mathbb{R}^{d}),\quad\operatorname{Op}(p_{l})\in\Psi^{m_{l}}(\mathbb{R}^{d}),\quad l=1,2,\]

_then_

\[T_{1}T_{2}-\operatorname{Op}(p_{1}p_{2})\in\Psi^{m_{1}+m_{2}-1}(\mathbb{R}^{d}).\]

Let \(T\in\Psi^{m}(\mathbb{R}^{d}),\)\(m<0,\) and let \(\psi\in C_{c}^{\infty}(\mathbb{R}^{d})\) be such that \(T=M_{\psi}T.\) Recall that the operator \(M_{\psi}(1-\Delta)^{\frac{m}{2}}\) is compact (see e.g. Theorem 4.1 in [27]). Thus,

\[T=M_{\psi}(1-\Delta)^{\frac{m}{2}}\cdot(1-\Delta)^{-\frac{m}{2}}T\in\mathcal{K}(L_{2}(\mathbb{R}^{d}))\cdot B(L_{2}(\mathbb{R}^{d}))=\mathcal{K}(L_{2}(\mathbb{R}^{d})). \tag{2.6}\]

Differential operators of order \(m\geq 0\) with smooth bounded coefficients (all derivatives of the coefficients are also assumed bounded) belong to \(\Psi^{m}(\mathbb{R}^{d}).\) Indeed, it follows directly from (2.2) that

\[\sum_{|\alpha|_{1}\leq m}M_{f_{\alpha}}D^{\alpha}=\operatorname{Op}(p),\quad p(t,s)=\sum_{|\alpha|_{1}\leq m}f_{\alpha}(t)s^{\alpha},\quad t,s\in\mathbb{R}^{d}. \tag{2.7}\]

The following standard result is available, e.g., in Theorem 1.6.20 in [17].

**Theorem 2.5**.: _Let \(T\in\Psi^{m}(\mathbb{R}^{d}),\)\(m\geq 0,\) extend to a self-adjoint positive operator \(T:W^{m,2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d}).\) For every \(z\in\mathbb{C},\) we have \((T+1)^{z}\in\Psi^{m\Re(z)}(\mathbb{R}^{d}).\)_

_If, in addition, \(T\) is a differential operator with positive principal symbol \(p,\) then_

\[(T+1)^{z}-\operatorname{Op}((p+1)^{z})\in\Psi^{m\Re(z)-1}(\mathbb{R}^{d}).\]

### Pseudodifferential-like operators in \(\Pi\)

If \(q\in C_{c}^{\infty}(\mathbb{R}^{d}\times\mathbb{S}^{d-1}),\) then we set

\[(T_{q}\xi)(t)=(2\pi)^{-\frac{d}{2}}\int_{\mathbb{R}^{d}}e^{i\langle t,s\rangle}q(t,\frac{s}{|s|})(\mathcal{F}\xi)(s)ds,\quad\xi\in L_{2}(\mathbb{R}^{d}). \tag{2.8}\]

**Lemma 2.6** (Lemma 8.1 in [30]).: _For every \(q\in C_{c}^{\infty}(\mathbb{R}^{d}\times\mathbb{S}^{d-1}),\) we have \(T_{q}\in\Pi\) and \(\operatorname{sym}(T_{q})=q.\)_

**Lemma 2.7** (Lemma 8.2 in [30]).: _Let \(q\in C_{c}^{\infty}(\mathbb{R}^{d}\times\mathbb{S}^{d-1}).\) If \(\psi\in C_{c}^{\infty}(\mathbb{R}^{d})\) equals \(1\) near \(0,\) then_

\[\operatorname{Op}(p)-T_{q}\in\mathcal{K}(L_{2}(\mathbb{R}^{d})),\]

_where_

\[p(t,s)=q(t,\frac{s}{|s|})\cdot(1-\psi(s)),\quad t,s\in\mathbb{R}^{d}.\]

### Cotangent bundle

**Notation 2.8**.: _Let \(X\) be a smooth \(d\)-dimensional manifold with atlas \((\mathcal{U}_{i},h_{i})_{i\in\mathbb{I}},\) where \(\mathbb{I}\) is an arbitrary set of indices._

1. _We denote_ \[\Omega_{i}=h_{i}(\mathcal{U}_{i})\subset\mathbb{R}^{d},\quad\Omega_{i,j}=h_{i}(\mathcal{U}_{i}\cap\mathcal{U}_{j})\subset\mathbb{R}^{d},\quad i,j\in\mathbb{I};\]
2. _We denote by_ \(\Phi_{i,j}:\Omega_{i,j}\to\Omega_{j,i}\) _the diffeomorphism given by the formula_ \[\Phi_{i,j}(t)=h_{j}(h_{i}^{-1}(t)),\quad t\in\Omega_{i,j}.\]

In the next fact, we recall the manifold structure of \(T^{*}X.\)

**Fact 2.9**.: _Let \(X\) be a \(d\)-dimensional manifold with an atlas \(\{(\mathcal{U}_{i},h_{i})\}_{i\in\mathbb{I}}.\) Let \(T^{*}X\) be the cotangent bundle of \(X\) and let \(\pi:T^{*}X\to X\) be the canonical projection. There exists an atlas \(\{\pi^{-1}(\mathcal{U}_{i}),H_{i}\}_{i\in\mathbb{I}}\) of \(T^{*}X\) such that_

1. _for every_ \(i\in\mathbb{I},\)\(H_{i}:\pi^{-1}(\mathcal{U}_{i})\to\Omega_{i}\times\mathbb{R}^{d}\) _is a homeomorphism;_
2.
_for every_ \(i,j\in\mathbb{I}\) _such that_ \(\mathcal{U}_{i}\cap\mathcal{U}_{j}\neq\varnothing,\) _we have_ \[(H_{j}\circ H_{i}^{-1})(t,s)=(\Phi_{i,j}(t),(J_{\Phi_{i,j}}^{*}(t))^{-1}s),\quad t\in\Omega_{i,j},\quad s\in\mathbb{R}^{d}.\]

In the next fact, we identify functions on \(T^{*}X\) with their local representations. It is important that this identification preserves continuity.

**Fact 2.10**.: _Let \(F_{i}:\Omega_{i}\times\mathbb{R}^{d}\to\mathbb{C}\) for every \(i\in\mathbb{I}.\) If_

\[F_{i}\circ H_{i}=F_{j}\circ H_{j}\text{ on }\pi^{-1}(\mathcal{U}_{i}\cap\mathcal{U}_{j})\text{ for every }i,j\in\mathbb{I},\]

_then there exists a unique function \(F:T^{*}X\to\mathbb{C}\) such that_

\[F_{i}=F\circ H_{i}^{-1}\text{ on }\Omega_{i}\times\mathbb{R}^{d},\quad i\in\mathbb{I}.\]

### Cosphere bundle

**Definition 2.11**.: _Define a dilation action \(\lambda\to\sigma_{\lambda}\) of \((0,\infty)\) on each \(\Omega_{i}\times\mathbb{R}^{d}\) by setting_

\[\sigma_{\lambda}:(t,s)\to(t,\lambda s),\quad t\in\Omega_{i},\quad s\in\mathbb{R}^{d}.\]

_This action lifts down to an action on \(T^{*}X\) (also denoted by \(\sigma_{\lambda}\))._

_A function on \(T^{*}X\) invariant with respect to this action is called dilation invariant._

**Definition 2.12**.: _Let \(X\) be a compact manifold. The \(C^{*}\)-algebra of all continuous dilation invariant functions on \(T^{*}X\backslash 0_{T^{*}X}\) (here, \(0_{T^{*}X}\) is the zero section of \(T^{*}X\)) is denoted by \(C(S^{*}X)\) and is called the algebra of continuous functions on the cosphere bundle of \(X.\)_

### Canonical weight of Riemannian manifold

If \(X\) is a smooth \(d\)-dimensional manifold, then \(T^{*}X\) has a canonical symplectic structure. The corresponding Liouville measure \(\lambda\) on \(T^{*}X\) satisfies the following property (see, for instance, [6]):

\[\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}fdm=\int_{T^{*}X}(f\circ H_{i})d\lambda,\quad f\in C_{c}(\Omega_{i}\times\mathbb{R}^{d}),\quad i\in\mathbb{I}. \tag{2.9}\]

Here, \(f\circ H_{i}\) denotes the function on \(T^{*}X\) which equals \(f\circ H_{i}\) on \(\pi^{-1}(\mathcal{U}_{i})\) and which vanishes outside \(\pi^{-1}(\mathcal{U}_{i}).\) However, there is no canonical way to equip the cosphere bundle \(S^{*}X\) of a smooth manifold \(X\) with a measure. The following class of measures is of particular interest: if \(w\in L_{1}(T^{*}X,\lambda),\) then the functional

\[f\to\int_{T^{*}X}fwd\lambda,\quad f\in C(S^{*}X),\]

generates a measure on \(S^{*}X\) by the Riesz-Markov theorem. However, there is no canonical way to select an integrable function \(w\) on \(T^{*}X.\) This choice becomes possible if we assume, in addition, a Riemannian structure on \(X.\)

Let \(G\) be a Riemannian metric on \(X\). For any \(i\in\mathbb{I},\) the components of the metric \(G\) in the chart \((\mathcal{U}_{i},h_{i})\) give rise to a smooth mapping \(G_{i}:\mathcal{U}_{i}\to\mathrm{GL}^{+}(d,\mathbb{R}).\) (In what follows, \(\mathrm{GL}^{+}(d,\mathbb{R})\) stands for the set of all positive elements in \(\mathrm{GL}(d,\mathbb{R}).\)) For any \(i,j\in\mathbb{I}\) such that \(\mathcal{U}_{i}\cap\mathcal{U}_{j}\neq\varnothing,\) we have

\[G_{j}(t)=J^{*}_{\Phi_{j,i}}(h_{j}(t))\cdot G_{i}(t)\cdot J_{\Phi_{j,i}}(h_{j}(t)),\quad t\in\mathcal{U}_{i}\cap\mathcal{U}_{j}.\]

Here, \(\Phi_{j,i}\) are given in Notation 2.8.
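This transformation rule is simply the chain rule for the pullback metric. Indeed, under the transition map the coordinate components of a tangent vector at \(t\in\mathcal{U}_{i}\cap\mathcal{U}_{j}\) transform as \(v_{i}=J_{\Phi_{j,i}}(h_{j}(t))v_{j},\) so the invariance of the quadratic form gives

\[\langle G_{j}(t)v_{j},v_{j}\rangle=\langle G_{i}(t)v_{i},v_{i}\rangle=\langle J^{*}_{\Phi_{j,i}}(h_{j}(t))\cdot G_{i}(t)\cdot J_{\Phi_{j,i}}(h_{j}(t))v_{j},v_{j}\rangle,\]

which is the displayed identity.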
**Notation 2.13**.: _For every \(i\in\mathbb{I},\) let \(\Omega_{i}\) be as in Notation 2.8 and set \(g_{i}=G_{i}\circ h_{i}^{-1}:\Omega_{i}\to\mathrm{GL}^{+}(d,\mathbb{R}).\) We also set_

\[q_{i}(t,s)=\langle g_{i}(t)^{-1}s,s\rangle,\quad t\in\Omega_{i},\quad s\in\mathbb{R}^{d}.\]

It can be verified by a direct calculation that, for every \(i,j\in\mathbb{I},\) we have

\[q_{i}\circ H_{i}=q_{j}\circ H_{j}\text{ on }\pi^{-1}(\mathcal{U}_{i}\cap\mathcal{U}_{j}).\]

Indeed, writing \(J=J_{\Phi_{i,j}}(t),\) the transformation rule for the metric gives \(g_{j}(\Phi_{i,j}(t))^{-1}=Jg_{i}(t)^{-1}J^{*},\) whence, by Fact 2.9, \(q_{j}(\Phi_{i,j}(t),(J^{*})^{-1}s)=\langle Jg_{i}(t)^{-1}J^{*}(J^{*})^{-1}s,(J^{*})^{-1}s\rangle=\langle g_{i}(t)^{-1}s,s\rangle=q_{i}(t,s).\) By Fact 2.10, there exists a function \(q_{X}\) on \(T^{*}X\) such that

\[q_{i}=q_{X}\circ H_{i}^{-1}\text{ on }\Omega_{i}\times\mathbb{R}^{d},\quad i\in\mathbb{I}. \tag{2.10}\]

The function \(q_{X}\) is the square of the length function on \(T^{*}X\) defined by the induced Riemannian metric on the cotangent bundle \(T^{*}X\).

**Definition 2.14**.: _The function \(e^{-q_{X}}\) on \(T^{*}X\) is called the canonical weight of the Riemannian manifold \((X,G)\)._

If \(X\) is compact, then \(e^{-q_{X}}\in L_{1}(T^{*}X,\lambda).\) The functional on \(C(S^{*}X)\) given by the formula

\[f\to\int_{T^{*}X}fe^{-q_{X}}d\lambda\]

plays a crucial role: it defines the natural measure on \(S^{*}X.\) We note that the latter functional coincides (modulo a constant factor) with integration with respect to the kinematic density on \(S^{*}X\) (see p. 318 in [6]).

### Laplace-Beltrami operator on compact Riemannian manifold

**Notation 2.15**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be connected and open. Let \(g:\Omega\to\mathrm{GL}^{+}(d,\mathbb{R})\) be a smooth mapping. The Laplace-Beltrami operator \(\Delta_{g}:C_{c}^{\infty}(\Omega)\to C_{c}^{\infty}(\Omega)\) is defined by the formula_

\[\Delta_{g}=M_{\det(g)^{-\frac{1}{2}}}\sum_{k,l=1}^{d}D_{k}M_{\det(g)^{\frac{1}{2}}\cdot(g^{-1})_{k,l}}D_{l}.\]

**Definition 2.16**.: _Let \((\phi_{n})_{n=1}^{N}\subset C^{\infty}(X)\) be a finite partition of unity. We call it good if each \(\phi_{n}\) is compactly supported in some chart._

Obviously, good partitions of unity exist only on compact manifolds.

**Definition 2.17**.: _Let \((X,G)\) be a compact Riemannian manifold. Let \(\Omega_{i}\) be as in Notation 2.8 and let \(g_{i}:\Omega_{i}\to\mathrm{GL}^{+}(d,\mathbb{R})\) be as in Notation 2.13. Let \(\Delta_{g_{i}}:C_{c}^{\infty}(\Omega_{i})\to C_{c}^{\infty}(\Omega_{i}),\)\(i\in\mathbb{I},\) be the Laplace-Beltrami operator as in Notation 2.15. Let \((\phi_{n})_{n=1}^{N}\) be a good partition of unity._

_The Laplace-Beltrami operator \(\Delta_{G}:C^{\infty}(X)\to C^{\infty}(X)\) is defined by the formula_

\[\Delta_{G}f=\sum_{n=1}^{N}\left(\Delta_{g_{i_{n}}}\left((f\phi_{n})\circ h_{i_{n}}^{-1}\right)\right)\circ h_{i_{n}},\quad f\in C^{\infty}(X).\]

_Here, \(i_{n}\in\mathbb{I}\) is chosen such that \(\phi_{n}\) is compactly supported in \(\mathcal{U}_{i_{n}}.\)_

Though Definition 2.17 involves a good partition of unity, the operator \(\Delta_{G}\) does not actually depend on the particular choice of a good partition of unity. Theorem 2.4 in [29] yields the following results. The first one is of conceptual importance. The second one is used in the proof of Theorem 1.5.

**Theorem 2.18**.: _Let \((X,G)\) be a compact Riemannian manifold. The Laplace-Beltrami operator admits a self-adjoint extension \(\Delta_{G}:W^{2,2}(X)\to L_{2}(X).\)_

**Theorem 2.19**.: _Let \(g:\mathbb{R}^{d}\to\mathrm{GL}^{+}(d,\mathbb{R}).\) Suppose that_

1. \(g\in C^{\infty}(\mathbb{R}^{d},M_{d}(\mathbb{C}))\) _(that is,_ \(g\) _is smooth and all derivatives are bounded);_
2.
\(\det(g)\geq c\) _for some_ \(0<c\in\mathbb{R}.\)

_Then the operator \(\Delta_{g}:W^{2,2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) is self-adjoint._

### Density on a manifold

Let \(\mathfrak{B}\) be the Borel \(\sigma\)-algebra on the manifold \(X.\) We need the notion of a density on a manifold, available e.g. in [22, p. 87].

**Definition 2.20**.: _Let \(\nu\) be a countably additive measure on \(\mathfrak{B}\). We assume that, for every \(i\in\mathbb{I},\) the measure \(\nu\circ h_{i}^{-1}\) on \(\Omega_{i}\) is absolutely continuous with respect to the Lebesgue measure on \(\Omega_{i},\) and its Radon-Nikodym derivative \(a_{i}\) is strictly positive and continuous on \(\Omega_{i}.\)_

_In this case, we say that \(\nu\) is a continuous positive density on \(X.\)_

## 3. Invariance of principal symbol under diffeomorphisms

In this section, we formulate a theorem which provides a partial positive answer to Question 1.2. This result is stated in two versions: Theorem 3.5 for diffeomorphisms of \(\mathbb{R}^{d}\) (which is the core technical difficulty) and Theorem 3.11 for local diffeomorphisms (the result which will actually be used).

### Invariance under diffeomorphisms of \(\mathbb{R}^{d}\)

We need the following notations. Recall that \(\mathrm{GL}(d,\mathbb{R})\) stands for the group of invertible real \(d\times d\) matrices.

**Notation 3.1**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism._

1. _Let_ \(J_{\Phi}:\mathbb{R}^{d}\to\mathrm{GL}(d,\mathbb{R})\) _be the Jacobian matrix of_ \(\Phi;\)
2. _Let the unitary operator_ \(U_{\Phi}:L_{2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) _be defined by setting_ \[(U_{\Phi}\xi)(t)=|\mathrm{det}(J_{\Phi})|^{\frac{1}{2}}(t)\xi(\Phi(t)),\quad\xi\in L_{2}(\mathbb{R}^{d}),\quad t\in\mathbb{R}^{d}.\]
3. _Let_ \(\Theta_{\Phi}:\mathbb{R}^{d}\times\mathbb{S}^{d-1}\to\mathbb{R}^{d}\times\mathbb{S}^{d-1}\) _be defined by setting_ \[\Theta_{\Phi}(t,s)=(\Phi^{-1}(t),O_{J^{*}_{\Phi}(\Phi^{-1}(t))}s),\quad t\in\mathbb{R}^{d},\quad s\in\mathbb{S}^{d-1},\] _where_ \(J^{*}_{\Phi}\) _is the adjoint of the Jacobian matrix._ _Here, for_ \(A\in\mathrm{GL}(d,\mathbb{R}),\) _we set_ \[O_{A}s=\frac{As}{|As|},\quad s\in\mathbb{S}^{d-1}.\]
4. _Let_ \(\Xi_{\Phi}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}\times\mathbb{R}^{d}\) _be defined by setting_ \[\Xi_{\Phi}(t,s)=(\Phi^{-1}(t),J^{*}_{\Phi}(\Phi^{-1}(t))s),\quad t,s\in\mathbb{R}^{d}.\]

We frequently need the following compatibility lemma.

**Lemma 3.2**.: _Let \(\Phi_{1},\Phi_{2}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be diffeomorphisms.
We have_

\[\Theta_{\Phi_{1}\circ\Phi_{2}}=\Theta_{\Phi_{2}}\circ\Theta_{\Phi_{1}}.\]

Proof.: Indeed,

\[\Theta_{\Phi}(t,s)=(\Phi^{-1}(t),O_{A^{\Phi}(t)}s),\]

where

\[A^{\Phi}(t)=J^{*}_{\Phi}(\Phi^{-1}(t)).\]

We have

\[(\Theta_{\Phi_{2}}\circ\Theta_{\Phi_{1}})(t,s)=\Theta_{\Phi_{2}}(t^{\prime},s^{\prime}),\quad(t^{\prime},s^{\prime})=(\Phi_{1}^{-1}(t),O_{A^{\Phi_{1}}(t)}s).\]

Thus,

\[(\Theta_{\Phi_{2}}\circ\Theta_{\Phi_{1}})(t,s)=(\Phi_{2}^{-1}(t^{\prime}),O_{A^{\Phi_{2}}(t^{\prime})}s^{\prime})=(\Phi_{2}^{-1}(\Phi_{1}^{-1}(t)),O_{A^{\Phi_{2}}(\Phi_{1}^{-1}(t))}O_{A^{\Phi_{1}}(t)}s).\]

Note that

\[(O_{A_{1}}\circ O_{A_{2}})s=\frac{A_{1}(O_{A_{2}}s)}{|A_{1}(O_{A_{2}}s)|}=\frac{\frac{A_{1}A_{2}s}{|A_{2}s|}}{\frac{|A_{1}A_{2}s|}{|A_{2}s|}}=\frac{A_{1}A_{2}s}{|A_{1}A_{2}s|}=O_{A_{1}\cdot A_{2}}s,\quad s\in\mathbb{S}^{d-1}.\]

Since \(O_{A_{1}}\circ O_{A_{2}}=O_{A_{1}\cdot A_{2}},\) it follows that

\[(\Theta_{\Phi_{2}}\circ\Theta_{\Phi_{1}})(t,s)=((\Phi_{1}\circ\Phi_{2})^{-1}(t),O_{A^{\Phi_{2}}(\Phi_{1}^{-1}(t))\cdot A^{\Phi_{1}}(t)}s).\]

At the same time,

\[\Theta_{\Phi_{1}\circ\Phi_{2}}=((\Phi_{1}\circ\Phi_{2})^{-1}(t),O_{A^{\Phi_{1}\circ\Phi_{2}}(t)}s).\]

It suffices to show that

\[A^{\Phi_{1}\circ\Phi_{2}}(t)=A^{\Phi_{2}}(\Phi_{1}^{-1}(t))\cdot A^{\Phi_{1}}(t).\]

The latter equality is written as

\[J^{*}_{\Phi_{1}\circ\Phi_{2}}((\Phi_{1}\circ\Phi_{2})^{-1}(t))=J^{*}_{\Phi_{2}}(\Phi_{2}^{-1}(\Phi_{1}^{-1}(t)))\cdot J^{*}_{\Phi_{1}}(\Phi_{1}^{-1}(t)).\]

Replacing \(t\) with \((\Phi_{1}\circ\Phi_{2})(t)\), we need to verify that

\[J^{*}_{\Phi_{1}\circ\Phi_{2}}(t)=J^{*}_{\Phi_{2}}(t)\cdot J^{*}_{\Phi_{1}}(\Phi_{2}(t)).\]

In other words,

\[J_{\Phi_{1}\circ\Phi_{2}}(t)=J_{\Phi_{1}}(\Phi_{2}(t))\cdot J_{\Phi_{2}}(t).\]

This is the chain rule property.

**Corollary 3.3**.: \(\Theta_{\Phi}:\mathbb{R}^{d}\times\mathbb{S}^{d-1}\to\mathbb{R}^{d}\times\mathbb{S}^{d-1}\) _is a diffeomorphism._

Proof.: Obviously, \(J^{*}_{\Phi}\circ\Phi^{-1}:\mathbb{R}^{d}\to\mathrm{GL}(d,\mathbb{R})\) is a smooth mapping. For every smooth mapping \(A:\mathbb{R}^{d}\to\mathrm{GL}(d,\mathbb{R})\), the mapping

\[(t,s)\to O_{A(t)}s,\quad t\in\mathbb{R}^{d},\quad s\in\mathbb{S}^{d-1},\]

is smooth. Thus, \(\Theta_{\Phi}\) is smooth. By Lemma 3.2, its inverse is \(\Theta_{\Phi^{-1}},\) which is also a smooth mapping.

We are now ready to state the main result in this subsection.

**Definition 3.4**.: _We say that \(T\in\Pi\) is compactly supported if there exists \(\phi\in C^{\infty}_{c}(\mathbb{R}^{d})\) such that \(T=T\pi_{1}(\phi)=\pi_{1}(\phi)T.\)_

**Theorem 3.5**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball.
If \(T\in\Pi\) is compactly supported, then \(U_{\Phi}^{-1}TU_{\Phi}\in\Pi.\) Furthermore,_

\[\mathrm{sym}\Big{(}U_{\Phi}^{-1}TU_{\Phi}\Big{)}=\mathrm{sym}(T)\circ\Theta_{\Phi}.\]

_If we view symbols as homogeneous functions1 on \(\mathbb{R}^{d}\times(\mathbb{R}^{d}\backslash\{0\}),\) then_

\[\mathrm{sym}\Big{(}U_{\Phi}^{-1}TU_{\Phi}\Big{)}=\mathrm{sym}(T)\circ\Xi_{\Phi}.\]

Footnote 1: A function \(f:\mathbb{R}^{d}\times\mathbb{S}^{d-1}\to\mathbb{C}\) can be uniquely extended to a homogeneous function on \(\mathbb{R}^{d}\times(\mathbb{R}^{d}\backslash\{0\})\) by setting

\[f(t,s)\stackrel{{ def}}{{=}}f(t,\frac{s}{|s|}),\quad t,s\in\mathbb{R}^{d}.\]

If \(f\) is homogeneous, then

\[(f\circ\Theta_{\Phi})(t,s)=f(\Phi^{-1}(t),\frac{J^{*}_{\Phi}(\Phi^{-1}(t))s}{|J^{*}_{\Phi}(\Phi^{-1}(t))s|})=f(\Phi^{-1}(t),J^{*}_{\Phi}(\Phi^{-1}(t))s)=(f\circ\Xi_{\Phi})(t,s).\]

We prove Theorem 3.5 in Section 5. There are two reasons for us to require that \(\Phi\) be affine outside of some ball. The first reason is that the equivariant behavior of the principal symbol under such diffeomorphisms is sufficient for the proof of Theorem 3.11 below. The second reason is that in the proof of Theorem 3.5 we conjugate the Laplacian with \(U_{\Phi}.\) Hence, it is of crucial importance that \(U_{\Phi}\) preserve the domain of the Laplacian. Recall that the domain of the Laplacian is the Sobolev space \(W^{2,2}(\mathbb{R}^{d})\) and, by Theorem 2.3, \(U_{\Phi}\) leaves the domain of the Laplacian invariant.

### Invariance under local diffeomorphisms

One may ask how the algebra and the principal symbol mapping behave (locally) under a change of coordinates. We need the following notations.

**Notation 3.6**.: _Let \(H\) be a Hilbert space and let \(p\in B(H)\) be a projection._

1. _If_ \(T\in B(H)\) _is such that_ \(T=pTp,\) _then we define the operator_ \(\operatorname{Rest}_{p}(T)\in B(pH)\) _by setting_ \(\operatorname{Rest}_{p}(T)=T|_{pH}.\)
2. _If_ \(T\in B(pH),\) _then we define_ \(\operatorname{Ext}_{p}(T)\in B(H)\) _by setting_ \(\operatorname{Ext}_{p}(T)=T\circ p.\)

**Notation 3.7**.: _Let \(\Omega\subset\mathbb{R}^{d}.\) If \(T\in B(L_{2}(\mathbb{R}^{d}))\) is such that \(T=M_{\chi_{\Omega}}TM_{\chi_{\Omega}},\) then \(\operatorname{Rest}_{\Omega}(T)\in B(L_{2}(\Omega))\) is a shorthand for \(\operatorname{Rest}_{M_{\chi_{\Omega}}}(T).\) If \(T\in B(L_{2}(\Omega)),\) then \(\operatorname{Ext}_{\Omega}(T)\in B(L_{2}(\mathbb{R}^{d}))\) is a shorthand for \(\operatorname{Ext}_{M_{\chi_{\Omega}}}(T).\)_

**Notation 3.8**.: _Let \(\Omega,\Omega^{\prime}\subset\mathbb{R}^{d}\) be open sets and let \(\Phi:\Omega\to\Omega^{\prime}\) be a diffeomorphism._

1. _Let_ \(J_{\Phi}:\Omega\to\operatorname{GL}(d,\mathbb{R})\) _be the Jacobian matrix of_ \(\Phi;\)
2. _Let the unitary operator_ \(U_{\Phi}:L_{2}(\Omega^{\prime})\to L_{2}(\Omega)\) _be defined by setting_ \[(U_{\Phi}\xi)(t)=|\det(J_{\Phi})|^{\frac{1}{2}}(t)\xi(\Phi(t)),\quad\xi\in L_{2}(\Omega^{\prime}),\quad t\in\Omega;\]
3. _Let_ \(\Theta_{\Phi}:\Omega^{\prime}\times\mathbb{S}^{d-1}\to\Omega\times\mathbb{S}^{d-1}\) _be defined by setting_ \[\Theta_{\Phi}(t,s)=(\Phi^{-1}(t),O_{J_{\Phi}^{*}(\Phi^{-1}(t))}s),\quad t\in\Omega^{\prime},\quad s\in\mathbb{S}^{d-1}.\]
4.
_Let_ \(\Xi_{\Phi}:\Omega^{\prime}\times\mathbb{R}^{d}\to\Omega\times\mathbb{R}^{d}\) _be defined by setting_ \[\Xi_{\Phi}(t,s)=(\Phi^{-1}(t),J_{\Phi}^{*}(\Phi^{-1}(t))s),\quad t\in\Omega^{\prime},\quad s\in\mathbb{R}^{d}.\]

**Lemma 3.9**.: _Let \(\Phi_{1}:\Omega\to\Omega^{\prime}\) and \(\Phi_{2}:\Omega^{\prime\prime}\to\Omega\) be diffeomorphisms. We have_

\[\Theta_{\Phi_{1}\circ\Phi_{2}}=\Theta_{\Phi_{2}}\circ\Theta_{\Phi_{1}}.\]

Proof.: The proof is identical to that of Lemma 3.2.

**Corollary 3.10**.: \(\Theta_{\Phi}:\Omega^{\prime}\times\mathbb{S}^{d-1}\to\Omega\times\mathbb{S}^{d-1}\) _is a diffeomorphism._

Proof.: Obviously, \(J_{\Phi}^{*}\circ\Phi^{-1}:\Omega^{\prime}\to\operatorname{GL}(d,\mathbb{R})\) is a smooth mapping. For every smooth mapping \(A:\Omega\to\operatorname{GL}(d,\mathbb{R}),\) the mapping

\[(t,s)\to O_{A(t)}s,\quad t\in\Omega,\quad s\in\mathbb{S}^{d-1},\]

is smooth. Thus, \(\Theta_{\Phi}\) is smooth. By Lemma 3.9, its inverse is \(\Theta_{\Phi^{-1}},\) which is also a smooth mapping.

**Theorem 3.11**.: _Let \(\Omega,\Omega^{\prime}\subset\mathbb{R}^{d}\) be open sets and let \(\Phi:\Omega\to\Omega^{\prime}\) be a diffeomorphism. If \(T\in\Pi\) is compactly supported in \(\Omega,\) then_

\[\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)}\in\Pi.\]

_Furthermore,_

\[\operatorname{sym}\Bigl{(}\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)}\Bigr{)}=\operatorname{sym}(T)\circ\Theta_{\Phi}.\]

_If we view symbols as homogeneous functions on \(\mathbb{R}^{d}\times(\mathbb{R}^{d}\backslash\{0\}),\) then_

\[\operatorname{sym}\Bigl{(}\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)}\Bigr{)}=\operatorname{sym}(T)\circ\Xi_{\Phi}.\]

Theorem 3.11 is proved in Section 6 as a corollary of Theorem 3.5.

## 4. Conjugation of differential operators with \(U_{\Phi}\)

In this section, we examine the operators \(U_{\Phi}^{-1}D_{k}U_{\Phi}:W^{1,2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) and \(U_{\Phi}^{-1}\Delta U_{\Phi}:W^{2,2}(\mathbb{R}^{d})\to L_{2}(\mathbb{R}^{d})\) and show that they may be viewed as differential operators.

**Lemma 4.1**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. The mapping \(V_{\Phi}\) defined by setting \(V_{\Phi}:\xi\to\xi\circ\Phi\) is bounded on \(L_{2}(\mathbb{R}^{d}),\) and so is \(V_{\Phi}^{-1}\)._

Proof.: This is a special case of Theorem 2.3 (or it can be verified by hand).

**Lemma 4.2**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. We have_

\[U_{\Phi}^{-1}\pi_{1}(f)U_{\Phi}=V_{\Phi}^{-1}\pi_{1}(f)V_{\Phi}=\pi_{1}(f\circ\Phi^{-1}).\]

Proof.: By the definition of \(U_{\Phi}\) (in Notation 3.1) and of \(V_{\Phi}\) (in Lemma 4.1), we have

\[U_{\Phi}=M_{|\det(J_{\Phi})|^{\frac{1}{2}}}V_{\Phi}.\]

It is immediate that

\[U_{\Phi}^{-1}M_{f}U_{\Phi}=V_{\Phi}^{-1}M_{|\det(J_{\Phi})|^{-\frac{1}{2}}}M_{f}M_{|\det(J_{\Phi})|^{\frac{1}{2}}}V_{\Phi}.\]

Since

\[M_{|\det(J_{\Phi})|^{-\frac{1}{2}}}M_{f}M_{|\det(J_{\Phi})|^{\frac{1}{2}}}=M_{f},\]

it follows that

\[U_{\Phi}^{-1}M_{f}U_{\Phi}=V_{\Phi}^{-1}M_{f}V_{\Phi}=M_{f\circ\Phi^{-1}}.\]

The assertion of the lemma now follows from Definition 1.1.

**Notation 4.3**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism.
Denote_

\[(a_{k,l}^{\Phi})_{k,l=1}^{d}=J_{\Phi}^{*}\circ\Phi^{-1},\quad(b_{k,l}^{\Phi})_{k,l=1}^{d}=|J_{\Phi}^{*}\circ\Phi^{-1}|^{2}.\]

**Lemma 4.4**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. The functions_

\[a_{k}^{\Phi}=\left(|\det(J_{\Phi})|^{-\frac{1}{2}}\cdot D_{k}(|\det(J_{\Phi})|^{\frac{1}{2}})\right)\circ\Phi^{-1},\quad 1\leq k\leq d,\]

\[b_{l}^{\Phi}=2\sum_{k=1}^{d}\Re(\bar{a}_{k}^{\Phi}\cdot a_{k,l}^{\Phi}),\quad 1\leq l\leq d,\]

\[b^{\Phi}=\sum_{k=1}^{d}\sum_{l=1}^{d}D_{l}(\bar{a}_{k,l}^{\Phi}\cdot a_{k}^{\Phi})+\sum_{k=1}^{d}|a_{k}^{\Phi}|^{2},\]

_belong to \(C_{c}^{\infty}(\mathbb{R}^{d})\)._

Proof.: Since \(\Phi\) is a diffeomorphism, all those functions are smooth. Since \(\Phi\) is affine outside of some ball, \(J_{\Phi}\) is constant outside of some ball. Thus, \(D_{k}(|\det(J_{\Phi})|^{\frac{1}{2}})=0\) outside of some ball. Using the definition of \(a_{k}^{\Phi},\) we now see that it vanishes outside of some ball. Using the definitions of \(b_{l}^{\Phi}\) and \(b^{\Phi},\) we now see that they vanish outside of some ball.

**Lemma 4.5**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. We have_

\[V_{\Phi}^{-1}D_{k}V_{\Phi}=\sum_{l=1}^{d}M_{a_{k,l}^{\Phi}}D_{l},\quad 1\leq k\leq d.\]

_Here, the equalities are understood as equalities of differential operators acting from \(W^{1,2}(\mathbb{R}^{d})\) to \(L_{2}(\mathbb{R}^{d}).\)_

Proof.: By the chain rule, we have

\[D_{k}V_{\Phi}\xi=D_{k}(\xi\circ\Phi)=\sum_{l=1}^{d}((D_{l}\xi)\circ\Phi)\cdot iD_{k}\Phi_{l},\quad\xi\in W^{1,2}(\mathbb{R}^{d}).\]

Using the notations for \(V_{\Phi}\) (in Lemma 4.1) and for the multiplication operator, we can rewrite this formula as follows:

\[D_{k}V_{\Phi}=\sum_{l=1}^{d}M_{iD_{k}\Phi_{l}}V_{\Phi}D_{l}.\]

Thus,

\[V_{\Phi}^{-1}D_{k}V_{\Phi}=\sum_{l=1}^{d}V_{\Phi}^{-1}M_{iD_{k}\Phi_{l}}V_{\Phi}\cdot D_{l}\stackrel{{ L.4.2}}{{=}}\sum_{l=1}^{d}M_{i(D_{k}\Phi_{l})\circ\Phi^{-1}}D_{l}=\sum_{l=1}^{d}M_{a_{k,l}^{\Phi}}D_{l},\]

where the last equality follows from the definition of \(a_{k,l}^{\Phi}\) (in Notation 4.3) and the fact that \(J_{\Phi}=(iD_{l}\Phi_{k})_{k,l=1}^{d}.\)

**Lemma 4.6**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. We have \(U_{\Phi}:W^{m,2}(\mathbb{R}^{d})\to W^{m,2}(\mathbb{R}^{d}),\)\(m\in\mathbb{Z}_{+}.\)_

Proof.: By the definition of \(U_{\Phi}\) (in Notation 3.1) and of \(V_{\Phi}\) (in Lemma 4.1), we have

\[U_{\Phi}=M_{h_{\Phi}}V_{\Phi},\quad h_{\Phi}=|\text{det}(J_{\Phi})|^{\frac{1}{2}}.\]

By Theorem 2.3, \(V_{\Phi}:W^{m,2}(\mathbb{R}^{d})\to W^{m,2}(\mathbb{R}^{d}).\) Since \(\Phi\) is a diffeomorphism and since \(\Phi\) is affine outside of some ball, it follows that \(h_{\Phi}\) is a smooth function on \(\mathbb{R}^{d}\) which is constant outside of some ball. It follows that \(M_{h_{\Phi}}:W^{m,2}(\mathbb{R}^{d})\to W^{m,2}(\mathbb{R}^{d}).\) A combination of those mappings yields the assertion.
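For orientation, we record the simplest instance of the above formulas. If \(\Phi(t)=At+b\) is globally affine with \(A\in\mathrm{GL}(d,\mathbb{R}),\) then \(J_{\Phi}=A\) is constant, so \(h_{\Phi}=|\det(A)|^{\frac{1}{2}}\) is constant, \(a_{k,l}^{\Phi}=(A^{*})_{k,l}\) and \(a_{k}^{\Phi}=0.\) Since \(h_{\Phi}\) is constant, \(U_{\Phi}^{-1}D_{k}U_{\Phi}=V_{\Phi}^{-1}D_{k}V_{\Phi},\) and Lemma 4.5 gives

\[U_{\Phi}^{-1}D_{k}U_{\Phi}=V_{\Phi}^{-1}D_{k}V_{\Phi}=\sum_{l=1}^{d}M_{(A^{*})_{k,l}}D_{l},\quad 1\leq k\leq d,\]

which can also be verified directly on the Fourier side.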
By Lemma 4.6, we have

\[W^{1,2}(\mathbb{R}^{d})\stackrel{{ U_{\Phi}}}{{\to}}W^{1,2}(\mathbb{R}^{d})\stackrel{{ D_{k}}}{{\to}}L_{2}(\mathbb{R}^{d})\stackrel{{ U_{\Phi}^{-1}}}{{\to}}L_{2}(\mathbb{R}^{d}),\]

\[W^{2,2}(\mathbb{R}^{d})\stackrel{{ U_{\Phi}}}{{\to}}W^{2,2}(\mathbb{R}^{d})\stackrel{{\Delta}}{{\to}}L_{2}(\mathbb{R}^{d})\stackrel{{ U_{\Phi}^{-1}}}{{\to}}L_{2}(\mathbb{R}^{d}).\]

Hence, we may view \(U_{\Phi}^{-1}D_{k}U_{\Phi}\) (respectively, \(U_{\Phi}^{-1}\Delta U_{\Phi}\)) as operators from \(W^{1,2}(\mathbb{R}^{d})\) (respectively, from \(W^{2,2}(\mathbb{R}^{d})\)) to \(L_{2}(\mathbb{R}^{d}).\)

**Lemma 4.7**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. We have_

1. \[U_{\Phi}^{-1}D_{k}U_{\Phi}=\sum_{l=1}^{d}M_{a_{k,l}^{\Phi}}D_{l}+M_{a_{k}^{\Phi}},\]
2. \[-U_{\Phi}^{-1}\Delta U_{\Phi}=\sum_{l_{1},l_{2}=1}^{d}D_{l_{1}}M_{b_{l_{1},l_{2}}^{\Phi}}D_{l_{2}}+\sum_{l=1}^{d}M_{b_{l}^{\Phi}}D_{l}+M_{b^{\Phi}}.\]

_Here, the equalities are understood as equalities of linear operators acting from \(W^{1,2}(\mathbb{R}^{d})\) (respectively, from \(W^{2,2}(\mathbb{R}^{d})\)) to \(L_{2}(\mathbb{R}^{d})\)._

Proof.: Repeating the beginning of the proof of Lemma 4.6, we write

\[U_{\Phi}=M_{h_{\Phi}}V_{\Phi},\quad h_{\Phi}=|\mathrm{det}(J_{\Phi})|^{\frac{1}{2}}.\]

It is immediate that

\[U_{\Phi}^{-1}D_{k}U_{\Phi}=V_{\Phi}^{-1}M_{h_{\Phi}^{-1}}D_{k}M_{h_{\Phi}}V_{\Phi}. \tag{4.1}\]

Clearly,

\[M_{h_{\Phi}^{-1}}D_{k}M_{h_{\Phi}}=D_{k}+M_{h_{\Phi}^{-1}}\cdot[D_{k},M_{h_{\Phi}}]=D_{k}+M_{h_{\Phi}^{-1}\cdot D_{k}h_{\Phi}}. \tag{4.2}\]

Combining (4.1) and (4.2), we obtain

\[U_{\Phi}^{-1}D_{k}U_{\Phi}=V_{\Phi}^{-1}D_{k}V_{\Phi}+V_{\Phi}^{-1}M_{h_{\Phi}^{-1}\cdot D_{k}h_{\Phi}}V_{\Phi}. \tag{4.3}\]

It follows from Lemma 4.2 and the definition of \(a_{k}^{\Phi}\) (in Lemma 4.4) that

\[V_{\Phi}^{-1}M_{h_{\Phi}^{-1}\cdot D_{k}h_{\Phi}}V_{\Phi}=M_{a_{k}^{\Phi}}. \tag{4.4}\]

The equality (1) follows by combining Lemma 4.5, (4.3) and (4.4).
Taking the adjoint of (1), we write \[U_{\Phi}^{-1}D_{k}U_{\Phi}=\sum_{l=1}^{d}D_{l}M_{\tilde{a}_{k,l}^{\Phi}}+M_{ \tilde{a}_{k}^{\Phi}}.\] Thus, \[U_{\Phi}^{-1}D_{k}^{2}U_{\Phi}=\big{(}\sum_{l=1}^{d}D_{l}M_{\tilde{a}_{k,l}^{ \Phi}}+M_{\tilde{a}_{k}^{\Phi}}\big{)}\cdot\big{(}\sum_{l=1}^{d}M_{a_{k,l}^{ \Phi}}D_{l}+M_{a_{k}^{\Phi}}\big{)}=\] \[=\sum_{l_{1},l_{2}=1}^{d}D_{l_{1}}M_{\tilde{a}_{k,l_{1}}^{\Phi}}M_{a_{k,l_{2}} ^{\Phi}}D_{l_{2}}+\sum_{l_{1}=1}^{d}D_{l_{1}}M_{\tilde{a}_{k,l_{1}}^{\Phi}}M_{a _{k}^{\Phi}}+\] \[+\sum_{l_{2}=1}^{d}M_{\tilde{a}_{k}^{\Phi}}M_{a_{k,l_{2}}^{\Phi}}D_{l_{2}}+M_{ |a_{k}^{\Phi}|^{2}}.\] Clearly, \[\sum_{l_{1}=1}^{d}D_{l_{1}}M_{\tilde{a}_{k,l_{1}}^{\Phi}}M_{a_{k}^{\Phi}}+\sum _{l_{2}=1}^{d}M_{\tilde{a}_{k}^{\Phi}}M_{a_{k,l_{2}}^{\Phi}}D_{l_{2}}=\sum_{l=1 }^{d}D_{l}M_{\tilde{a}_{k,l}^{\Phi}\cdot a_{k}^{\Phi}}+\sum_{l=1}^{d}M_{ \tilde{a}_{k}^{\Phi}\cdot a_{k,l}^{\Phi}}D_{l}=\] \[=\sum_{l=1}^{d}M_{\tilde{a}_{k,l}^{\Phi}\cdot a_{k}^{\Phi}}D_{l}+\sum_{l=1}^{d }M_{\tilde{a}_{k}^{\Phi}\cdot a_{k,l}^{\Phi}}D_{l}+\sum_{l=1}^{d}[D_{l},M_{ \tilde{a}_{k,l}^{\Phi}\cdot a_{k}^{\Phi}}]=\] \[\stackrel{{\eqref{eq:a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a_a 1} Thus, \[-U_{\Phi}^{-1}\Delta U_{\Phi}=\sum_{k=1}^{d}U_{\Phi}^{-1}D_{k}^{2}U_{ \Phi}=\sum_{k=1}^{d}\sum_{l_{1},l_{2}=1}^{d}D_{l_{1}}M_{\bar{a}^{\Phi}_{k,l_{1}} \cdot a^{\Phi}_{k,l_{2}}}D_{l_{2}}+\] \[+2\sum_{k=1}^{d}\sum_{l=1}^{d}M_{\Re(\bar{a}^{\Phi}_{k}\cdot a^{ \Phi}_{k,l})}D_{l}+\sum_{k=1}^{d}\sum_{l=1}^{d}M_{D_{l}(\bar{a}^{\Phi}_{k,l} \cdot a^{\Phi}_{k})}+\sum_{k=1}^{d}M_{|a^{\Phi}_{k}|^{2}}.\] By the definition on \(b^{\Phi}_{l}\) and \(b^{\Phi}\) (in Lemma 4.4), we have \[-U_{\Phi}^{-1}\Delta U_{\Phi}=\sum_{k=1}^{d}\sum_{l_{1},l_{2}=1}^{d}D_{l_{1}}M _{\bar{a}^{\Phi}_{k,l_{1}}\cdot a^{\Phi}_{k,l_{2}}}D_{l_{2}}+\sum_{l=1}^{d}M_{ b^{\Phi}_{l}}D_{l}+M_{b^{\Phi}}.\] Consider now the highest order term. Recalling Notation 4.3, we write \[\sum_{k=1}^{d}\bar{a}^{\Phi}_{k,l_{1}}a^{\Phi}_{k,l_{2}}=\big{(}|(a^{\Phi}_{k, l})^{d}_{k,l=1}|^{2}\big{)}_{l_{1},l_{2}}=\big{(}|J^{*}_{\Phi}\circ\Phi^{-1}|^{2 }\big{)}_{l_{1},l_{2}}=b^{\Phi}_{l_{1},l_{2}}.\] This delivers (2). ## 5. Proof of Theorem 3.5 The proof of Theorem 3.5 is somewhat technical and is presented below in the series of lemmas. The strategy is as follows: 1. to show that every compact operator on \(L_{2}(\mathbb{R}^{d})\) belongs to \(\Pi\); 2. to show that the conjugation of \(M_{\phi}\frac{D_{k}}{\sqrt{1-\Delta}}\), \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d})\), by \(U_{\Phi}\) belongs to \(\Pi\) modulo compact operators; 3. to conclude that the conjugation of \(M_{\phi}\frac{D_{k}}{\sqrt{-\Delta}}\), \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d})\), by \(U_{\Phi}\) belongs to \(\Pi\); 4. to conclude the argument in Theorem 3.5; The following assertion is well-known (see e.g. Corollary 4.1.10 in [9]). **Lemma 5.1**.: _Let \(\mathcal{A}\) be a \(C^{*}\)-algebra. Let \(\pi:\mathcal{A}\to B(H)\) be an irreducible representation. One of the following mutually exclusive options holds:_ 1. \(\pi(\mathcal{A})\) _does not contain any compact operator (except for_ \(0\)_);_ 2. \(\pi(\mathcal{A})\) _contains every compact operator._ We now apply Lemma 5.1 to the \(C^{*}\)-algebra \(\mathcal{A}=\Pi\) and infer that \(\Pi\) contains the ideal \(\mathcal{K}(L_{2}(\mathbb{R}^{d}))\). 
**Lemma 5.2**.: _The algebra \(\mathcal{K}(L_{2}(\mathbb{R}^{d}))\) is contained in \(\Pi\) and coincides with the kernel of the homomorphism \(\mathrm{sym}.\)_ Proof.: Since \(\Pi\) contains \(\pi_{1}(\mathcal{A}_{1})\), it follows (here, \(X^{\prime}\) denotes the commutant of the set \(X\subset B(L_{2}(\mathbb{R}^{d}))\)) that \[\Pi^{\prime}\subset\Big{(}\pi_{1}(\mathcal{A}_{1})\Big{)}^{\prime}=\Big{(}\pi _{1}(L_{\infty}(\mathbb{R}^{d}))\Big{)}^{\prime}=\pi_{1}(L_{\infty}(\mathbb{R} ^{d})).\] Define \(g_{n,k}\in C(\mathbb{S}^{d-1})\), \(1\leq k\leq d\), \(n\in\mathbb{N}\), by setting \(g_{k}(s)=s_{k}^{\frac{1}{2n+1}}\), \(s\in\mathbb{S}^{d-1}\). Clearly, \(\pi_{2}(g_{n,k})\to\mathrm{sgn}(D_{k})\) as \(n\to\infty\) in weak operator topology. Thus, \(\mathrm{sgn}(D_{k})\) belongs to the weak closure of \(\pi_{2}(\mathcal{A}_{2})\) and, hence, to the weak closure of \(\Pi.\) Therefore, \[\Pi^{\prime}\subset(\mathrm{sgn}(D_{k}))^{\prime},\quad 1\leq k\leq d.\] Thus, \[\Pi^{\prime}\subset\Big{(}\bigcap_{1\leq k\leq d}(\operatorname{sgn}(D_{k}))^{ \prime}\Big{)}\cap\pi_{1}(L_{\infty}(\mathbb{R}^{d})).\] For \(t\in\mathbb{R}^{d}\), denote by \(\tilde{t}_{k}\in\mathbb{R}^{d-1}\) the vector obtained by eliminating the \(k\)-th component of \(t.\) If \(f\in L_{\infty}(\mathbb{R}^{d})\) is such that \(\pi_{1}(f)\) commutes with \(\operatorname{sgn}(D_{k})\), then, for almost every \(\tilde{t}_{k}\in\mathbb{R}^{d-1}\), the function \(f(\tilde{t}_{k},\cdot)\) commutes with the Hilbert transform. This easily implies that, for almost every \(\tilde{t}_{k}\in\mathbb{R}^{d-1}\), the function \(f(\tilde{t}_{k},\cdot)\) is constant. If \(f\in L_{\infty}(\mathbb{R}^{d})\) is such that \(\pi_{1}(f)\) commutes with _every_\(\operatorname{sgn}(D_{k})\), \(1\leq k\leq d\), then \(f=\operatorname{const}.\) Hence, \(\Pi^{\prime}\) is trivial. By Proposition II.6.1.8 in [5], representation \(\operatorname{id}:\Pi\to B(L_{2}(\mathbb{R}^{d}))\) is irreducible. We now demonstrate that \(\Pi\) contains a non-zero compact operator. As proved above, for every non-zero \(f\in C_{c}^{\infty}(\mathbb{R}^{d})\), there exists \(1\leq k\leq d\) such that \(\pi_{1}(f)\) does not commutes with \(\operatorname{sgn}(D_{k})\). Since \(\pi_{2}(g_{n,k})\to\operatorname{sgn}(D_{k})\) as \(n\to\infty\) in weak operator topology, it follows that \(\pi_{1}(f)\) does not commute with \(\pi_{2}(g_{n,k})\) for some \(n,k.\) Thus, the operator \([\pi_{1}(f),\pi_{2}(g_{n,k})]\) is a non-zero compact operator, which belongs to \(\Pi.\) The first assertion of the lemma follows now from Lemma 5.1. Let \(q:B(L_{2}(\mathbb{R}^{d}))\to B(L_{2}(\mathbb{R}^{d}))/\mathcal{K}(L_{2}( \mathbb{R}^{d}))\) be the canonical quotient map. Recall (see the proof of Theorem 3.3 in [20]) that \(\operatorname{sym}\) is constructed as a composition \[\operatorname{sym}=\theta^{-1}\circ q,\] where \(\theta^{-1}\) is some linear isomorphism (its definition and properties are irrelevant at the current proof). It follows that the kernel of \(\operatorname{sym}\) coincides with the kernel of \(q\), which is \(\mathcal{K}(L_{2}(\mathbb{R}^{d}))\). **Notation 5.3**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. 
Denote_ \[p^{\Phi}(t,s)=\sum_{l_{1},l_{2}=1}^{d}b_{l_{1},l_{2}}^{\Phi}(t)s_{l_{1}}s_{l_{ 2}},\quad r_{k}^{\Phi}(t,s)=\sum_{l=1}^{d}a_{k,l}^{\Phi}(t)s_{l},\quad 1\leq k \leq d.\] _Here, \((a_{k,l}^{\Phi})_{k,l=1}^{d}\) and \((b_{l_{1},l_{2}}^{\Phi})_{l_{1},l_{2}=1}^{d}\) are as in Notation 4.3._ The following two lemmas form the core of our computation. **Lemma 5.4**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. We have_ \[U_{\Phi}^{-1}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}\in\operatorname{Op}(\frac{ r_{k}^{\Phi}}{(1+p^{\Phi})^{\frac{1}{2}}})+\Psi^{-1}(\mathbb{R}^{d}).\] Proof.: Lemma 4.7 asserts that \[U_{\Phi}^{-1}D_{k}U_{\Phi}=\operatorname{Op}(r_{k}^{\Phi})+M_{a_{k}^{\Phi}}. \tag{5.1}\] It is immediate that \[\operatorname{Op}(r_{k}^{\Phi})\in\Psi^{1}(\mathbb{R}^{d}),\quad M_{a_{k}^{ \Phi}}\in\Psi^{0}(\mathbb{R}^{d}). \tag{5.2}\] Lemma 4.6 yields that \(-U_{\Phi}^{-1}\Delta U_{\Phi}\) is a self-adjoint positive operator with the domain \(U_{\Phi}^{-1}(W^{2,2}(\mathbb{R}^{d}))=W^{2,2}(\mathbb{R}^{d}).\) Lemma 4.7 and (2.7) yield that \(-U_{\Phi}^{-1}\Delta U_{\Phi}\) is a differential operator of order \(2\) with principal symbol \(p^{\Phi}\geq 0\). By Theorem 2.5 applied with \(T=-U_{\Phi}^{-1}\Delta U_{\Phi}\) and \(z=-\frac{1}{2}\), we have \[(1-U_{\Phi}^{-1}\Delta U_{\Phi})^{-\frac{1}{2}}-\operatorname{Op}((p^{\Phi}+1 )^{-\frac{1}{2}})\in\Psi^{-2}(\mathbb{R}^{d}), \tag{5.3}\] \[\operatorname{Op}((p^{\Phi}+1)^{-\frac{1}{2}})\in\Psi^{-1}(\mathbb{R}^{d}). \tag{5.4}\] Equations (5.1), (5.2), (5.3) and (5.4) yield that the operators \[T_{1}=U_{\Phi}^{-1}D_{k}U_{\Phi},\quad T_{2}=U_{\Phi}^{-1}(1-\Delta)^{-\frac{1 }{2}}U_{\Phi}\] satisfy the assumptions in Lemma 2.4. By Lemma 2.4, we have \[U_{\Phi}^{-1}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}=T_{1}T_{2}\in\operatorname{ Op}(r_{k}^{\Phi}\cdot(1+p^{\Phi})^{-\frac{1}{2}})+\Psi^{-1}(\mathbb{R}^{d}).\] In our next lemma, we approximate the operators on the left hand side with pseudodifferential-like operators on the right hand side. The latter is defined in (2.8) in Subsection 2.5. **Lemma 5.5**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. If \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d}),\) then_ \[U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}\in T_{(\phi\circ \Phi^{-1}\otimes 1)\cdot q_{k}^{\Phi}}+\mathcal{K}(L_{2}(\mathbb{R}^{d})),\] _where_ \[q_{k}^{\Phi}(t,s)=(O_{J_{\Phi}^{*}(\Phi^{-1}(t))}s)_{k},\quad t\in\mathbb{R}^ {d},\quad s\in\mathbb{S}^{d-1}.\] Proof.: For every \(f\in C^{\infty}(\mathbb{R}^{d})\) and for every \(p\in C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{d}),\) we have \[M_{f}\cdot\operatorname{Op}(p)=\operatorname{Op}((f\otimes 1)p).\] Also, \[U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}=M_{\phi\otimes\Phi ^{-1}}\cdot U_{\Phi}^{-1}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}.\] It follows now from Lemma 5.4 that \[U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}\in\operatorname{ Op}((\phi\circ\Phi^{-1}\otimes 1)\frac{r_{k}^{\Phi}}{(1+p^{\Phi})^{\frac{1}{2}}})+ \Psi^{-1}(\mathbb{R}^{d}). 
\tag{5.5}\] Fix a function \(\psi\in C_{c}^{\infty}(\mathbb{R}^{d})\) such that \(\psi=1\) near \(0\) and such that \((\phi\circ\Phi^{-1})\cdot\psi=\phi\circ\Phi^{-1}.\) Set \[e_{k}(t,s)=\phi(\Phi^{-1}(t))\cdot r_{k}^{\Phi}(t,s)\cdot(1+p^{\Phi}(t,s))^{- \frac{1}{2}},\quad t,s\in\mathbb{R}^{d},\] \[f_{k}(t,s)=\phi(\Phi^{-1}(t))\cdot r_{k}^{\Phi}(t,s)\cdot(p^{\Phi}(t,s))^{- \frac{1}{2}}\cdot(1-\psi(s)),\quad t,s\in\mathbb{R}^{d}.\] We have \(e_{k}-f_{k}=g_{k}\cdot h,\) where \[g_{k}(t,s)=\phi(\Phi^{-1}(t))\cdot r_{k}^{\Phi}(t,s),\quad t,s\in\mathbb{R}^ {d},\] \[h(t,s)=(1+p^{\Phi}(t,s))^{-\frac{1}{2}}-(p^{\Phi}(t,s))^{-\frac{1}{2}}\cdot(1- \psi(s)),\quad t,s\in\mathbb{R}^{d}.\] An elementary computation shows that \[\sup_{t,s\in\mathbb{R}^{d}}(1+|s|^{2})^{\frac{|\beta|_{1}+2}{2}}|D_{t}^{ \alpha}D_{s}^{\beta}h(t,s)|<\infty,\quad\alpha,\beta\in\mathbb{Z}_{+}^{d}.\] By the Leibniz rule, we have \[D_{t}^{\alpha}D_{s}^{\beta}(g_{k}\cdot h)=\sum_{0\leq\gamma\leq\alpha}\sum_{0 \leq\delta\leq\beta}c(\alpha,\gamma)c(\beta,\delta)D_{t}^{\gamma}D_{s}^{\delta }g_{k}\cdot D_{t}^{\alpha-\gamma}D_{s}^{\beta-\delta}h.\] This implies \[\sup_{t,s\in\mathbb{R}^{d}}(1+|s|^{2})^{\frac{|\beta|_{1}+1}{2}}|D_{t}^{\alpha} D_{s}^{\beta}(e_{k}-f_{k})(t,s)|<\infty,\quad\alpha,\beta\in\mathbb{Z}_{+}^{d},\] so that \(\operatorname{Op}(e_{k}-f_{k})\in\Psi^{-1}(\mathbb{R}^{d})\) (see (2.3)). It follows now from (5.5) that \[U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}-\operatorname{Op}(f _{k})\in\Psi^{-1}(\mathbb{R}^{d}). \tag{5.6}\] Denote for brevity the left hand side of (5.6) by \(T\) (so that \(T\in\Psi^{-1}(\mathbb{R}^{d})\)). Due to the choice of \(\psi\), we have that \(T=M_{\psi}T.\) It follows now from (2.6) (applied with \(m=-1\)) that \(T\) is compact. In other words, we have \[U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{1-\Delta}}U_{\Phi}-\operatorname{Op}(f _{k})\in\mathcal{K}(L_{2}(\mathbb{R}^{d})). \tag{5.7}\] Appealing to the definition of \(r_{k}^{\Phi}\) and \(p^{\Phi}\), we note that \[r_{k}^{\Phi}(t,s)=(J_{\Phi}^{*}(\Phi^{-1}(t))s)_{k},\quad p^{\Phi}(t,s)=|J_{ \Phi}^{*}(\Phi^{-1}(t))s|^{2},\quad t,s\in\mathbb{R}^{d}.\] Therefore, \[r_{k}^{\Phi}(t,s)\cdot(p^{\Phi}(t,s))^{-\frac{1}{2}}=(O_{J_{\Phi}^{*}(\Phi^{- 1}(t))}\frac{s}{|s|})_{k},\quad t,s\in\mathbb{R}^{d}.\] Thus, \[f_{k}(t,s)=\phi(\Phi^{-1}(t))\cdot q_{k}^{\Phi}(t,\frac{s}{|s|})\cdot(1-\psi(s )),\quad t,s\in\mathbb{R}^{d}.\] By Lemma 2.7 applied with \(q=(\phi\circ\Phi^{-1}\otimes 1)\cdot q_{k}^{\Phi}\), we have \[\operatorname{Op}(f_{k})-T_{(\phi\circ\Phi^{-1}\otimes 1)\cdot q_{k}^{\Phi}} \in\mathcal{K}(L_{2}(\mathbb{R}^{d})). \tag{5.8}\] Combining (5.7) and (5.8), we complete the proof. **Lemma 5.6**.: _Let \(\Phi\) be a diffeomorphism such that \(\Phi\) is affine outside of some ball. If \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d}),\) then_ \[U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{-\Delta}}U_{\Phi}\in\Pi\] _and_ \[\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{-\Delta}}U _{\Phi}\Bigr{)}=(\phi\circ\Phi^{-1}\otimes 1)\cdot q_{k}^{\Phi}.\] Proof.: Applying bounded Borel function \[t\to(1+|t|^{2})\cdot(\frac{t}{|t|}-\frac{t}{\sqrt{1+|t|^{2}}}),\quad t\in \mathbb{R}^{d},\] to the tuple \(\nabla\), we obtain that \[(1-\Delta)\cdot\Bigl{(}\frac{D_{k}}{\sqrt{-\Delta}}-\frac{D_{k}}{\sqrt{1- \Delta}}\Bigr{)}\in B(L_{2}(\mathbb{R}^{d})).\] Recall that (see e.g. 
Theorem 4.1 in [27]) \(M_{\phi}(1-\Delta)^{-1}\in\mathcal{K}(L_{2}(\mathbb{R}^{d})).\) Since product of bounded and compact operators is compact, it follows that \[M_{\phi}\Bigl{(}\frac{D_{k}}{\sqrt{-\Delta}}-\frac{D_{k}}{\sqrt{1-\Delta}} \Bigr{)}\in\mathcal{K}(L_{2}(\mathbb{R}^{d}))\] and \[U_{\Phi}^{-1}M_{\phi}\Bigl{(}\frac{D_{k}}{\sqrt{-\Delta}}-\frac{D_{k}}{\sqrt{1- \Delta}}\Bigr{)}U_{\Phi}\in\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] Combining with Lemma 5.5, we obtain \[U_{\Phi}^{-1}M_{\phi}\frac{D_{k}}{\sqrt{-\Delta}}U_{\Phi}-T_{(\phi\circ\Phi^{- 1}\otimes 1)\cdot q_{k}^{\Phi}}\in\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] By Lemma 2.6, we have \[T_{(\phi\circ\Phi^{-1}\otimes 1)\cdot q_{k}^{\Phi}}\in\Pi,\quad\text{sym}\Big{(}T_{ (\phi\circ\Phi^{-1}\otimes 1)\cdot q_{k}^{\Phi}}\Big{)}=(\phi\circ\Phi^{-1} \otimes 1)\cdot q_{k}^{\Phi}.\] The assertion follows by combining the last two equations and Lemma 5.2. **Lemma 5.7**.: _If \(T_{1},T_{2}\in\Pi,\) then_ \[[T_{1},T_{2}]\in\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] Proof.: Since sym is a \(*\)-homomorphism, it follows that \(\text{sym}([T_{1},T_{2}])=0.\) The assertion is now an immediate consequence of Lemma 5.2. **Lemma 5.8**.: _Let \((f_{k})_{k=1}^{m}\subset\mathcal{A}_{1}\) and \((g_{k})_{k=1}^{m}\subset\mathcal{A}_{2}.\) We have_ \[\prod_{k=1}^{m}\pi_{1}(f_{k})\pi_{2}(g_{k})\in\pi_{1}(\prod_{k=1}^{m}f_{k})\pi _{2}(\prod_{k=1}^{m}g_{k})+\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] Proof.: We prove the assertion by induction on \(m.\) For \(m=1,\) there is nothing to prove. So, we only have to prove the step of induction. Let us prove the assertion for \(m=2.\) We have \[\pi_{1}(f_{1})\pi_{2}(g_{1})\pi_{1}(f_{2})\pi_{2}(g_{2})=[\pi_{2}(g_{1}),\pi_{ 1}(f_{1}f_{2})]\cdot\pi_{2}(g_{2})+\] \[+[\pi_{1}(f_{1}),\pi_{2}(g_{1})]\cdot\pi_{1}(f_{2})\pi_{2}(g_{2})+\pi_{1}(f_{1 }f_{2})\pi_{2}(g_{1}g_{2}).\] By Lemma 5.7, we have \[[\pi_{1}(f_{1}),\pi_{2}(g_{1})],[\pi_{2}(g_{1}),\pi_{1}(f_{1}f_{2})]\in \mathcal{K}(L_{2}(\mathbb{R}^{d})).\] Therefore, \[\pi_{1}(f_{1})\pi_{2}(g_{1})\pi_{1}(f_{2})\pi_{2}(g_{2})\in\pi_{1}(f_{1}f_{2} )\pi_{2}(g_{1}g_{2})+\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] This proves the assertion for \(m=2.\) It remains to prove the step of induction. Suppose the assertion holds for \(m\geq 2\) and let us prove it for \(m+1.\) Clearly, \[\prod_{k=1}^{m+1}\pi_{1}(f_{k})\pi_{2}(g_{k})=\pi_{1}(f_{1})\pi_{2}(g_{1}) \cdot\prod_{k=2}^{m+1}\pi_{1}(f_{k})\pi_{2}(g_{k}).\] Using the inductive assumption, we obtain \[\prod_{k=1}^{m+1}\pi_{1}(f_{k})\pi_{2}(g_{k})\in\pi_{1}(f_{1})\pi_{2}(g_{1}) \cdot\pi_{1}(\prod_{k=2}^{m+1}f_{k})\pi_{2}(\prod_{k=2}^{m+1}g_{k})+\mathcal{ K}(L_{2}(\mathbb{R}^{d})).\] Using the assertion for \(m=2,\) we obtain \[\pi_{1}(f_{1})\pi_{2}(g_{1})\cdot\pi_{1}(\prod_{k=2}^{m+1}f_{k})\pi_{2}(\prod_ {k=2}^{m+1}g_{k})\in\pi_{1}(\prod_{k=1}^{m+1}f_{k})\pi_{2}(\prod_{k=1}^{m+1}g_ {k})+\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] Combining the last two equations, we obtain \[\prod_{k=1}^{m+1}\pi_{1}(f_{k})\pi_{2}(g_{k})\in\pi_{1}(\prod_{k=1}^{m+1}f_{k}) \pi_{2}(\prod_{k=1}^{m+1}g_{k})+\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] This establishes the step of induction and, hence, completes the proof of the lemma. **Lemma 5.9**.: _Let \(\Phi\) be a diffeomorphism which is affine outside of some ball. 
If \(g\in C(\mathbb{S}^{d-1})\) and \(f\in C_{c}(\mathbb{R}^{d}),\) then_ \[U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi}\in\Pi\] _and_ \[\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi}\Bigr{)}=( f\circ\Phi^{-1}\otimes 1)\cdot g(q_{1}^{\Phi},\cdots,q_{d}^{\Phi}).\] Proof.: Let \(\operatorname{Poly}(\mathbb{S}^{d-1})\) be the algebra of polynomials on \(\mathbb{S}^{d-1}.\) Suppose first that \(g\in\operatorname{Poly}(\mathbb{S}^{d-1})\) is monomial. Let \(g(s)=\prod_{l=1}^{d}s_{l}^{n_{l}}.\) Let \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d})\) be such that \(f\cdot\phi=f.\) Obviously, \[\pi_{1}(f)\pi_{2}(g)=\pi_{1}(f\cdot\phi^{\sum_{l=1}^{d}n_{l}})\cdot\pi_{2}(g) =\pi_{1}(f)\cdot\pi_{1}(\phi^{\sum_{l=1}^{d}n_{l}})\pi_{2}(g).\] Setting \(m=\sum_{l=1}^{d}n_{l},\)\(f_{k}=\phi,\)\(1\leq k\leq m,\) \[g_{k}(s)=s_{l},\quad s\in\mathbb{S}^{d-1},\quad\sum_{i=1}^{l-1}n_{i}<k\leq\sum_ {i=1}^{l}n_{i}.\] In this notations, \[\pi_{1}(\phi^{\sum_{l=1}^{d}n_{l}})\pi_{2}(g)=\pi_{1}(\prod_{k=1}^{m}f_{k})\pi _{2}(\prod_{k=1}^{m}g_{k}).\] By Lemma 5.8, we have \[\pi_{1}(\phi^{\sum_{l=1}^{d}n_{l}})\pi_{2}(g)\in\prod_{k=1}^{m}\pi_{1}(f_{k}) \pi_{2}(g_{k})+\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] Since \[\pi_{1}(f_{k})\pi_{2}(g_{k})=\pi_{1}(\phi)\frac{D_{l}}{\sqrt{-\Delta}},\quad \sum_{i=1}^{l-1}n_{i}<k\leq\sum_{i=1}^{l}n_{i},\] it follows that \[\pi_{1}(f)\pi_{2}(g)\in\pi_{1}(f)\cdot\prod_{l=1}^{d}\Bigl{(}\pi_{1}(\phi) \frac{D_{l}}{\sqrt{-\Delta}}\Bigr{)}^{n_{l}}+\mathcal{K}(L_{2}(\mathbb{R}^{d} )).\] Thus, \[U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi}\in U_{\Phi}^{-1}\pi_{1}(f)U_{\Phi} \cdot\prod_{l=1}^{d}\Bigl{(}U_{\Phi}^{-1}\pi_{1}(\phi)\frac{D_{l}}{\sqrt{- \Delta}}U_{\Phi}\Bigr{)}^{n_{l}}+\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] By Lemma 5.6, we have \[U_{\Phi}^{-1}\pi_{1}(f)U_{\Phi}\cdot\prod_{l=1}^{d}\Bigl{(}U_{\Phi}^{-1}\pi_{1 }(\phi)\frac{D_{l}}{\sqrt{-\Delta}}U_{\Phi}\Bigr{)}^{n_{l}}\in\Pi\] and \[\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}\pi_{1}(f)U_{\Phi}\cdot\prod_{l=1}^{d} \Bigl{(}U_{\Phi}^{-1}\pi_{1}(\phi)\frac{D_{l}}{\sqrt{-\Delta}}U_{\Phi}\Bigr{)} ^{n_{l}}\Bigr{)}=\] \[=\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}\pi_{1}(f)U_{\Phi}\Bigr{)}\cdot\prod_{ l=1}^{d}\Bigl{(}\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}\pi_{1}(\phi)\frac{D_{l}}{ \sqrt{-\Delta}}U_{\Phi}\Bigr{)}\Bigr{)}^{n_{l}}=\] \[=(f\circ\Phi^{-1}\otimes 1)\cdot\prod_{l=1}^{d}\Big{(}(\phi\circ\Phi^{-1} \otimes 1)\cdot q_{l}^{\Phi}\Big{)}^{n_{l}}=(f\circ\Phi^{-1}\otimes 1)\cdot\prod_{l=1}^{d} \big{(}q_{l}^{\Phi}\big{)}^{n_{l}}.\] By Lemma 5.2, we have \[U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi}\in\Pi\] and \[\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi}\Bigr{)}=( f\circ\Phi^{-1}\otimes 1)\cdot g(q_{1}^{\Phi},\cdots,q_{d}^{\Phi}).\] By linearity, the same assertion holds if \(g\in\operatorname{Poly}(\mathbb{S}^{d-1}).\) To prove the assertion in general, let \(g\in C(\mathbb{S}^{d-1})\) and consider a sequence \(\{g_{n}\}_{n\geq 1}\subset\operatorname{Poly}(\mathbb{S}^{d-1})\) such that \(g_{n}\to g\) in the uniform norm. We have \[U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g_{n})U_{\Phi}\to U_{\Phi}^{-1}\pi_{1}(f)\pi_{2 }(g)U_{\Phi}\] in the uniform norm. Since \[U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g_{n})U_{\Phi}\in\Pi,\quad n\geq 1,\] it follows that \[U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi}\in\Pi\] and \[\operatorname{sym}(U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g_{n})U_{\Phi})\to \operatorname{sym}(U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi})\] in the uniform norm. 
In other words, \[(f\circ\Phi^{-1}\otimes 1)\cdot g_{n}(q_{1}^{\Phi},\cdots,q_{d}^{\Phi})\to \operatorname{sym}(U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi})\] in the uniform norm. Thus, \[\operatorname{sym}(U_{\Phi}^{-1}\pi_{1}(f)\pi_{2}(g)U_{\Phi})=(f\circ\Phi^{- 1}\otimes 1)\cdot g(q_{1}^{\Phi},\cdots,q_{d}^{\Phi}).\] Proof of Theorem 3.5.: By the definition of the \(C^{*}\)-algebra \(\Pi\), for every \(T\in\Pi\), there exists a sequence \((T_{n})_{n\geq 1}\) in the \(*\)-algebra generated by \(\pi_{1}(\mathcal{A}_{1})\) and \(\pi_{2}(\mathcal{A}_{2})\) such that \(T_{n}\to T\) in the uniform norm. We can write \[T_{n}=\sum_{l=1}^{l_{n}}\prod_{k=1}^{k_{n}}\pi_{1}(f_{n,k,l})\pi_{2}(g_{n,k,l}).\] By Lemma 5.8, we have \[T_{n}\in\sum_{l=1}^{l_{n}}\pi_{1}(\prod_{k=1}^{k_{n}}f_{n,k,l})\pi_{2}(\prod_{ k=1}^{k_{n}}g_{n,k,l})+\mathcal{K}(L_{2}(\mathbb{R}^{d})).\] Denote for brevity, \[f_{n,l}=\prod_{k=1}^{k_{n}}f_{n,k,l}\in\mathcal{A}_{1},\quad g_{n,l}=\prod_{k =1}^{k_{n}}g_{n,k,l}\in\mathcal{A}_{2}.\] We have \[T_{n}=S_{n}+\sum_{l=1}^{l_{n}}\pi_{1}(f_{n,l})\pi_{2}(g_{n,l}),\quad S_{n}\in \mathcal{K}(L_{2}(\mathbb{R}^{d})).\] By Lemma 5.2, we have \[\operatorname{sym}(T_{n})=\sum_{l=1}^{l_{n}}f_{n,l}\otimes g_{n,l}. \tag{5.9}\] Suppose in addition that \(T\) is compactly supported. In particular, \(T=M_{\phi}T\) for some \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d}).\) Replacing \(S_{n}\) with \(M_{\phi}S_{n}\) and \(f_{n,l}\) with \(\phi\cdot f_{n,l}\) if necessary, we may assume without loss of generality that \(f_{n,l}\in C_{c}^{\infty}(\mathbb{R}^{d})\) for every \(n\geq 1\) and for every \(1\leq l\leq l_{n}.\) By Lemma 5.9, we have \[\sum_{l=1}^{l_{n}}U_{\Phi}^{-1}\pi_{1}(f_{n,l})\pi_{2}(g_{n,l})U_{\Phi}\in\Pi\] and \[\operatorname{sym}\Bigl{(}\sum_{l=1}^{l_{n}}U_{\Phi}^{-1}\pi_{1}(f_{n,l})\pi_ {2}(g_{n,l})U_{\Phi}\Bigr{)}=(\sum_{l=1}^{l_{n}}f_{n,l}\otimes g_{n,l})\circ \Theta_{\Phi},\] where \(\Theta_{\Phi}\) is introduced in Notation 3.1. By Lemma 5.2, we have \(U_{\Phi}^{-1}S_{n}U_{\Phi}\in\Pi\) and \[\operatorname{sym}(U_{\Phi}^{-1}S_{n}U_{\Phi})=0.\] Thus, \(U_{\Phi}^{-1}T_{n}U_{\Phi}\in\Pi\) and \[\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}T_{n}U_{\Phi}\Bigr{)}=(\sum_{l=1}^{l_{ n}}f_{n,l}\otimes g_{n,l})\circ\Theta_{\Phi}=\operatorname{sym}(T_{n})\circ \Theta_{\Phi}.\] Since \(\Pi\) is a \(C^{*}\)-algebra and since \(U_{\Phi}^{-1}T_{n}U_{\Phi}\to U_{\Phi}^{-1}TU_{\Phi}\) in the uniform norm, it follows that \(U_{\Phi}^{-1}TU_{\Phi}\in\Pi\) and \[\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}T_{n}U_{\Phi}\Bigr{)}\to\operatorname {sym}\Bigl{(}U_{\Phi}^{-1}TU_{\Phi}\Bigr{)}\] in the uniform norm. In other words, \[\operatorname{sym}(T_{n})\circ\Theta_{\Phi}\to\operatorname{sym}\Bigl{(}U_{ \Phi}^{-1}TU_{\Phi}\Bigr{)}\] in the uniform norm. Since \(\operatorname{sym}(T_{n})\to\operatorname{sym}(T)\) in the uniform norm, it follows that \[\operatorname{sym}\Bigl{(}U_{\Phi}^{-1}TU_{\Phi}\Bigr{)}=\operatorname{sym }(T)\circ\Theta_{\Phi}.\] ## 6. Invariance of principal symbol under local diffeomorphisms Theorem 3.11 is supposed to be a corollary of Theorem 3.5. To demonstrate this is indeed the case, we need an extension result for diffeomorphisms. The following fundamental result is due to Palais [23] (see Corollary 4.3 there). **Theorem 6.1**.: _Let \(\Phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a smooth mapping. Necessary and sufficient conditions for \(\Phi\) to be a diffeomorphism are as follows:_ 1. _for every_ \(t\in\mathbb{R}^{d},\) _we have_ \(\det(J_{\Phi}(t))\neq 0\)_;_ 2. 
_we have_ \(\Phi(t)\to\infty\) _as_ \(|t|\to\infty;\)__ The next lemma is also due to Palais [23]. We provide a proof for convenience of the reader. Note that \(B(t,r)\) is the open ball with radius \(r\) centered at \(t.\) **Lemma 6.2**.: _Let \(\Omega\subset\mathbb{R}^{d}\) be an open set and let \(\Phi:\Omega\to\mathbb{R}^{d}\) be a smooth mapping. If \(t\in\Omega\) is such that \(\det(J_{\Phi}(t))\neq 0,\) then there exists a diffeomorphism \(\Phi_{t}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that_ 1. \(\Phi_{t}=\Phi\) _on_ \(B(t,r_{1}(t))\) _with some_ \(r_{1}(t)>0;\)__ 2. \(\Phi_{t}\) _is affine outside_ \(B(t,r_{2}(t))\) _for some_ \(r_{2}(t)<\infty;\)__ Proof.: Without loss of generality, \(t=0,\)\(\Phi(0)=0\) and \(J_{\Phi}(0)=1_{M_{d}(\mathbb{R})},\) the unity in the algebra of real \(d\times d\) matrices. Let \(\theta\in C_{c}^{\infty}(\mathbb{R}^{d})\) be such that \(\theta=1\) on the unit ball. Set \[\Psi_{r}(u)=u+\theta(\frac{u}{r})\cdot(\Phi(u)-u),\quad u\in\mathbb{R}^{d}.\] It is clear that \(\Psi_{r}\) is well-defined smooth mapping for every sufficiently small \(r>0.\) A direct calculation shows that \(\det(J_{\Psi_{r}})\to 1\) in the uniform norm as \(r\to 0.\) In particular, for sufficiently small \(r>0,\)\(\det(J_{\Psi_{r}})\) never vanishes. It follows from Theorem 6.1 that, for sufficiently small \(r>0,\)\(\Psi_{r}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a diffeomorphism. Choose any such \(r\) and denote it by \(r(0).\) Set \(\Phi_{0}=\Psi_{r(0)}.\) This diffeomorphism obviously satisfies the required properties. **Lemma 6.3**.: _Let \(\Omega,\Omega^{\prime}\subset\mathbb{R}^{d}\) and let \(\Phi:\Omega\to\Omega^{\prime}\) be a diffeomorphism. Let \(B\subset\Omega\) be a ball and let \(\Phi_{0}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a diffeomorphism such that \(\Phi_{0}=\Phi\) on \(B.\) If \(T\in B(L_{2}(\mathbb{R}^{d}))\) is supported on \(B,\) then_ \[\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{ Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)}=U_{\Phi_{0}}^{-1}TU_{\Phi_{0}}.\] Proof.: Indeed, since both sides are continuous in weak operator topology, it suffices to prove the assertion for the case when \(T\) is rank \(1\) operator. Let \[T\xi=\langle\xi,\xi_{1}\rangle\xi_{2},\quad\xi\in L_{2}(\mathbb{R}^{d}).\] where \(\xi_{1},\xi_{2}\in L_{2}(\mathbb{R}^{d})\) are supported in \(B.\) It is immediate that \[\Bigl{(}U_{\Phi_{0}}^{-1}TU_{\Phi_{0}}\Bigr{)}\xi=\langle\xi,U_{\Phi_{0}}^{-1 }\xi_{1}\rangle\cdot U_{\Phi_{0}^{-1}}\xi_{2},\quad\xi\in L_{2}(\mathbb{R}^{d }),\] \[\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)} \xi=\langle\xi,U_{\Phi}^{-1}\xi_{1}\rangle\cdot U_{\Phi^{-1}}\xi_{2},\quad\xi \in L_{2}(U^{\prime}),\] \[\Bigl{(}\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot \operatorname{Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)}\Bigr{)}\xi=\langle\xi \cdot\chi_{\Omega^{\prime}},U_{\Phi}^{-1}\xi_{1}\rangle\cdot U_{\Phi^{-1}}\xi _{2}\cdot\chi_{\Omega^{\prime}},\quad\xi\in L_{2}(\mathbb{R}^{d}).\] Since \(\xi_{1}\) and \(\xi_{2}\) are supported in \(B,\) it follows that expressions in the first and last displays coincide. This proves the assertion for every rank \(1\) operator \(T\) and, therefore, for every \(T.\) Proof of Theorem 3.11.: Let the operator \(T\) be supported on a compact set \(K\subset\Omega.\) Let \(t\in K.\) Let diffeomorphism \(\Phi_{t}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) and numbers \(r_{1}(t)\) and \(r_{2}(t)\) be as in Lemma 6.2. 
The collection \(\{B(t,r_{1}(t))\}_{t\in K}\) is an open cover of \(K.\) By compactness, one can choose a finite sub-cover. So, let \(\{t_{n}\}_{n=1}^{N}\) be such that \(\{B(t_{n},r_{1}(t_{n}))\}_{n=1}^{N}\) be such finite sub-cover. Let \(\{\Phi_{t_{n}}\}_{n=1}^{N}\) be diffeomorphisms from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{d}\) given by Lemma 6.2 so that \[\Phi(t)=\Phi_{t_{n}}(t),\quad t\in B(t_{n},r_{1}(t_{n})).\] Let \(\{\phi_{n}\}_{n=1}^{N}\) be such that \(\phi_{n}\in C_{c}^{\infty}(B(t_{n},r_{1}(t_{n})))\) and \[\sum_{n=1}^{N}\phi_{n}^{2}=1\quad\text{on }K.\] Set \[T_{0}=\sum_{m=1}^{N}M_{\phi_{m}}[M_{\phi_{m}},T],\quad T_{n}=M_{\phi_{n}}TM_{ \phi_{n}},\quad 1\leq n\leq N.\] We write \[T=\sum_{n=1}^{N}M_{\phi_{n}^{2}}T=\sum_{n=0}^{N}T_{n}. \tag{6.1}\] If \(1\leq n\leq N,\) then \(T_{n}\) is supported on the ball \(B(t_{n},r_{1}(t_{n})).\) By Lemma 6.3, we have \[\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{ Rest}_{\Omega}(T_{n})\cdot U_{\Phi}\Bigr{)}=U_{\Phi_{t_{n}}}^{-1}T_{n}U_{\Phi_{t_{n}}}.\] By Theorem 3.5, we have \(U_{\Phi_{t_{n}}}^{-1}T_{n}U_{\Phi_{t_{n}}}\in\Pi\) and \[\operatorname{sym}(U_{\Phi_{t_{n}}}^{-1}T_{n}U_{\Phi_{t_{n}}})=\operatorname{ sym}(T_{n})\circ\Theta_{\Phi_{t_{n}}}.\] Thus, \[\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{ Rest}_{\Omega}(T_{n})\cdot U_{\Phi}\Bigr{)}\in\Pi \tag{6.2}\] and \[\operatorname{sym}\Bigl{(}\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{ \Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T_{n})\cdot U_{\Phi}\Bigr{)} \Bigr{)}=\operatorname{sym}(T_{n})\circ\Theta_{\Phi}. \tag{6.3}\] If \(n=0,\) then \(T_{0}\) is compact by Lemma 5.7. Clearly, \(T_{0}\) is compactly supported in \(\Omega.\) If \(A\in B(L_{2}(\Omega^{\prime}))\) is compact, then \(\operatorname{Ext}_{\Omega^{\prime}}(A)\in B(L_{2}(\mathbb{R}^{d}))\) is also compact (see Notation 3.6 and recall that a composition of bounded and compact operators is compact). Therefore, \[\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{ Rest}_{\Omega}(T_{0})\cdot U_{\Phi}\Bigr{)}\in\mathcal{K}(L_{2}(\mathbb{R}^{d}))\subset\Pi \tag{6.4}\] and \[\operatorname{sym}\Bigl{(}\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{ \Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T_{0})\cdot U_{\Phi}\Bigr{)} \Bigr{)}=0. \tag{6.5}\] Combining (6.1), (6.2) and (6.4), we obtain \[\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{ Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)}=\sum_{n=0}^{N}\operatorname{Ext}_{ \Omega^{\prime}}\Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T_{n}) \cdot U_{\Phi}\Bigr{)}\in\Pi.\] Now, combining (6.1), (6.3) and (6.5), we obtain \[\operatorname{sym}\Bigl{(}\operatorname{Ext}_{\Omega^{\prime}}\Bigl{(}U_{ \Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T)\cdot U_{\Phi}\Bigr{)}\Bigr{)}=\] \[=\sum_{n=0}^{N}\operatorname{sym}\Bigl{(}\operatorname{Ext}_{\Omega^{\prime}} \Bigl{(}U_{\Phi}^{-1}\cdot\operatorname{Rest}_{\Omega}(T_{n})\cdot U_{\Phi} \Bigr{)}\Bigr{)}=\sum_{n=1}^{N}\operatorname{sym}(T_{n})\circ\Theta_{\Phi}=\] \[=\operatorname{sym}(T)\circ\Theta_{\Phi}-\operatorname{sym}(T_{0})\circ \Theta_{\Phi}=\operatorname{sym}(T)\circ\Theta_{\Phi}.\] ## 7. Principal symbol on compact manifolds ### Globalisation theorem Globalisation theorem is a folklore. We provide its proof in Appendix A for convenience of the reader. 
**Definition 7.1**.: _Let \(X\) be a compact manifold with an atlas \(\{(\mathcal{U}_{i},h_{i})\}_{i\in\mathbb{I}}.\) Let \(\mathfrak{B}\) be the Borel \(\sigma\)-algebra on \(X\) and let \(\nu\) be a countably additive measure on \(\mathfrak{B}.\) We say that \(\{\mathcal{A}_{i}\}_{i\in\mathbb{I}}\) are local algebras if_ 1. _for every_ \(i\in\mathbb{I},\)__\(\mathcal{A}_{i}\) _is a_ \(*\)_-subalgebra in_ \(B(L_{2}(X,\nu));\)__ _._ 2. _for every_ \(i\in\mathbb{I},\) _elements of_ \(\mathcal{A}_{i}\) _are compactly supported_2 _in_ \(\mathcal{U}_{i};\)__ Footnote 2: This notion is introduced immediately before the Definition 1.3. 3. _for every_ \(i,j\in\mathbb{I},\) _if_ \(T\in\mathcal{A}_{i}\) _is compactly supported in_ \(\mathcal{U}_{i}\cap\mathcal{U}_{j},\) _then_ \(T\in\mathcal{A}_{j};\)__ 4. _for every_ \(i\in\mathbb{I},\) _if_ \(T\in\mathcal{K}(L_{2}(X,\nu))\) _is compactly supported in_ \(\mathcal{U}_{i},\) _then_ \(T\in\mathcal{A}_{i};\)__ 5. _for every_ \(i\in\mathbb{I},\) _if_ \(\phi\in C_{c}(\mathcal{U}_{i}),\) _then_ \(M_{\phi}\in\mathcal{A}_{i};\)__ 6. _for every_ \(i\in\mathbb{I},\) _if_ \(\phi\in C_{c}(\mathcal{U}_{i}),\) _then the closure of_ \(M_{\phi}\mathcal{A}_{i}M_{\phi}\) _in the uniform norm is contained in_ \(\mathcal{A}_{i};\)__ 7. _for every_ \(i\in\mathbb{I},\) _if_ \(T\in\mathcal{A}_{i}\) _and if_ \(\phi\in C_{c}(\mathcal{U}_{i}),\) _then_ \([T,M_{\phi}]\in\mathcal{K}(L_{2}(X,\nu)).\)__ **Definition 7.2**.: _In the setting of Definition 7.1, we say that \(T\in\mathcal{A}\) if_ 1. _for every_ \(i\in\mathbb{I}\) _and for every_ \(\phi\in C_{c}(\mathcal{U}_{i}),\) _we have_ \(M_{\phi}TM_{\phi}\in\mathcal{A}_{i};\)__ 2. _for every_ \(\psi\in C(X),\) _the commutator_ \([T,M_{\psi}]\) _is compact._ **Definition 7.3**.: _Let \(\mathcal{B}\) be a \(*\)-algebra. In the setting of Definition 7.1, \(\{\hom_{i}\}_{i\in\mathbb{I}}\) are called local homomorphisms if_ 1. _for every_ \(i\in\mathbb{I},\)__\(\hom_{i}:\mathcal{A}_{i}\to\mathcal{B}\) _is a_ \(*\)_-homomorphism;_ 2. _for every_ \(i,j\in\mathbb{I},\) _we have_ \(\hom_{i}=\hom_{j}\) _on_ \(\mathcal{A}_{i}\cap\mathcal{A}_{j};\)__ 3. \(T\in\mathcal{A}_{i}\) _is compact iff_ \(\hom_{i}(T)=0;\)__ 4. _there exists a_ \(*\)_-homomorphism_ \(\hom:C(X)\to\mathcal{B}\) _such that_ \[\hom_{i}(M_{\phi})=\hom(\phi),\quad\phi\in C_{c}(\mathcal{U}_{i}),\quad i\in \mathbb{I}.\] **Theorem 7.4**.: _In the setting of Definitions 7.1, 7.3 and 7.2, we have_ 1. \(\mathcal{A}\) _is a unital_ \(C^{*}\)_-subalgebra in_ \(B(L_{2}(X,\nu))\) _which contains_ \(\mathcal{A}_{i}\) _for every_ \(i\in\mathbb{I}\) _and_ \(\mathcal{K}(L_{2}(X,\nu));\)__ 2. _there exists a_ \(*\)_-homomorphism_ \(\hom:\mathcal{A}\to\mathcal{B}\) _such that_ 1. \(\hom=\hom_{i}\) _on_ \(\mathcal{A}_{i}\) _for every_ \(i\in\mathbb{I};\)__ 2. \(\ker(\hom)=\mathcal{K}(L_{2}(X,\nu));\)__ 3. \(*\)_-homomorphism as in (_2_) is unique._ ### Construction of the principal symbol mapping Let \(\mathfrak{B}\) be the Borel \(\sigma\)-algebra on the manifold \(X\) and let \(\nu:\mathfrak{B}\to\mathbb{R}\) be a continuous positive density. It is immediate that the mapping \(h_{i}:(\mathcal{U}_{i},\nu)\to(\Omega_{i},\nu\circ h_{i}^{-1})\) preserves the measure. 
Define an isometry \(W_{i}:L_{2}(\mathcal{U}_{i},\nu)\to L_{2}(\Omega_{i},\nu\circ h_{i}^{-1})\) by setting \[W_{i}f=f\circ h_{i}^{-1},\quad f\in L_{2}(\mathcal{U}_{i},\nu).\] If \(T\) is compactly supported in \(\mathcal{U}_{i},\) then \(W_{i}TW_{i}^{-1}\) is understood as an element of the algebra \(B(L_{2}(\Omega_{i},\nu\circ h_{i}^{-1})).\) The latter operator is compactly supported in \(\Omega_{i}.\) By Definition 2.20, exactly the same operator also belongs to \(B(L_{2}(\Omega_{i}))\) and, therefore, can be extended to an element \(\Ext_{\Omega_{i}}(W_{i}TW_{i}^{-1})\) of \(B(L_{2}(\mathbb{R}^{d})).\) For the notion \(\Ext_{\Omega_{i}}\) we refer to Notation 3.7. **Definition 7.5**.: _Let \(X\) be a smooth compact manifold and let \(\nu\) be a continuous positive density on \(X.\) For every \(i\in\mathbb{I},\) let \(\Pi_{i}\) consist of the operators \(T\in B(L_{2}(X,\nu))\) compactly supported in \(\mathcal{U}_{i}\) and such that_ \[\Ext_{\Omega_{i}}(W_{i}TW_{i}^{-1})\in\Pi.\] For example, every operator \(M_{\phi},\)\(\phi\in C_{c}(\mathcal{U}_{i})\) belongs to \(\Pi_{i}.\) For notation \(C(S^{*}X)\) below we refer to Definition 2.12. For the notion sym, we refer to (1.1). **Definition 7.6**.: _Let \(X\) be a smooth compact manifold and let \(\nu\) be a continuous positive density on \(X.\) For every \(i\in\mathbb{I},\) the mapping \(\operatorname{sym}_{i}:\Pi_{i}\to C(S^{*}X)\) is defined by the formula_ \[\operatorname{sym}_{i}(T)=\operatorname{sym}(\operatorname{Ext}_{\Omega_{i}}( W_{i}TW_{i}^{-1}))\circ H_{i},\quad T\in\Pi_{i}.\] **Theorem 7.7**.: _Let \(X\) be a smooth compact manifold and let \(\nu\) be a continuous positive density on \(X.\)_ 1. _Collection_ \(\{\Pi_{i}\}_{i\in\mathbb{I}}\) _introduced in Definition_ 7.5 _satisfies all the conditions in Definition_ 7.1_;_ 2. _Collection_ \(\{\operatorname{sym}_{i}\}_{i\in\mathbb{I}}\) _introduced in Definition_ 7.6 _satisfies all the conditions in Definition_ 7.3_._ That is, the collection \(\{\Pi_{i}\}_{i\in\mathbb{I}}\) of \(*\)-algebras and the collection \(\{\operatorname{sym}_{i}\}_{i\in\mathbb{I}}\) of \(*\)-homomorphisms satisfy the conditions in Theorem 7.4. Definition 7.8 below is the culmination of the paper. Having this definition at hands, we easily prove Theorem 1.4. **Definition 7.8**.: _Let \(X\) be a smooth compact Riemannian manifold and let \(\nu\) be a continuous positive density on \(X.\)_ 1. _The domain_ \(\Pi_{X}\) _of the principal symbol mapping is the_ \(C^{*}\)_-algebra constructed in Theorem_ 7.4 _from the collection_ \(\{\Pi_{i}\}_{i\in\mathbb{I}}.\)__ 2. _The principal symbol mapping_ \(\operatorname{sym}_{X}:\Pi_{X}\to C(S^{*}X)\) _is the_ \(*\)_-homomorphism constructed in Theorem_ 7.4 _from the collection_ \(\{\operatorname{sym}_{i}\}_{i\in\mathbb{I}}.\)__ ### Proof of Theorem 7.7 Lemma 7.9 below delivers verification of condition (1) in Definitions 7.1 and 7.3. **Lemma 7.9**.: _For every \(i\in\mathbb{I},\) we have_ 1. \(\Pi_{i}\) _is a_ \(*\)_-subalgebra in_ \(B(L_{2}(X,\nu));\)__ 2. \(\operatorname{sym}_{i}:\Pi_{i}\to C(S^{*}X)\) _is a_ \(*\)_-homomorphism._ Proof.: It is immediate that \(\Pi_{i}\) is a subalgebra in \(B(L_{2}(X,\nu))\) and that \(\operatorname{sym}_{i}:\Pi_{i}\to C(S^{*}X)\) is a homomorphism. We need to show that \(\Pi_{i}\) is closed with respect to taking adjoints and that \(\operatorname{sym}_{i}\) is invariant with respect to this operation. 
Let \(T\in\Pi_{i}\) and let us show that \(T^{*}\in\Pi_{i}.\) Recall that, due to the Condition 2.20, \(\nu\circ h_{i}^{-1}\) is absolutely continuous and that its density denoted by \(a_{i}\) as well as its inverse \(a_{i}^{-1}\) are assumed to be continuous in \(\Omega_{i}.\) The following equality3 is easy to verify directly. Footnote 3: The operators \(M_{a_{i}}\) and \(M_{a_{i}^{-1}}\) are unbounded. The equality should be understood as \(LHS\xi=RHS\xi\) for every compactly supported \(\xi\in L_{2}(\Omega_{i}).\) \[W_{i}T^{*}W_{i}^{-1}=M_{a_{i}^{-1}}\cdot(W_{i}TW_{i}^{-1})^{*}\cdot M_{a_{i}}.\] However, by Definition 7.5, the operator \(T\) is compactly supported in \(\mathcal{U}_{i}.\) Hence, the operator \((W_{i}TW_{i}^{-1})^{*}\) is compactly supported in \(\Omega_{i}.\) Choose \(\phi\in C_{c}(\Omega_{i})\) such that \[(W_{i}TW_{i}^{-1})^{*}=M_{\phi}\cdot(W_{i}TW_{i}^{-1})^{*}=(W_{i}TW_{i}^{-1})^{ *}\cdot M_{\phi}.\] Thus, \[W_{i}T^{*}W_{i}^{-1}=M_{a_{i}^{-1}\phi}\cdot(W_{i}TW_{i}^{-1})^{*}\cdot M_{a_{i} \phi}.\] Thus, \[\operatorname{Ext}_{\Omega_{i}}(W_{i}T^{*}W_{i}^{-1})=M_{a_{i}^{-1}\phi}\cdot( \operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}))^{*}\cdot M_{a_{i}\phi}. \tag{7.2}\] Since \(a_{i}\phi,a_{i}^{-1}\phi\in C_{c}(\mathbb{R}^{d})\), it follows that every factor in the right hand side of (7.2) belongs to \(\Pi\). Hence, so is the expression on the left hand side. In other words, \(T^{*}\in\Pi_{i}\). Thus, \(\Pi_{i}\) is closed with respect to taking adjoints. Recall that (by [30]) sym is a \(*\)-homomorphism. Applying sym to the equality (7.2), we obtain \[\operatorname{sym}_{i}(T^{*})=\operatorname{sym}(\operatorname{Ext}_{\Omega _{i}}(W_{i}T^{*}W_{i}^{-1}))=\] \[=\operatorname{sym}(M_{a_{i}^{-1}\phi})\cdot\operatorname{sym}((\operatorname {Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}))^{*})\cdot\operatorname{sym}(M_{a_{i} \phi})=\] \[=\operatorname{sym}(M_{\phi^{2}})\cdot\operatorname{sym}(\operatorname{Ext}_ {\Omega_{i}}(W_{i}TW_{i}^{-1}))^{*}=\] \[=\operatorname{sym}(\operatorname{Ext}_{\Omega_{i}}(M_{\phi^{2}}\cdot W_{i}TW _{i}^{-1}))^{*}.\] It is clear that \[M_{\phi^{2}}\cdot W_{i}TW_{i}^{-1}=W_{i}TW_{i}^{-1}.\] Thus, \[\operatorname{sym}_{i}(T^{*})=\operatorname{sym}(\operatorname{Ext}_{\Omega _{i}}(W_{i}TW_{i}^{-1}))^{*}=\operatorname{sym}_{i}(T)^{*}.\] In the following lemma, \(\Xi_{\Phi_{i,j}}\) is defined according to the Notation 3.8. **Lemma 7.10**.: _For every \(i,j\in\mathbb{I}\), we have_ \[\Xi_{\Phi_{i,j}}=H_{i}\circ H_{j}^{-1}\] _on \(\Omega_{j,i}\times\mathbb{R}^{d}\)._ Proof.: By Notation 3.8, \[\Xi_{\Phi_{i,j}}(t,s)=(\Phi_{i,j}^{-1}(t),J_{\Phi_{i,j}^{*}(\Phi_{i,j}^{-1}(t) )}s).\] By the chain rule, we have \[J_{\Phi_{i,j}}(\Phi_{i,j}^{-1}(t))\cdot J_{\Phi_{i,j}^{-1}}(t)=J_{\Phi_{i,j} \circ\Phi_{i,j}^{-1}}(t)=1_{M_{d}(\mathbb{C})}.\] Taking into account that \(\Phi_{i,j}^{-1}=\Phi_{j,i}\), we write \[J_{\Phi_{i,j}(\Phi_{i,j}^{-1}(t))}=(J_{\Phi_{j,i}})^{-1}(t)\text{ and }J_{\Phi_{i,j}(\Phi_{i,j}^{-1}(t))}^{*}=(J_{\Phi_{j,i}}^{*})^{-1}(t).\] The following lemma verifies the condition (3) in Definition 7.1 and condition (2) in Definition 7.3. **Lemma 7.11**.: _Let \((\mathcal{U}_{i},h_{i})\) and \((\mathcal{U}_{j},h_{j})\) be charts. Let \(T\in B(L_{2}(X,\nu))\) be compactly supported in \(\mathcal{U}_{i}\cap\mathcal{U}_{j}\)._ 1. _If_ \(T\in\Pi_{i},\) _then_ \(T\in\Pi_{j};\)__ 2. 
_We have_ \(\operatorname{sym}_{i}(T)=\operatorname{sym}_{j}(T).\)__ Proof.: Let \(V_{\Phi}\xi=\xi\circ\Phi\) (provided that the image of the mapping \(\Phi\) is contained in the domain of the function \(\xi\)). Since \(W_{j}=V_{\Phi_{i,j}}^{-1}W_{i}\), it follows that (using the Notation 3.8) \[W_{j}TW_{j}^{-1}=V_{\Phi_{i,j}}^{-1}\cdot W_{i}TW_{i}^{-1}\cdot V_{\Phi_{i,j}}=\] \[=U_{\Phi_{i,j}}^{-1}\cdot M_{|J_{\Phi_{i,j}}|^{\frac{1}{2}}}\cdot W_{i}TW_{i}^{- 1}\cdot M_{|J_{\Phi_{i,j}}|^{-\frac{1}{2}}}\cdot U_{\Phi_{i,j}}.\] Since \(T\) is compactly supported in \(\mathcal{U}_{i}\cap\mathcal{U}_{j}\), it follows that \(W_{i}TW_{i}^{-1}\in\mathrm{Rest}_{\Omega_{i}}(\Pi)\) is compactly supported in \(\Omega_{i,j}.\) Let \(A\subset\Omega_{i,j}\) be compact and such that \[M_{\chi_{A}}\cdot W_{i}TW_{i}^{-1}\cdot M_{\chi_{A}}=W_{i}TW_{i}^{-1}.\] Using Tietze extension theorem, choose \(\phi\in C(\mathbb{R}^{d})\) such that \(\phi^{-1}\in C(\mathbb{R}^{d})\) and such that \(\phi=|J_{\Phi_{i,j}}|^{\frac{1}{2}}\) on \(A.\) It follows that \[M_{|J_{\Phi_{i,j}}|^{\frac{1}{2}}}\cdot W_{i}TW_{i}^{-1}\cdot M_{|J_{\Phi_{i,j }}|^{-\frac{1}{2}}}=M_{\phi}\cdot W_{i}TW_{i}^{-1}\cdot M_{\phi^{-1}}=\] \[=W_{i}TW_{i}^{-1}+[M_{\phi},W_{i}TW_{i}^{-1}]\cdot M_{\phi^{-1}}\overset{L. \ref{eq:1}}{\in}W_{i}TW_{i}^{-1}+\mathcal{K}(L_{2}(\Omega_{i})).\] Combining the preceding paragraphs, we conclude that \[W_{j}TW_{j}^{-1}\in U_{\Phi_{i,j}}^{-1}\cdot W_{i}TW_{i}^{-1}\cdot U_{\Phi_{i, j}}+\mathcal{K}(L_{2}(\Omega_{j})).\] Denote for brevity \[T_{i}=\mathrm{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}),\quad T_{j}=\mathrm{Ext}_{ \Omega_{j}}(W_{j}TW_{j}^{-1}).\] The preceding display can be now re-written as \[\mathrm{Rest}_{\Omega_{j}}(T_{j})\in U_{\Phi_{i,j}}^{-1}\cdot\mathrm{Rest}_{ \Omega_{i}}(T_{i})\cdot U_{\Phi_{i,j}}+\mathcal{K}(L_{2}(\Omega_{j})).\] Thus, \[T_{j}\in\mathrm{Ext}_{\Omega_{j}}\Big{(}U_{\Phi_{i,j}}^{-1}\cdot\mathrm{Rest}_ {\Omega_{i}}(T_{i})\cdot U_{\Phi_{i,j}}\Big{)}+\mathcal{K}(L_{2}(\mathbb{R}^{ d})).\] By Theorem 3.11, we have \[\mathrm{Ext}_{\Omega_{j}}\Big{(}U_{\Phi_{i,j}}^{-1}\cdot\mathrm{Rest}_{ \Omega_{i}}(T_{i})\cdot U_{\Phi_{i,j}}\Big{)}\in\Pi\] and \[\mathrm{sym}\Big{(}\mathrm{Ext}_{\Omega_{j}}\Big{(}U_{\Phi_{i,j}}^{-1}\cdot \mathrm{Rest}_{\Omega_{i}}(T_{i})\cdot U_{\Phi_{i,j}}\Big{)}\Big{)}=\mathrm{ sym}(T_{i})\circ\Xi_{\Phi_{i,j}}.\] By Lemma 5.2, compact operators belong to \(\Pi.\) Therefore, \(T_{j}\in\Pi\) and \[\mathrm{sym}(T_{j})=\mathrm{sym}(T_{i})\circ\Xi_{\Phi_{i,j}}\overset{L.\ref{ eq:1}}{=}\mathrm{sym}(T_{i})\circ H_{i}\circ H_{j}^{-1}.\] Finally, \[\mathrm{sym}_{j}(T)=\mathrm{sym}(T_{j})\circ H_{j}=\mathrm{sym}(T_{i})\circ H _{i}\circ H_{j}^{-1}\circ H_{j}=\mathrm{sym}(T_{i})\circ H_{i}=\mathrm{sym}_{ i}(T).\] Proof of Theorem 7.7 (1).: The condition (1) in Definition 7.1 is verified in Lemma 7.9. The condition (2) in Definition 7.1 is immediate. The condition (3) in Definition 7.1 is verified in Lemma 7.11. Let us verify the condition (4) in Definition 7.1. If \(i\in\mathbb{I}\) and if \(T\in\mathcal{K}(L_{2}(X,\nu))\) is compactly supported in \(\mathcal{U}_{i}\), then \(\mathrm{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1})\in\mathcal{K}(L_{2}(\mathbb{R}^{d})).\) Using Lemma 5.2, we conclude that \(\mathrm{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1})\in\Pi.\) In other words, \(T\in\Pi_{i}.\) The condition (5) in Definition 7.1 is immediate. Let us verify the condition (6) in Definition 7.1. 
Let \(i\in\mathbb{I}\) and let \(\phi\in C_{c}(\mathcal{U}_{i}).\) Suppose \(\{T_{n}\}_{n\geq 1}\subset M_{\phi}\Pi_{i}M_{\phi}\) are such that \(T_{n}\to T\) in the uniform norm. It follows that \(T\) is compactly supported in \(\mathcal{U}_{i}\) and \[\operatorname{Ext}_{\Omega_{i}}(W_{i}T_{n}W_{i}^{-1})\to\operatorname{Ext}_{ \Omega_{i}}(W_{i}TW_{i}^{-1}),\quad n\to\infty,\] in the uniform norm. The sequence on the left hand side is in \(\Pi.\) Hence, so is its limit. In other words, \(T\in\Pi_{i}.\) Let us verify the condition (7) in Definition 7.1. Let \(i\in\mathbb{I}\) and let \(T\in\Pi_{i}\) and \(\phi\in C_{c}(\mathcal{U}_{i}).\) Let \(\psi=\phi\circ h_{i}^{-1}\in C_{c}(\mathbb{R}^{d}).\) We have \[\operatorname{Ext}_{\Omega_{i}}(W_{i}[T,M_{\phi}]W_{i}^{-1})=[\operatorname{ Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}),M_{\psi}].\] Since \(\operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1})\in\Pi,\) it follows that the commutator on the right hand side is compact by Lemma 5.7. Therefore, the operator on the left hand side is compact and, therefore, so is \([T,M_{\phi}].\) Proof of Theorem 7.7 (2).: The condition (1) in Definition 7.3 is verified in Lemma 7.9. The condition (2) in Definition 7.3 is verified in Lemma 7.11. Let us verify the condition (3) in Definition 7.3. If \(T\in\Pi_{i}\) is compact, then so is \(\operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}).\) Since \(\operatorname{sym}\) vanishes on compact operators, it follows that \[\operatorname{sym}_{i}(T)\stackrel{{ D.7.6}}{{=}}\operatorname{ sym}(\operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}))\circ H_{i}=0\circ H _{i}=0.\] Conversely, if \(T\in\Pi_{i}\) is such that \(\operatorname{sym}_{i}(T)=0,\) then \[\operatorname{sym}(\operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}))=0.\] Since \(\operatorname{ker}(\operatorname{sym})=\mathcal{K}(L_{2}(\mathbb{R}^{d})),\) it follows that \[\operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1})\in\mathcal{K}(L_{2}( \mathbb{R}^{d})).\] Thus, \(T\in\mathcal{K}(L_{2}(X,\nu)).\) The condition (4) in Definition 7.3 is immediate if we take Hom to be the natural embedding \(C(X)\to C(S^{*}X).\) ### Proof of Theorem 1.4 Proof of Theorem 1.4.: By Definition 7.8, \(\Pi_{X}\) is a \(C^{*}\)-algebra and the mapping \(\operatorname{sym}_{X}:\Pi_{X}\to C(S^{*}X)\) is a \(*\)-homomorphism. By Definition 7.8 and Theorem 7.4 (2), \(\operatorname{ker}(\operatorname{sym}_{X})=\mathcal{K}(L_{2}(X,\nu)).\) Let us show that \(\operatorname{sym}_{X}\) is surjective. Denote the image of \(\operatorname{sym}_{X}\) by \(A\) and note that \(A\) is a \(C^{*}\)-subalgebra in \(C(S^{*}X).\) Let \(F\in C^{\infty}(S^{*}X).\) Let \((\phi_{n})_{n=1}^{N}\) be a good4 partition of unity so that \(\phi_{n}\in C_{c}^{\infty}(\mathcal{U}_{i_{n}})\) for \(1\leq n\leq N.\) It follows that Footnote 4: See Definition 2.16. 
\[q_{n}=(F\phi_{n})\circ H_{i_{n}}^{-1}\in C_{c}(\Omega_{i_{n}}\times\mathbb{R} ^{d}).\] By Lemma 2.6, we have \(T_{q_{n}}\in\Pi\) and \(\operatorname{sym}(T_{q_{n}})=q_{n}.\) Let \(\psi_{n}\in C_{c}(\Omega_{i_{n}})\) be such that \(\phi_{n}=\phi_{n}\psi_{n}.\) We have \(T_{n}=M_{\psi_{n}}T_{q_{n}}M_{\psi_{n}}\in\Pi\) and \(\operatorname{sym}(T_{n})=q_{n}\psi_{n}^{2}=q_{n}.\) Since \(T_{n}\) is (bounded and) compactly supported in \(\Omega_{i_{n}},\) it follows that \(S_{n}=W_{i_{n}}^{-1}\mathrm{Rest}_{\Omega_{i}}(T_{n})W_{i_{n}}\) is bounded and compactly supported in \(\mathcal{U}_{i_{n}}.\) It is clear that \(S_{n}\in\Pi_{i_{n}}\subset\Pi_{X}\) and that \[\operatorname{sym}_{X}(S_{n})=\operatorname{sym}(\operatorname{Ext}_{\Omega_ {i_{n}}}(W_{i_{n}}T_{n}W_{i_{n}}^{-1}))\circ H_{i_{n}}=\operatorname{sym}(T_{ n})\circ H_{i_{n}}=q_{n}\circ H_{i_{n}}=F\phi_{n}.\] Thus, \(S=\sum_{n=1}^{N}S_{n}\in\Pi_{X}\) and \[\operatorname{sym}_{X}(S)=\sum_{n=1}^{N}\operatorname{sym}_{X}(S_{n})=\sum_{n=1 }^{N}F\phi_{n}=F.\] Hence, every \(F\in C^{\infty}(S^{*}X)\) belongs to the \(A.\) In other words, \(C^{\infty}(S^{*}X)\subset A.\) Since \(A\) is a \(C^{*}\)-subalgebra in \(C(S^{*}X),\) it follows that \(A=C(S^{*}X).\) Hence, \(\operatorname{sym}_{X}\) is surjective. ## 8. Proof of the Connes Trace Theorem **Lemma 8.1**.: _Let \(g\) be as in Theorem 2.19. Let \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d}).\) We have_ \[M_{\phi}(1-\Delta_{g})^{-\frac{r}{2}}\in\mathcal{L}_{\frac{d}{r},\infty}.\] Proof.: By definition, the principal symbol of \(1-\Delta_{g}\) is \[p_{0}:(t,s)\to\langle g(t)^{-1}s,s\rangle,\quad t,s\in\mathbb{R}^{d}.\] By Theorem 2.5, we have \[(1-\Delta_{g})^{-\frac{r}{2}}\in\Psi^{-r}(\mathbb{R}^{d}).\] We now write \[M_{\phi}(1-\Delta_{g})^{-\frac{r}{2}}=M_{\phi}(1-\Delta)^{-\frac{r}{2}}\cdot( 1-\Delta)^{\frac{r}{2}}(1-\Delta_{g})^{-\frac{r}{2}}.\] When \(r>\frac{d}{2},\) the first factor belongs to \(\mathcal{L}_{\frac{d}{r},\infty}\) by Theorem 1.4 in [15]. When \(r=\frac{d}{2},\) the first factor belongs to \(\mathcal{L}_{\frac{d}{r},\infty}\) by Theorem 1.3 in [15]. When \(r<\frac{d}{2},\) the first factor belongs to \(\mathcal{L}_{\frac{d}{r},\infty}\) by Theorem 1.1 in [15] (applied with \(r<\frac{d}{2}\)). The second factor belongs to \(\Psi^{0}(\mathbb{R}^{d})\) and is, therefore, bounded. **Lemma 8.2**.: _Let \(g\) be as in Theorem 2.19. 
We have_ \[M_{\psi}(1-\Delta_{g})^{-\frac{d}{2}}(1-\Delta)^{\frac{d}{2}}\in\Pi,\] \[\operatorname{sym}\bigl{(}M_{\psi}(1-\Delta_{g})^{-\frac{d}{2}}(1-\Delta)^{ \frac{d}{2}}\bigr{)}(t,s)=\psi(t)\langle g(t)^{-1}s,s\rangle^{-\frac{d}{2}}.\] Proof.: By definition, the principal symbol of \(1-\Delta_{g}\) is \[p_{0}:(t,s)\to\langle g(t)^{-1}s,s\rangle,\quad t,s\in\mathbb{R}^{d}.\] By Theorem 2.5, we have \[(1-\Delta_{g})^{-\frac{d}{2}}=\operatorname{Op}((p_{0}+1)^{-\frac{d}{2}})+ \operatorname{Err}_{0},\quad\operatorname{Err}_{0}\in\Psi^{-d-1}(\mathbb{R}^{ d}).\] Let \[p_{1}:(t,s)\to\psi(t)(1+\langle g(t)^{-1}s,s\rangle)^{-\frac{d}{2}}(1+|s|^{2} )^{-\frac{d}{2}},\quad t,s\in\mathbb{R}^{d}.\] Clearly, \[M_{\psi}\cdot\operatorname{Op}((p_{0}+1)^{-\frac{d}{2}})\cdot(1-\Delta)^{ \frac{d}{2}}=\operatorname{Op}(p_{1}).\] Thus, \[M_{\psi}(1-\Delta_{g})^{-\frac{d}{2}}(1-\Delta)^{\frac{d}{2}}=\operatorname{ Op}(p_{1})+\operatorname{Err}_{1},\quad\operatorname{Err}_{1}\in\Psi^{-1}( \mathbb{R}^{d}).\] Since both \[M_{\psi}(1-\Delta_{g})^{-\frac{d}{2}}(1-\Delta)^{\frac{d}{2}}\text{ and } \operatorname{Op}(p_{1})\] are compactly supported from the left, it follows that so is \(\operatorname{Err}_{1}.\) Thus, \(\operatorname{Err}_{1}\) is a compact operator. Consequently, \(\operatorname{Err}_{1}\in\Pi\) and \(\operatorname{sym}(\operatorname{Err}_{1})=0.\) Let \[p_{2}(t,s)=\psi(t)\langle g(t)^{-1}s,s\rangle^{-\frac{d}{2}},\quad t\in\mathbb{ R}^{d},\quad s\in\mathbb{S}^{d-1}.\] By Lemma 2.7, we have that \(\operatorname{Op}(p_{1})-T_{p_{2}}\) is compact. So, our operator belongs to \(\Pi\) and its symbol equals that of \(T_{p_{2}},\) i.e. equals \(p_{2}.\) **Lemma 8.3**.: _Let \((X,G)\) be a compact Riemannian manifold. Let \(\psi\in C^{\infty}(X)\) be compactly supported in the chart \((\mathcal{U}_{i},h_{i}).\) Let \(\hat{g}_{i}:\mathbb{R}^{d}\to\operatorname{GL}^{+}(d,\mathbb{R})\) be as in Theorem 2.19 and such that \(\hat{g}_{i}=g_{i}\) in the neighborhood of the support of \(\psi\circ h_{i}^{-1}.\) We have_ \[\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{ 2}W_{i}^{-1})-M_{\psi\circ h_{i}^{-1}}^{2}(1-\Delta_{\hat{g}_{i}})^{-1}M_{\psi \circ h_{i}^{-1}}^{2}\in\mathcal{L}_{\frac{d}{3},\infty}.\] Proof.: Let \(\Omega_{i}^{\prime}\subset\Omega_{i}\) be a compact set such that \(\psi\circ h_{i}^{-1}\) is supported in \(\Omega_{i}^{\prime}\) and such that \(g_{i}=\hat{g}_{i}\) on \(\Omega_{i}^{\prime}.\) Let \(\phi\in C_{c}(\mathbb{R}^{d})\) be supported in \(\Omega_{i}^{\prime}\) and such that \(\phi\cdot(\psi\circ h_{i}^{-1})=\psi\circ h_{i}^{-1}.\) We write \[\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{ 2}W_{i}^{-1})=M_{\phi}^{2}\cdot\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{ 2}(1-\Delta_{G})^{-1}M_{\psi}^{2}W_{i}^{-1})=\] \[=M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}\cdot(1-\Delta_{\hat{g}_{i}})M_{\phi} \operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^ {2}W_{i}^{-1}).\] It follows directly from Definition 2.17 that \[(1-\Delta_{\hat{g}_{i}})M_{\phi}=\operatorname{Ext}_{\Omega_{i}}(W_{i}(1- \Delta_{G})M_{\phi\circ h_{i}}W_{i}^{-1}).\] Thus, \[(1-\Delta_{\hat{g}_{i}})M_{\phi}\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi} ^{2}(1-\Delta_{G})^{-1}M_{\psi}^{2}W_{i}^{-1})=\] \[=\operatorname{Ext}_{\Omega_{i}}(W_{i}(1-\Delta_{g})M_{\phi\circ h_{i}}W_{i}^{ -1})\cdot\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{2}(1-\Delta_{G})^{-1} M_{\psi}^{2}W_{i}^{-1})=\] 
\[=\operatorname{Ext}_{\Omega_{i}}(W_{i}(1-\Delta_{G})M_{\psi}^{2}(1-\Delta_{G}) ^{-1}M_{\psi}^{2}W_{i}^{-1}).\] Combining these equalities, we obtain \[\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{2}(1-\Delta_{g})^{-1}M_{\psi}^ {2}W_{i}^{-1})=\] \[=M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}\cdot\operatorname{Ext}_{\Omega_{i}}(W_ {i}(1-\Delta_{G})M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{2}W_{i}^{-1}).\] Now, \[(1-\Delta_{G})M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{2}=M_{\psi}^{4}-[ \Delta_{G},M_{\psi}^{2}](1-\Delta_{G})^{-1}M_{\psi}^{2}.\] Thus, \[\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^ {2}W_{i}^{-1})=M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}M_{\psi\circ h_{i}^{-1}}^ {4}-\] \[-M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}\cdot\operatorname{Ext}_{\Omega_{i}}(W_ {i}[\Delta_{G},M_{\psi}^{2}](1-\Delta_{G})^{-1}M_{\psi}^{2}W_{i}^{-1}).\] Since \(X\) is compact, it follows that \[(1-\Delta_{G})^{-1}:L_{2}(X)\to W^{2,2}(X),\quad[\Delta_{G},M_{\psi}^{2}]:W^{ 2,2}(X)\to W^{1,2}(X)\] are bounded operators. We now write \[[\Delta_{G},M_{\psi}^{2}](1-\Delta_{G})^{-1}=(1-\Delta_{G})^{-\frac{1}{2}} \cdot(1-\Delta_{G})^{\frac{1}{2}}[\Delta_{G},M_{\psi}^{2}](1-\Delta_{G})^{-1},\] where the first factor is in \(\mathcal{L}_{d,\infty}\) and the second factor is bounded. By Lemma 8.1, we have \[M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}\in\mathcal{L}_{\frac{d}{2},\infty}.\] By Holder inequality, we have \[M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}\cdot\operatorname{Ext}_{\Omega_{i}}(W_ {i}[\Delta_{g},M_{\psi}^{2}](1-\Delta_{G})^{-1}M_{\psi}^{2}W_{i}^{-1})\in \mathcal{L}_{\frac{d}{3},\infty}.\] Thus, \[\operatorname{Ext}_{\Omega_{i}}(W_{i}M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^ {2}W_{i}^{-1})-M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}M_{\psi\circ h_{i}^{-1}}^{4 }\in\mathcal{L}_{\frac{d}{3},\infty}.\] Note that \[M_{\phi}(1-\Delta_{\hat{g}_{i}})^{-1}M_{\psi oh_{i}^{-1}}^{4}=M_{\psi oh_{i}^{-1} }^{2}(1-\Delta_{\hat{g}_{i}})^{-1}M_{\psi oh_{i}^{-1}}^{2}+\] \[+M_{\phi}\cdot[(1-\Delta_{\hat{g}_{i}})^{-1},M_{\psi oh_{i}^{-1}}^{2}]\cdot M_{ \psi oh_{i}^{-1}}^{2}.\] Let \(\theta\in C_{c}^{\infty}(\mathbb{R}^{d})\) be such that \(\theta\cdot(\psi\circ h_{i}^{-1})=\psi\circ h_{i}^{-1}.\) We have \[[(1-\Delta_{\hat{g}_{i}})^{-1},M_{\psi oh_{i}^{-1}}^{2}]=(1-\Delta_{\hat{g}_{i }})^{-1}[\Delta_{\hat{g}_{i}},M_{\psi\circ h_{i}^{-1}}^{2}](1-\Delta_{\hat{g}_ {i}})^{-1}=\] \[=(1-\Delta_{\hat{g}_{i}})^{-1}M_{\theta}\cdot[\Delta_{\hat{g}_{i}},M_{\psi \circ h_{i}^{-1}}^{2}](1-\Delta_{\hat{g}_{i}})^{-1}\stackrel{{ L.\ref{lem:2.2.2}}}{{\in}}\mathcal{L}_{\frac{d}{2},\infty} \cdot\mathcal{L}_{d,\infty}=\mathcal{L}_{\frac{d}{3},\infty}.\] Combining the last three formulae, we complete the proof. **Lemma 8.4**.: _Let \((X,G)\) be a compact Riemannian manifold. If \(0\leq\psi\in C^{\infty}(X),\) then_ \[M_{\psi}^{d}(1-\Delta_{G})^{-\frac{d}{2}}M_{\psi}^{d}-\left(M_{\psi}^{2}(1- \Delta_{G})^{-1}M_{\psi}^{2}\right)^{\frac{d}{2}}\in\mathcal{L}_{\frac{2d}{2d +1},\infty}.\] _The same assertion holds for \(\Delta_{g},\) where \(g\) is as in Theorem 2.19 and for \(\psi\in C_{c}^{\infty}(\mathbb{R}^{d}).\)_ Proof.: **Step 1:** We prove by induction that \[M_{\psi}^{2n}(1-\Delta_{G})^{-n}M_{\psi}^{2n}-(M_{\psi}^{2}(1-\Delta_{G})^{-1} M_{\psi}^{2})^{n}\in\mathcal{L}_{\frac{d}{2n+1},\infty},\quad n\geq 0. \tag{8.1}\] Base of induction (i.e., the case \(n=1\)) is obvious. It remains to prove step of induction. 
Suppose (8.1) holds for \(n\) and let us prove it for \(n+1.\) We write \[(M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{2})^{n+1}-M_{\psi}^{2n+2}(1-\Delta_{ G})^{-n-1}M_{\psi}^{2n+2}=\] \[=M_{\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{2}\cdot\left((M_{\psi}^{2}(1-\Delta_{ G})^{-1}M_{\psi}^{2})^{n}-M_{\psi}^{2n}(1-\Delta_{G})^{-n}M_{\psi}^{2n}\right)+\] \[+M_{\psi}^{2}(1-\Delta_{G})^{-\frac{3}{2}}\cdot(1-\Delta_{G})^{\frac{1}{2}}[ \Delta_{G},M_{\psi}^{2n}](1-\Delta_{G})^{-1}\cdot M_{\psi}^{2}(1-\Delta_{G})^{ -n}M_{\psi}^{2n}-\] \[-M_{\psi}^{2n+2}(1-\Delta_{G})^{-n-1}\cdot[M_{\psi}^{2},(1-\Delta_{G})^{n}](1 -\Delta_{G})^{\frac{1}{2}-n}\cdot(1-\Delta_{G})^{-\frac{1}{2}}M_{\psi}^{2n}.\] The first term on the right hand side belongs to \(\mathcal{L}_{\frac{d}{2n+3},\infty}\) by inductive assumption and Holder inequality. Note that the operators \[(1-\Delta_{G})^{\frac{1}{2}}[\Delta_{G},M_{\psi}^{2n}](1-\Delta_{G})^{-1}, \quad[M_{\psi}^{2},(1-\Delta_{G})^{n}](1-\Delta_{G})^{\frac{1}{2}-n},\] are bounded. Hence, the second and third terms on the right hand side belong to \(\mathcal{L}_{\frac{d}{2n+3},\infty}\) by Holder inequality. This establishes the step of induction and, hence, proves the claim in Step 1. **Step 2:** Note that \[M_{\psi}^{d}(1-\Delta_{G})^{-\frac{d}{2}}M_{\psi}^{d}-M_{\psi}^{2d}(1-\Delta_{G })^{-\frac{d}{2}}=\] \[=M_{\psi}^{d}\cdot[M_{\psi}^{d},(1-\Delta_{G})^{-\frac{d}{2}}](1-\Delta_{G})^{ \frac{d+1}{2}}\cdot(1-\Delta_{G})^{-\frac{d+1}{2}}.\] Since the operator \[[M_{\psi}^{d},(1-\Delta_{G})^{-\frac{d}{2}}](1-\Delta_{G})^{\frac{d+1}{2}}\] is bounded, it follows that \[M_{\psi}^{d}(1-\Delta_{G})^{-\frac{d}{2}}M_{\psi}^{d}-M_{\psi}^{2d}(1-\Delta_{G })^{-\frac{d}{2}}\in\mathcal{L}_{\frac{d}{2+1},\infty}.\] Taking adjoints, we obtain \[M_{\psi}^{d}(1-\Delta_{G})^{-\frac{d}{2}}M_{\psi}^{d}-(1-\Delta_{G})^{-\frac{d} {2}}M_{\psi}^{2d}\in\mathcal{L}_{\frac{d}{2+1},\infty}.\] Therefore, \[\left(M_{\psi}^{d}(1-\Delta_{G})^{-\frac{d}{2}}M_{\psi}^{d}\right)^{2}-M_{\psi}^{ 2d}(1-\Delta_{G})^{-d}M_{\psi}^{2d}\in\mathcal{L}_{\frac{d}{2d+1},\infty}. \tag{8.2}\] Applying (8.1) with \(n=d\) and using (8.2), we obtain \[\left(M_{\psi}^{d}(1-\Delta_{G})^{-\frac{d}{2}}M_{\psi}^{d}\right)^{2}-\left(M_ {\psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{2}\right)^{d}\in\mathcal{L}_{\frac{d}{2 d+1},\infty}.\] The assertion follows now from Birman-Koplienko-Solomyak inequality. We remind the reader the following version of Connes Trace Theorem on Euclidean space established in [30]. **Theorem 8.5**.: _Let \(\varphi\) be a normalised continuous trace on \(\mathcal{L}_{1,\infty}.\) If \(T\in\Pi\) is compactly supported from the right (i.e., there exists \(\phi\in C_{c}^{\infty}(\mathbb{R}^{d})\) such that \(T=T\pi_{1}(\phi)\)), then_ \[\varphi(T(1-\Delta)^{-\frac{d}{2}})=c_{d}^{\prime}\int_{\mathbb{R}^{d}\times \mathbb{S}^{d-1}}\operatorname{sym}(T)dm,\] _where \(m\) is the product of Lebesgue measure on \(\mathbb{R}^{d}\) and Haar measure on \(\mathbb{S}^{d-1}.\)_ **Lemma 8.6**.: _Let \((X,G)\) be a compact Riemannian manifold. 
Let \(T\in\Pi_{X}\) be compactly supported in the chart \((\mathcal{U}_{i},h_{i}).\) Let \(\varphi\) be a continuous normalised trace on \(\mathcal{L}_{1,\infty}.\) We have_ \[\varphi(T(1-\Delta_{G})^{-\frac{d}{2}})=c_{d}\int_{\Omega_{i}\times\mathbb{R} ^{d}}\operatorname{sym}(T_{i})(t,\frac{s}{|s|})e^{-q_{i}(t,s)}dtds,\quad T_{ i}=\operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1}).\] _Here, \(q_{i}\) is as in Notation 2.13._ Proof.: Fix \(0\leq\psi\in C^{\infty}(X)\) such that \(T=M_{\psi}TM_{\psi}\) and such that \(\psi\) is compactly supported in \(\mathcal{U}_{i}.\) By the tracial property we have \[\varphi(T(1-\Delta_{G})^{-\frac{d}{2}})=\varphi(M_{\psi}^{d}TM_{\psi}^{d}(1- \Delta_{G})^{-\frac{d}{2}})=\varphi(TM_{\psi}^{d}(1-\Delta_{G})^{-\frac{d}{2} }M_{\psi}^{d}).\] Since \(\varphi\) vanishes on \(\mathcal{L}_{\frac{2d}{2d+1},\infty},\) it follows from Lemma 8.4 that \[\varphi(T(1-\Delta_{G})^{-\frac{d}{2}})=\varphi(TA^{\frac{d}{2}}),\quad A=M_{ \psi}^{2}(1-\Delta_{G})^{-1}M_{\psi}^{2}.\] Since both operators \(T\) and \(A\) are compactly supported in the chart \((\mathcal{U}_{i},h_{i}),\) it follows that \[\varphi(TA^{\frac{d}{2}})=\varphi(W_{i}TW_{i}^{-1}\cdot(W_{i}AW_{i}^{-1})^{ \frac{d}{2}})=\] \[=\varphi(\operatorname{Ext}_{\Omega_{i}}(W_{i}TW_{i}^{-1})\cdot(\operatorname {Ext}_{\Omega_{i}}(W_{i}AW_{i}^{-1}))^{\frac{d}{2}}).\] Denote for brevity \[B_{i}=M_{\psi\circ h_{i}^{-1}}^{2}(1-\Delta_{\hat{g}_{i}})^{-1}M_{\psi\circ h_ {i}^{-1}}^{2}.\] By Lemma 8.3 and Birman-Koplienko-Solomyak inequality, we have \[(\operatorname{Ext}_{\Omega_{i}}(W_{i}AW_{i}^{-1}))^{\frac{d}{2}}-B_{i}^{ \frac{d}{2}}\in\mathcal{L}_{\frac{2d}{2d+1},\infty}.\] Since \(\varphi\) vanishes on \(\mathcal{L}_{\frac{2d}{2d+1},\infty},\) it follows \[\varphi(TA^{\frac{d}{2}})=\varphi(T_{i}B_{i}^{\frac{d}{2}}).\] Using the second assertion in Lemma 8.4, we obtain \[\varphi(TA^{\frac{d}{2}})=\varphi(T_{i}M_{\psi\circ h_{i}^{-1}}^{d}(1-\Delta_{ \hat{g}_{i}})^{-\frac{d}{2}}M_{\psi\circ h_{i}^{-1}}^{d})=\] \[=\varphi(T_{i}(1-\Delta)^{-\frac{d}{2}}X_{i})=\varphi(X_{i}T_{i}(1-\Delta)^{-\frac {d}{2}}),\] where \[X_{i}=(1-\Delta)^{\frac{d}{2}}(1-\Delta_{\hat{g}_{i}})^{-\frac{d}{2}}M_{\psi \circ h_{i}^{-1}}^{d}.\] By Lemma 8.2, we have \(X_{i}\in\Pi.\) Hence, the operator \(X_{i}T_{i}\in\Pi\) is compactly supported from the right. By Theorem 8.5 we have \[\varphi(T(1-\Delta_{g})^{-\frac{d}{2}})=\varphi(TA^{\frac{d}{2}})=c_{d}^{ \prime}\int_{\mathbb{R}^{d}\times\mathbb{S}^{d-1}}\operatorname{sym}(X_{i}T_{ i})dm,\] where \(m\) is the product of Lebesgue measure on \(\mathbb{R}^{d}\) and Haar measure on \(\mathbb{S}^{d-1}.\) By Lemma 8.2, we have \[\operatorname{sym}(X_{i}T_{i})(t,s)=\operatorname{sym}(T_{i})(t,s)\cdot\psi(h _{i}^{-1}(t))^{d}\cdot\langle\hat{g}_{i}(t)^{-1}s,s\rangle^{-\frac{d}{2}}=\] \[=\operatorname{sym}(T_{i})(t,s)\cdot\langle g_{i}(t)^{-1}s,s\rangle^{-\frac{ d}{2}}.\] Thus, \[\varphi(T(1-\Delta_{g})^{-\frac{d}{2}})=c_{d}^{\prime}\int_{\mathbb{R}^{d} \times\mathbb{S}^{d-1}}\operatorname{sym}(T_{i})(t,s)\cdot\langle g_{i}(t)^{ -1}s,s\rangle^{-\frac{d}{2}}dtds.\] By passing to spherical coordinates we obtain \[\int_{\mathbb{R}^{d}\times\mathbb{S}^{d-1}}\operatorname{sym}(T_{i})(t,s) \cdot\langle g_{i}(t)^{-1}s,s\rangle^{-\frac{d}{2}}dtds=\] \[=c_{d}^{\prime\prime}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\operatorname{ sym}(T_{i})(t,\frac{s}{|s|})e^{-q_{i}(t,s)}dtds.\] Combining these 2 equalities, we complete the proof. 
Proof of Theorem 1.5.: Suppose first that \(T\in\Pi_{X}\) is compactly supported in the chart \((\mathcal{U}_{i},h_{i}).\) By Lemma 8.6, we have \[\varphi(T(1-\Delta_{G})^{-\frac{d}{2}})=c_{d}\int_{\Omega_{i}\times\mathbb{R} ^{d}}\operatorname{sym}(T_{i})(t,\frac{s}{|s|})\cdot e^{-q_{i}(t,s)}dtds.\] By (2.9) and (2.10), we have \[\int_{\Omega_{i}\times\mathbb{R}^{d}}\operatorname{sym}(T_{i})(t,\frac{s}{|s |})\cdot e^{-q_{i}(t,s)}dtds=\int_{T^{*}X}\operatorname{sym}_{X}(T)e^{-q_{X}}d\lambda.\] A combination of these two equalities proves the assertion for \(T\) compacly supported in some \(\mathcal{U}_{i}.\) Let now \(T\in\Pi_{X}\) be arbitrary. Let \((\phi_{n})_{n=1}^{N}\) be a fixed good partition of unity. We write \[T=\sum_{n=0}^{N}T_{n},\quad T_{0}=\sum_{m=1}^{N}M_{\phi_{m}}^{\frac{1}{2}}\cdot [M_{\phi_{m}}^{\frac{1}{2}},T],\quad T_{n}=M_{\phi_{n}}^{\frac{1}{2}}TM_{\phi_ {n}}^{\frac{1}{2}},\quad n\geq 1.\] By assumption, \([T,M_{\psi}]\) is compact for every \(\psi\in C(X).\) In particular, \(T_{0}\) is compact. Thus, \[\varphi(T_{0}(1-\Delta_{G})^{-\frac{d}{2}})=0,\quad\operatorname{sym}_{X}(T_ {0})=0.\] By the first paragraph, we have \[\varphi(T(1-\Delta_{G})^{-\frac{d}{2}})=\sum_{n=0}^{N}\varphi(T_{n}(1-\Delta_ {G})^{-\frac{d}{2}})=\] \[=\sum_{n=1}^{N}c_{d}\int_{T^{*}X}\operatorname{sym}_{X}(T_{n})e^{-q_{X}}d \lambda=c_{d}\int_{T^{*}X}\operatorname{sym}_{X}(T)e^{-q_{X}}d\lambda.\] ## Appendix A Proof of globalisation theorem We prove Theorem 7.4 in the following series of lemmas. **Lemma A.1**.: _In the setting of Definitions 7.1 and 7.2, \(\mathcal{A}\) is a unital \(*\)-subalgebra in \(B(L_{2}(X,\nu)).\)_ Proof.: Suppose \(T,S\in\mathcal{A}.\) It is immediate that \(1,T+S,T^{*}\in\mathcal{A}.\) It suffices to show that also \(TS\in\mathcal{A}.\) Let \(i\in\mathbb{I}\) and let \(0\leq\phi\in C_{c}(\mathcal{U}_{i}).\) We write \[M_{\phi}TSM_{\phi}=M_{\phi^{\frac{1}{2}}}TM_{\phi^{\frac{1}{2}}}\cdot M_{\phi^ {\frac{1}{2}}}SM_{\phi^{\frac{1}{2}}}+M_{\phi^{\frac{1}{2}}}\cdot[M_{\phi}^{ \frac{1}{2}},T]\cdot SM_{\phi}+M_{\phi}^{\frac{1}{2}}TM_{\phi}^{\frac{1}{2}} \cdot[S,M_{\phi}^{\frac{1}{2}}]M_{\phi}^{\frac{1}{2}}.\] By Definition 7.2 (1), we have \[M_{\phi^{\frac{1}{2}}}TM_{\phi^{\frac{1}{2}}},M_{\phi^{\frac{1}{2}}}SM_{\phi^ {\frac{1}{2}}}\in\mathcal{A}_{i}.\] Since \(\mathcal{A}_{i}\) is a subalgebra, it follows that \[M_{\phi^{\frac{1}{2}}}TM_{\phi^{\frac{1}{2}}}\cdot M_{\phi^{\frac{1}{2}}}SM_{ \phi^{\frac{1}{2}}}\in\mathcal{A}_{i}.\] By Definition 7.2 (2), the operators \([M_{\phi}^{\frac{1}{2}},T]\) and \([S,M_{\phi}^{\frac{1}{2}}]\) are compact. Therefore, \[M_{\phi^{\frac{1}{2}}}\cdot[M_{\phi}^{\frac{1}{2}},T]\cdot SM_{\phi}+M_{\phi} ^{\frac{1}{2}}TM_{\phi}^{\frac{1}{2}}\cdot[S,M_{\phi}^{\frac{1}{2}}]M_{\phi}^ {\frac{1}{2}}\] is compact. 
However, the latter operator is compactly supported in \(\mathcal{U}_{i}.\) By Definition 7.1 (4), we have \[M_{\phi^{\frac{1}{2}}}\cdot[M_{\phi}^{\frac{1}{2}},T]\cdot SM_{\phi}+M_{\phi }^{\frac{1}{2}}TM_{\phi}^{\frac{1}{2}}\cdot[S,M_{\phi}^{\frac{1}{2}}]M_{\phi} ^{\frac{1}{2}}\in\mathcal{A}_{i}.\] Therefore, \[M_{\phi}TSM_{\phi}\in\mathcal{A}_{i},\quad\phi\in C_{c}(\mathcal{U}_{i}), \quad i\in\mathbb{I}.\] Since also \[[TS,M_{\psi}]=T\cdot[S,M_{\psi}]+[T,M_{\psi}]\cdot S\] is compact for every \(\psi\in C(X),\) it follows that \(TS\in\mathcal{A}.\) **Lemma A.2**.: _In the setting of Definitions 7.1 and 7.2, \(\mathcal{A}\) is a unital \(C^{*}\)-subalgebra in \(B(L_{2}(X,\nu)).\)_ Proof.: It is established in Lemma A.1 that \(\mathcal{A}\) is a unital \(*\)-subalgebra in \(B(L_{2}(X,\nu)).\) It suffices to show that \(\mathcal{A}\) is closed in the uniform norm. Let \(\{T_{n}\}_{n\geq 1}\subset\mathcal{A}\) and let \(T\in B(L_{2}(X,\nu))\) be such that \(T_{n}\to T\) in the uniform norm. Let us show that \(T\in\mathcal{A}.\) Let \(i\in\mathbb{I}\) and let \(\phi\in C_{c}(\mathcal{U}_{i}).\) Take \(\phi_{0}\in C_{c}(\mathcal{U}_{i})\) such that \(\phi\phi_{0}=\phi.\) We have \(M_{\phi}T_{n}M_{\phi}\in\mathcal{A}_{i}\) and, therefore, \[M_{\phi}T_{n}M_{\phi}=M_{\phi_{0}}\cdot M_{\phi}T_{n}M_{\phi}\cdot M_{\phi_{0} }\in M_{\phi_{0}}\mathcal{A}_{i}M_{\phi_{0}},\quad n\geq 1.\] By assumption, \(M_{\phi}T_{n}M_{\phi}\to M_{\phi}TM_{\phi}\) in the uniform norm. Hence, \(M_{\phi}TM_{\phi}\) belongs to the closure of \(M_{\phi_{0}}\mathcal{A}_{i}M_{\phi_{0}}\) in the uniform norm. By Definition 7.1 (6), \(M_{\phi}TM_{\phi}\in\mathcal{A}_{i}.\) If \(\psi\in C(X)\), then \[[T,M_{\psi}]=\lim_{n\to\infty}[T_{n},M_{\psi}],\] is the limit of compact operators in the uniform norm and is, therefore, compact. Combining the results in the preceding paragraphs, we conclude that \(T\in\mathcal{A}\). This completes the proof. Proof of Theorem 7.4 (1).: We already demonstrated in Lemma A.2 that \(\mathcal{A}\) is a unital \(C^{*}\)-subalgebra in \(B(L_{2}(X,\nu))\). It remains to show that \(\mathcal{A}_{i}\subset\mathcal{A}\) for every \(i\in\mathbb{I}\) and that \(\mathcal{K}(L_{2}(X,\nu))\subset\mathcal{A}\). Let \(T\in\mathcal{A}_{i}\) and \(\phi\in C_{c}(\mathcal{U}_{j})\). By Definition 7.1 (2), \(T\) is compactly supported in \(\mathcal{U}_{i}\). Choose \(\phi_{0}\in C_{c}(\mathcal{U}_{i})\) such that \(T=M_{\phi_{0}}T=TM_{\phi_{0}}\). Then \(\phi\phi_{0}\in C_{c}(\mathcal{U}_{i}\cap\mathcal{U}_{j})\) and \(M_{\phi\phi_{0}}\in\mathcal{A}_{i}\). Since \(\mathcal{A}_{i}\) is an algebra, it follows that \(M_{\phi}TM_{\phi}=M_{\phi\phi_{0}}TM_{\phi\phi_{0}}\in\mathcal{A}_{i}\). Since \(M_{\phi}TM_{\phi}\) is compactly supported in \(\mathcal{U}_{i}\cap\mathcal{U}_{j}\), it follows from Definition 7.1 (3) that \(M_{\phi}TM_{\phi}\in\mathcal{A}_{j}\). This verifies the condition (1) in Definition 7.2 for the operator \(T\). Let \(\psi\in C(X)\). As above, choose \(\phi_{0}\in C_{c}(\mathcal{U}_{i})\) such that \(T=M_{\phi_{0}}T=TM_{\phi_{0}}\). It follows that \([T,M_{\psi}]=[T,M_{\psi\phi_{0}}]\). Since \(\psi\phi_{0}\in C_{c}(\mathcal{U}_{i})\), it follows from the condition (7) in Definition 7.1 that \([T,M_{\psi}]\) is compact. This verifies the condition (2) in Definition 7.2 for the operator \(T\). Hence, \(T\in\mathcal{A}\) and, therefore, \(\mathcal{A}_{i}\subset\mathcal{A}\). Let \(T\in\mathcal{K}(L_{2}(X,\nu))\). 
For every \(i\in\mathbb{I}\) and for every \(\phi\in C_{c}(\mathcal{U}_{i})\), the operator \(M_{\phi}TM_{\phi}\) is simultaneously compact and compactly supported in \(\mathcal{U}_{i}\). By Definition 7.1 (4), we have \(M_{\phi}TM_{\phi}\in\mathcal{A}_{i}\). Clearly, \([T,M_{\psi}]\in\mathcal{K}(L_{2}(X,\nu))\) for every \(\psi\in C(X)\). Therefore, \(T\in\mathcal{A}\). Since \(T\in\mathcal{K}(L_{2}(X,\nu))\) is arbitrary, it follows that \(\mathcal{K}(L_{2}(X,\nu))\subset\mathcal{A}\). The purpose of the next lemma is twofold: to establish Theorem 7.4 (3) and to provide a concrete form of hom used in the proof of Theorem 7.4 (2). **Lemma A.3**.: _Suppose we are in the setting of Theorem 7.4. Let \((\phi_{n})_{n=1}^{N}\) be a good5 partition of unity so that \(\phi_{n}\) is compactly supported in \(\mathcal{U}_{i_{n}}\) for \(1\leq n\leq N\). If \(\hom:\mathcal{A}\to\mathcal{B}\) is a \(*\)-homomorphism as in Theorem 7.4 (2), then_ Footnote 5: See Definition 2.16. (A.1) \[\hom(T)=\sum_{n=1}^{N}\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{ \frac{1}{2}}}),\quad T\in\mathcal{A}.\] _In particular, \(\hom\) is unique._ Proof.: For every \(T\in B(L_{2}(X,\nu))\), we write \[T=\sum_{n=0}^{N}T_{n},\quad T_{n}=M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{ \frac{1}{2}}},\quad 1\leq n\leq N,\quad T_{0}=\sum_{k=1}^{N}M_{\phi_{k}^{ \frac{1}{2}}}\cdot[M_{\phi_{k}^{\frac{1}{2}}},T].\] Every \(T_{n}\), \(1\leq n\leq N\), is compactly supported in the chart \((\mathcal{U}_{i_{n}},h_{i_{n}})\). If \(T\in\mathcal{A}\), then \(T_{n}\in\mathcal{A}_{i_{n}}\) for \(1\leq n\leq N\). Hence, \[\hom(T_{n})=\hom_{i_{n}}(T_{n}),\quad 1\leq n\leq N.\] If \(T\in\mathcal{A}\), then \(T_{0}\) is compact by Definition 7.2 (2). Since hom vanish on compact operators, it follows that \[\hom(T_{0})=0.\] Thus, \[\hom(T)=\sum_{n=0}^{N}\hom(T_{n})=\sum_{n=1}^{N}\hom_{i_{n}}(M_{\phi_{n}^{\frac{1} {2}}}TM_{\phi_{n}^{\frac{1}{2}}}),\quad T\in\mathcal{A}.\] We now fix a good partition of unity and prove that the concrete map \(\hom\) introduced in Lemma A.3 is \(*\)-homomorphism. **Lemma A.4**.: _Let \(\hom\) be the mapping on the right hand side in (A.1). 
We have_ \[\hom(TM_{\phi})=\hom(T)\cdot\hom(M_{\phi}),\quad T\in\mathcal{A},\quad\phi\in C (X).\] Proof.: Let \(\psi_{n}\in C_{c}(\mathcal{U}_{i_{n}})\) be such that \(\phi_{n}\psi_{n}=\phi_{n}.\) We write \[M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi}M_{\phi_{n}^{\frac{1}{2}}}=M_{\phi_{n}^{ \frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}}\cdot M_{\phi\psi_{n}}.\] Since \(\hom_{i_{n}}\) is a homomorphism, it follows that \[\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi}M_{\phi_{n}^{\frac{1}{2}}})= \hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}})\cdot\hom_{ i_{n}}(M_{\phi\psi_{n}}).\] It follows from Definition 7.3 (4) and (A.1) that \[\hom_{i_{n}}(M_{\phi\psi_{n}})=\hom(\phi\psi_{n})=\hom(\phi)\cdot\hom(\psi_{n} )=\hom_{i_{n}}(M_{\psi_{n}})\cdot\hom(M_{\phi}).\] Again using the fact that \(\hom_{i_{n}}\) is a homomorphism, we obtain \[\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi}M_{\phi_{n}^{\frac{1}{2}}})= \hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}}\cdot M_{ \psi_{n}})\cdot\hom(M_{\phi}).\] However, by the choice of \(\psi_{n}\), we have \[M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}}\cdot M_{\psi_{n}}=M_{ \phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}}.\] Therefore, \[\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi}M_{\phi_{n}^{\frac{1}{2}}})= \hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}})\cdot\hom( M_{\phi}),\quad 1\leq n\leq N.\] Summing over \(1\leq n\leq N,\) we complete the proof. **Lemma A.5**.: _Let \(\hom\) be the mapping on the right hand side in (A.1). We have \(\hom=\hom_{j}\) on \(\mathcal{A}_{j}\) for every \(j\in\mathbb{I}.\)_ Proof.: Let \(T\in\mathcal{A}_{j}\) and let \((\phi_{n})_{n=1}^{N}\) be as in Lemma A.3. For every \(1\leq n\leq N,\) the operator \(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}}\) is compactly supported both in the chart \((\mathcal{U}_{j},h_{j})\) and in the chart \((\mathcal{U}_{i_{n}},h_{i_{n}}).\) By Definition 7.3 (2), we have \[\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}})=\hom_{j}(M_ {\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}}).\] Let \(\phi\in C_{c}(\mathcal{U}_{j})\) be such that \(T=M_{\phi}T=TM_{\phi}.\) Since \(\hom_{j}\) is a homomorphism, it follows that \[\hom_{j}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}})= \hom_{j}(M_{\phi_{n}^{\frac{1}{2}}}\cdot T\cdot M_{\phi\phi_{n}^{\frac{1}{2}}})=\] \[=\hom_{j}(T)\cdot\hom_{j}(M_{\phi_{n}^{\frac{1}{2}}})\cdot\hom_{j}( M_{\phi\phi_{n}^{\frac{1}{2}}})=\hom_{j}(T)\cdot\hom_{j}(M_{\phi_{n}\phi^{2}}).\] Therefore, \[\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}})=\hom_{j}(T) \cdot\hom_{j}(M_{\phi_{n}\phi^{2}}),\quad 1\leq n\leq N.\] Summing over \(1\leq n\leq N,\) we obtain \[\hom(T)=\hom_{j}(T)\cdot\big{(}\sum_{n=1}^{N}\hom_{j}(M_{\phi_{n}\phi^{2}})\big{)} =\hom_{j}(T)\cdot\hom_{j}(M_{\phi^{2}}).\] Again using the fact the \(\hom_{j}\) is a homomorphism, we obtain \[\hom(T)=\hom_{j}(TM_{\phi^{2}}).\] Taking into account that \(TM_{\phi^{2}}=T,\) we complete the proof. 
**Lemma A.6**.: _We have_ \[\hom(TS)=\hom(T)\cdot\hom(S),\quad T,S\in\mathcal{A}_{j},\quad j\in\mathbb{I}.\] Proof.: Using Lemma A.5 and taking into account that \(\hom_{j}\) is a homomorphism, we write \[\hom(TS)=\hom_{j}(TS)=\hom_{j}(T)\cdot\hom_{j}(S)=\hom(T)\cdot\hom(S).\] **Lemma A.7**.: _If \(T,S\in\mathcal{A}\) and if \(T\) is compactly supported in the chart \((\mathcal{U}_{j},h_{j}),\) then_ \[\hom(TS)=\hom(T)\cdot\hom(S).\] Proof.: Let \(\phi\in C_{c}(\mathcal{U}_{j})\) be such that \(T=TM_{\phi}.\) We write \[TS=TS_{1}+TS_{2},\quad S=M_{\phi^{\frac{1}{2}}}SM_{\phi^{\frac{1}{2}}},\quad S _{2}=M_{\phi^{\frac{1}{2}}}[M_{\phi^{\frac{1}{2}}},S].\] By Definition 7.2 (2), \(S_{2}\) is compact. By construction, hom vanishes on compact operators. It follows that \[\hom(TS)=\hom(TS_{1}).\] Since \(T\) and \(S_{1}\) are compactly supported in the chart \((\mathcal{U}_{j},h_{j}),\) it follows from Lemma A.6 that \[\hom(TS)=\hom(T)\cdot\hom(S_{1}).\] By Lemma A.4, we have \[\hom(TS)=\hom(T)\cdot\hom(S)\cdot\hom(M_{\phi^{\frac{1}{2}}})^{2}=\] \[=\hom(T)\cdot\hom(M_{\phi})\cdot\hom(S)=\hom(TM_{\phi})\cdot\hom(S).\] Since \(T=TM_{\phi},\) the assertion follows. Proof of Theorem 7.4 (2).: It is immediate that \(\hom\) is a linear \(*\)-preserving mapping. We now prove that \(\hom\) preserves multiplication. Let \(T,S\in\mathcal{A}\) and let \((\phi_{n})_{n=1}^{N}\) be as in Lemma A.3. We write \(T=\sum_{n=0}^{N}T_{n},\) where \[T_{n}=M_{\phi_{n}^{\frac{1}{2}}}TM_{\phi_{n}^{\frac{1}{2}}},\quad 1\leq n\leq N,\quad T_{0}=\sum_{k=1}^{N}M_{\phi_{k}^{\frac{1}{2}}}[M_{\phi_{k}^{\frac{1}{2 }}},T].\] Every \(T_{n},\)\(1\leq n\leq N,\) is compactly supported in the chart \((\mathcal{U}_{i_{n}},h_{i_{n}}).\) By Lemma A.7, we have \[\hom(T_{n}S)=\hom(T_{n})\cdot\hom(S),\quad 1\leq n\leq N.\] The operators \(T_{0}\) and \(T_{0}S\) are compact. By construction, hom vanishes on compact operators. It follows that \[\hom(T_{0}S)=0=0\cdot\hom(S)=\hom(T_{0})\cdot\hom(S).\] By linearity, we have \[\hom(TS)=\sum_{n=0}^{N}\hom(T_{n}S)=\sum_{n=0}^{N}\hom(T_{n})\cdot\hom(S)=\hom(T) \cdot\hom(S).\] Thus, hom is a \(*\)-homomorphism. Let us now show that \(\ker(\hom)=\mathcal{K}(L_{2}(X,\nu)).\) If \(T\in\mathcal{A}\) is such that \(\hom(T)=0,\) then \(\hom(T^{*}T)=0.\) By construction of hom, we have \[\sum_{n=1}^{n}\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}T^{*}TM_{\phi_{n}^{\frac{ 1}{2}}})=0.\] Every summand on the left hand side is positive. Therefore, \[\hom_{i_{n}}(M_{\phi_{n}^{\frac{1}{2}}}T^{*}TM_{\phi_{n}^{\frac{1}{2}}})=0, \quad 1\leq n\leq N.\] By Definition 7.3 (3), we have \[M_{\phi_{n}^{\frac{1}{2}}}T^{*}TM_{\phi_{n}^{\frac{1}{2}}}\in\mathcal{K}(L_{2} (X,\nu)),\quad 1\leq n\leq N.\] In other words, \[TM_{\phi_{n}^{\frac{1}{2}}}\in\mathcal{K}(L_{2}(X,\nu)),\quad 1\leq n\leq N.\] Multiplying on the right by \(M_{\phi_{n}^{\frac{1}{2}}}\) and summing over \(1\leq n\leq N,\) we conclude that \(T\in\mathcal{K}(L_{2}(X,\nu)).\) Acknowledgement: The authors sincerely thank Professor N. Higson for the Question 1.2 which was the starting point of this article. The special thanks are due to Professor A. Connes for his insightful comments on the earlier version of this manuscript. In particular, the observation that diffeomorphisms of \(\mathbb{R}^{d}\) considered in this text extend to diffeomorphisms of \(P^{d}(\mathbb{R})\) is due to Professor Connes. Funding: The second author was partially supported by the ARC.
2309.12404
Understanding the language of molecules: Predicting pure component parameters for the PC-SAFT equation of state from SMILES
A major bottleneck in developing sustainable processes and materials is a lack of property data. Recently, machine learning approaches have vastly improved previous methods for predicting molecular properties. However, these machine learning models are often not able to handle thermodynamic constraints adequately. In this work, we present a machine learning model based on natural language processing to predict pure-component parameters for the perturbed-chain statistical associating fluid theory (PC-SAFT) equation of state. The model is based on our previously proposed SMILES-to-Properties-Transformer (SPT). By incorporating PC-SAFT into the neural network architecture, the machine learning model is trained directly on experimental vapor pressure and liquid density data. Combining established physical modeling approaches with state-of-the-art machine learning methods enables high-accuracy predictions across a wide range of pressures and temperatures, while maintaining the physical meaning of PC-SAFT parameters. SPT-PCSAFT demonstrates exceptional prediction accuracy even for complex molecules with various functional groups, outperforming traditional group contribution methods by a factor of four in the mean average percentage deviation. Moreover, SPT-PCSAFT captures the behavior of stereoisomers without any special consideration. To facilitate the application of our model, we provide predicted PC-SAFT parameters of more than 13645 components, making PC-SAFT accessible to all researchers.
Benedikt Winter, Philipp Rehner, Timm Esper, Johannes Schilling, André Bardow
2023-09-21T18:08:38Z
http://arxiv.org/abs/2309.12404v1
# Understanding the language of molecules: ###### Abstract A major bottleneck in developing sustainable processes and materials is a lack of property data. Recently, machine learning approaches have vastly improved previous methods for predicting molecular properties. However, these machine learning models are often not able to handle thermodynamic constraints adequately. In this work, we present a machine learning model based on natural language processing to predict pure-component parameters for the perturbed-chain statistical associating fluid theory (PC-SAFT) equation of state. The model is based on our previously proposed SMILES-to-Properties-Transformer (SPT). By incorporating PC-SAFT into the neural network architecture, the machine learning model is trained directly on experimental vapor pressure and liquid density data. Combining established physical modeling approaches with state-of-the-art machine learning methods enables high-accuracy predictions across a wide range of pressures and temperatures, while maintaining the physical meaning of PC-SAFT parameters. SPT\({}_{\mathrm{PC-SAFT}}\) demonstrates exceptional prediction accuracy even for complex molecules with various functional groups, outperforming traditional group contribution methods by a factor of four in the mean average percentage deviation. Moreover, SPT\({}_{\mathrm{PC-SAFT}}\) captures the behavior of stereoisomers without any special consideration. To facilitate the application of our model, we provide predicted PC-SAFT parameters of more than \(13\,645\) components, making PC-SAFT accessible to all researchers. PC-SAFT machine learning computational chemistry ## 1 Introduction Developing advanced materials like chemical products, fuels, or refrigerants is vital for sustainable solutions in various industries. To achieve this goal, designing new molecules with tailored properties is crucial. However, exploring all possible molecules experimentally is impossible, given the vast array of potential molecular candidates. As a result, models are needed that can rapidly predict molecular properties to streamline the molecular discovery and development of sustainable products and processes. Over the years, the research on predicting molecular properties has led to many approaches based on, e.g., quantitative structure-property relationships (QSPRs) [Katritzky et al., 1995, Hughes et al., 2008], group contribution (GC) methods [Fredenslund et al., 1975, Marrero and Gani, 2001, Hukkerikar et al., 2012, Sauer et al., 2014] and quantum mechanics [Klamt, 1995, Lin and Sandler, 2002, Schleder et al., 2019]. However, many of these classical methods either have low accuracy, are limited to certain functional groups, or require large computational resources. As a recent addition to these approaches, machine learning methods have emerged as a powerful tool due to their ability to learn complex patterns and generalize from data, overcoming some of the shortcomings of the classical methods. Some recent examples of machine learning approaches include methods for the prediction of binary properties such as activity coefficients (Jirasek and Hasse, 2021; Winter et al., 2022; Sanchez Medina et al., 2022; Rittig et al., 2023) or a large range of pure component properties (Liu et al., 2019; Venkatasubramanian, 2019; Ding et al., 2021; Alshehri et al., 2021 // 2022). However, the majority of recent machine learning approaches focus on singular properties, not a holistic description of a system. 
Thermodynamics teaches that equilibrium properties of fluids are not independent but rather related through an equation of state. Modern equations of state are expressed as a thermodynamic potential, usually the Helmholtz energy, as a function of its characteristic variables. All equilibrium properties are then available as partial derivatives of the thermodynamic potential. Equations of state can be broadly classified into three categories: 1) cubic equations of state (such as the Peng-Robinson (Peng and Robinson, 1976) and the Soave-Redlich-Kwong (Soave, 1972) equation of state), 2) highly accurate reference equations for specific systems (including water (Wagner and Pruss, 2002), carbon dioxide (Span and Wagner, 1996), nitrogen (Span et al., 2000), and natural gas components (Kunz and Wagner, 2012)), and 3) molecular equations of state (such as the SAFT family (Chapman et al., 1990; Gross and Sadowski, 2001; Llovell et al., 2004; Lafitte et al., 2013)). The main distinction among these categories lies in the data required for parameterization, with cubic equations of state necessitating the fewest parameters and reference equations of state demanding the most. Parameterizing equations of state typically relies on experimental data, which is often unavailable for novel molecules or expensive to obtain from commercial databases or experiments. In the absence of experimental data, various predictive methods have been developed for equations of state, primarily focused on GC methods (Shaahmadi et al., 2023; Privat and Jaubert, 2023). Since group contribution methods rely on a predefined set of functional groups and their respective contributions, those methods are limited to certain subsets of the molecular space and often struggle to predict the properties of more complex molecules accurately. Furthermore, capturing effects linked to isomers or more intricate intermolecular forces requires the definition of higher-order groups, for which adequate parametrization is more data-demanding(Gani, 2019). Recently, machine learning (ML) methods have been developed to predict pure component parameters for equations of state. The focus has been on the perturbed-chain statistical associating fluid theory (PC-SAFT) equation of state developed by Gross and Sadowski (2001). The ML models use as input either group counts (Matsukawa et al., 2021), molecular fingerprints (Habicht et al., 2023), or a variety of molecular descriptors (Felton et al., 2023). However, these methods are not trained directly on experimental property data but on previously fitted pure component parameters of PC-SAFT. This reliance on previously fitted pure component parameters vastly constraints the amount of available training data, thus likely limiting the applicability domain of these models. Moreover, small errors in predicted pure component parameters can have large effects on the final predicted fluid properties. Consequently, training machine learning models directly on experimental property data is preferred. In previous work, we demonstrated how explicit physical equations could be integrated into a machine learning framework, using the NRTL-equation as an example (Winter et al., 2023). However, integrating PC-SAFT into a machine learning framework presents two additional challenges: Firstly, PC-SAFT is not explicit in measurable properties like vapor pressures and liquid densities. 
Instead, vapor pressures and liquid densities have to be determined iteratively from partial derivatives of the Helmholtz energy, requiring a more sophisticated approach than a straightforward integration into the neural network. Secondly, the physical significance of the pure component parameters of PC-SAFT is the basis of its robust extrapolation, in particular to mixtures. Therefore, any predictive method should ensure that parameters meaningful for their physical basis are obtained. In this work, we present a natural language-based machine learning model for predicting pure component parameters of PC-SAFT trained directly on experimental data. For this purpose, the PC-SAFT equation of state is directly integrated into our previously proposed SMILES-to-Properties-Transformer (SPT) (Winter et al., 2022, 2023). The resulting SPT-PC-SAFT model exhibits high prediction performance, accurately predicting thermophysical properties for complex molecules with various functional groups. Remarkably, our model is also capable of correctly predicting the behavior of stereoisomers. ## 2 The SPT-PC-SAFT model The SPT\({}_{\mathrm{PC-SAFT}}\) model is designed to allow the inclusion of explicit systems of equations into machine learning frameworks to apply physical constraints. This work uses the PC-SAFT equation of state, though other equations of state or any other system of equations could be integrated. SPT is a natural language processing model that utilizes the SMILES code of a molecule as input. Conceptually, our SPT model can be interpreted as an advanced group contribution approach that uses characters in the SMILES code as atomic groups and dynamically assembles higher-order groups via natural language processing. Figure 1 illustrates the overall structure of the proposed SPT\({}_{\rm PC-SAFT}\) model: First, molecules are represented as SMILES codes, which are fed into a natural language processing model that predicts parameters, which are used within the PC-SAFT equation of state to compute vapor pressures and liquid densities at a given temperature or temperature and pressure, respectively. To preserve the physical meaning of the PC-SAFT parameters, the likelihood that a component is associating (\(\lambda_{\rm assoc}\)) or polar (\(\lambda_{\rm polar}\)) is also predicted by SPT\({}_{\rm PC-SAFT}\) and molecules are only assigned associating or polar parameters if the molecule is predicted to be associating or polar. During the model training, the PC-SAFT equation of state is incorporated into the backward pass, allowing for the calculation of analytical gradients of the loss (target function) with respect to the model parameters. This integration enables us to train a machine learning model end-to-end on experimental data and not only on previously fitted parameters. In the following sections, the model and training procedure of SPT\({}_{\rm PC-SAFT}\) are described in detail: Section 2.1 introduces the architecture of the machine learning model and the integration of the PC-SAFT equation. Section 2.2 describes the data sources, data processing, and the definition of training and validation sets. In Section 2.3, we describe the selection of hyper-parameters and the training process of SPT\({}_{\rm PC-SAFT}\). 
### Model architecture The model architecture of SPT\({}_{\rm PC-SAFT}\) (Figure 2) is largely based on our previous SPT models (Winter et al., 2022, 2023), which are in turn based on the natural language model GPT-3 (Brown et al., 2020) using a decoder-only transformer architecture implemented by Vaswani et al. (2017). The transformer architecture has been shown suitable for understanding not only the grammar of natural language but also the molecular grammar embedded within SMILES codes, a linear text-based molecular representation introduced by Weininger (1988), leading to many successful applications in the field of chemistry (Schwaller et al., 2019; Honda et al., 2019; Lim and Lee, 2021; Kim et al., 2021). In the following, we present the SPT\({}_{\rm PC-SAFT}\) architecture in three sections: input embedding (Section 2.1.1), multi-head attention (Section 2.1.2), and head (Section 2.1.3). #### 2.1.1 Input embedding SPT\({}_{\rm PC-SAFT}\) predicts thermodynamic equilibrium properties as calculated from PC-SAFT and the corresponding pure component parameters using the SMILES codes of a molecule as input. The SMILES code (Weininger, 1988) has become a widely adopted molecular representation for machine learning applications in chemical engineering and has been used in numerous recent studies (Honda et al., 2019; Wang et al., 2019; Schwaller et al., 2019; Lim and Lee, 2021). The SMILES code offers a linear string representation for complex branched and cyclic molecules. In the SMILES codes, atoms are denoted by their periodic table symbols, such as the character "N" for nitrogen, while hydrogen Figure 1: Overarching structure of the SPT\({}_{\rm PC-SAFT}\) model and training. Molecules are represented as SMILES and passed into a natural language model to predict PC-SAFT parameters, which are, in turn, used to calculate vapor pressures \(p_{\rm sat}\) and liquid densities \(\rho_{\rm L}\) for a given temperature or temperature and pressure, respectively. Furthermore, the likelihood of molecules having associating (\(\lambda_{\rm assoc}\)) or polar (\(\lambda_{\rm polar}\)) interactions is predicted. During training, the loss function, i.e., target function, is calculated based on the natural logarithm of the pressure or density and the association and polarity likelihoods. atoms are implicitly assumed. While single bonds are also implicitly assumed, double and triple bonds are indicated by the characters "=" and "#", respectively. Branches are enclosed in brackets, and connections of ring structures are represented by numbers. For instance, the molecule 2-ethyl phenol can be depicted using the following SMILES code: Oc1c(CC)ccc1. Additional symbols are available for special molecules like "/" and "" for cis/trans isomers or "@" for enantiomers. The input of \(\text{SPT}_{\text{PC-SAFT}}\) consists of the SMILES codes representing the molecule of interest with special characters denoting the start of the sequence \(<\!\text{SOS}\!>\), and the end of the sequence \(<\!\text{EOS}\!>\). The remainder of the input sequence is filled up to a maximum sequence length \(n_{\text{seq}}\) of \(128\) with padding \(<\!\text{PAD}\!>\): \[<\!\text{SOS}\!>,\text{SMILES},<\!\text{EOS}\!>,<\!\text{PAD}\!>,...\] To render the input string suitable for the machine learning model, the string is tokenized, breaking the sequence into tokens that can each be represented by a unique number. Generally, tokens may comprise multiple characters, but in this work, each token consists of a single character. 
The tokenization process for SMILES can be compared to assigning first-order groups in group contribution methods. The complete vocabulary containing all tokens can be found in the Supporting Information Section 1. The input sequence undergoes one-hot encoding, where each token is represented by a learned vector of size \(n_{\text{emb}}=512\). An input matrix of size \(n_{\text{emb}}\times n_{\text{seq}}\) is generated by concatenating the vectors representing the tokens of the input sequence. After encoding the input sequence, an additional vector is appended to the right of the input matrix, which holds a linear projection of continuous variables into the embedding space. In the case of the original SPT model (Winter et al., 2022), temperature information is encoded in this vector. In \(\text{SPT}_{\text{PC-SAFT}}\), no continuous variables are supplied here, as temperature and pressure information is only introduced in the final stage (see Figure 2), and thus, the continuous variable vector only contains zeros. After adding the continuous variables, the resulting input matrix has a size of \(n_{\text{emb}}\times n_{\text{seq}}+1\). Subsequently, a learned positional encoding, which contains a learned embedding for each position, of size \(n_{\text{emb}}\times n_{\text{seq}}+1\) is added to the input matrix. At this stage, the input matrix contains information on all atoms and bonds in the molecule and their positions. However, each token lacks information about its surroundings, as no information has been exchanged between tokens yet. This information sharing between tokens is discussed in the following multi-head attention section. #### 2.1.2 Multi-head attention The multi-head attention section sequentially stacks multi-head attention blocks (Vaswani et al., 2017). Within each block, the input undergoes layer normalization before being passed to the multi-head attention mechanism. This mechanism enables information transfer between tokens. Although individual tokens possess only self-information after the input encoding, the multi-head attention mechanism permits tokens to acquire knowledge about their neighbors or other relevant atom or structural tokens within their molecule. Consequently, a transformer block could be viewed as a self-learning \(\text{n}^{\text{th}}\)-order group contribution method, where each token, or the smallest possible group, learns the significance of other tokens and self-assembles higher-order groups based on the molecular structure. For a more comprehensive and visual explanation, readers are directed to the blog of Alammar (2018) or the comprehensive description in the Supporting Information of our previous work (Winter et al., 2023). #### 2.1.3 The PC-SAFT head After the multi-head attention block, the model obtains a high-dimensional representation of the molecule (\(n_{\text{emb}}\times n_{\text{seq}}\)), which needs to be transformed into a set of pure component parameters to be handled within the PC-SAFT equation of state. This dimensionality reduction occurs in the head of the model. We have demonstrated in previous work on the prediction of activity coefficients that it is possible to incorporate physical models like the NRTL equation into the head of our SPT model. However, the PC-SAFT model introduces additional challenges not present in NRTL: First, the pure component parameters of PC-SAFT have inherent physical meaning, and preserving this physical meaning cannot be guaranteed in a simple regression model. 
Second, the target properties used for training the model, i.e., vapor pressures and liquid densities, are not direct outputs of PC-SAFT; instead, these target properties must be iteratively converged. While software packages are available that provide robust computations of bulk and phase equilibrium properties with PC-SAFT (Rehner et al., 2023), it is crucial to ensure that the neural network maintains an intact computational graph to allow the network to obtain a derivative of the target value with respect to all model parameters. An intact computational graph can be ensured when all calculations are conducted within a consistent framework like PyTorch. **Preservation of physical meaning** Figure 2: Architecture of SPT\({}_{\text{PC-SAFT}}\) for predicting PC-SAFT parameters using SMILES codes in an end-to-end training. The model takes the SMILES code of a molecule as input. In the input encoding section, information about the individual tokens within the SMILES code and their positions are merged into a single matrix. The multi-head attention section facilitates information exchange between parts of the molecule. In the head section of SPT\({}_{\text{PC-SAFT}}\), the high-dimensional output from the transformer is first reduced to the number of parameters required by the PC-SAFT head. Subsequently, the output is directed to the PC-SAFT head, which incorporates the PC-SAFT equation of state. The PC-SAFT head receives the temperature \(T\) as additional input for the prediction of vapor pressures and the temperature \(T\) and the pressure \(p\) for the prediction of liquid densities. The outputs of the PC-SAFT head are either vapor pressures and liquid densities as well as association and polarity likelihoods. The PC-SAFT equation of state is physics-based, and its pure component parameters are related to properties of the underlying molecular model. For example, the pure component parameter \(m\) denotes the (potentially non-integer) number of segments on a hypothetical reference fluid, while \(\sigma\) and \(\varepsilon\) correspond to Lennard-Jones interaction parameters that can be expected to be reasonably transferable between chemically similar molecules. Fortunately, we observe that the pure component parameters \(m\), \(\sigma\), and \(\varepsilon\) naturally converge to subjectively reasonable values. However, this natural convergence is not the case for the pure component parameters that describe polar interactions (\(\mu\)) and associating interactions (\(\varepsilon^{AB},\kappa^{AB}\)). These pure component parameters should be 0 for non-polar or non-associating components. This behavior, however, cannot be guaranteed if the parameters are fitted independently by the model purely based on experimental data. Therefore, to ensure the physical meaning of the polar and associating pure component parameters, the SPT\({}_{\text{PC-SAFT}}\) model must learn if a component has associating and polar interactions. To preserve the physical meaning, we predict the polarity and association likelihood in the head of the SPT\({}_{\text{PC-SAFT}}\) model. A graphical description of the PC-SAFT head is given in Fig. 3. After leaving the multi-head attention section, the model has an output of size \(n_{emb}\times n_{seq}\). To reduce the dimensionality, a max function is first applied across the sequence dimensions, resulting in a vector of size \(n_{emb}\times 1\). 
Afterward, a linear layer projects this vector to a vector of the auxiliary pure component parameters \(\mathbf{a}\) of size \(\mathbf{8}\), which contains the auxiliary pure component parameters of PC-SAFT (\(\bar{m}\), \(s\), \(e\), \(e^{\text{AB}}\), \(k^{\text{AB}}\), and \(u\)) and auxiliary association and polarity likelihoods (\(A\), \(P\)). From the auxiliary parameters, the pure component parameters of PC-SAFT \(\mathbf{\phi}\) are calculated using the following equation: \[\mathbf{\phi}=\left(1+\frac{\mathbf{a}}{10}\right)\cdot\mathbf{\phi}_{\text{mean}} \tag{1}\] Here, \(\mathbf{a}\) is the auxiliary parameter, and \(\mathbf{\phi}_{\text{mean}}\) is an externally set hyperparameter determined via a hyperparameter scan. The auxiliary parameters ensure that reasonable values for the pure component parameters of PC-SAFT are reached at the beginning of the training when \(\mathbf{a}\) can be expected to be small values around \(0\), effectively serving as a staring value for the model. Properly setting the \(\mathbf{\phi}_{\text{mean}}\) parameters ensures quicker convergence. The parameters \(m\), \(\sigma\), and \(\varepsilon\) are now passed directly to the PC-SAFT equation of state, while \(\mu\), \(\epsilon^{\text{AB}}\), \(\kappa^{\text{AB}}\) are calculated in the next step. The calculation of the polar and associating pure component parameters accounts for the polarity and associating likelihoods. To calculate the likelihood, the auxiliary likelihood parameters \(A\) and \(P\) are passed through a sigmoid Figure 3: Head section of the model. The natural language processing section of the SPT model returns a vector of length 8. This vector contains six auxiliary pure component parameters of PC-SAFT (\(\bar{m}\), \(s\), \(e\), \(e^{\text{AB}}\), \(k^{\text{AB}}\), and \(u\)) and the auxiliary association and polarity likelihoods \(A\) and \(P\). The auxiliary likelihood parameters are passed through a sigmoid function that normalizes them, returning the association and polarity likelihood \(\lambda_{\text{assoc}}\) and \(\lambda_{\text{polar}}\). The associating parameters \(\varepsilon^{AB}\) and \(\kappa^{AB}\) are calculated by multiplying the auxiliary parameters \(e\) and \(k\) with \(\lambda_{\text{assoc}}\). The polarity parameter \(\mu\) is calculated by multiplying \(u\) with \(1-\lambda_{\text{assoc}}\) and \(\lambda_{\text{polar}}\). The resulting pure component parameters are then used in the PC-SAFT equation of state to calculate either vapor pressure or liquid density using the FeO\({}_{\text{s}}\) framework [10]. The results of the FeO\({}_{\text{s}}\) calculation as well as \(\lambda_{\text{assoc}}\) and \(\lambda_{\text{polar}}\) are passed to the target function. function that normalizes them between 0 and 1: \[\lambda_{\mathrm{assoc}} =\frac{1}{1+e^{-A}} \tag{2}\] \[\lambda_{\mathrm{polar}} =\frac{1}{1+e^{-P}} \tag{3}\] Subsequently, the associating and polar pure component parameters of PC-SAFT are determined by multiplying the likelihood with the auxiliary parameters. For associating molecules, we assume that the association contribution dominates the polar contribution. Thus, the polar pure component parameter is set to 0. 
Accordingly, the parameters \(\mu\), \(\varepsilon^{\mathrm{AB}}\), and \(\kappa^{\mathrm{AB}}\) are calculated as: \[\varepsilon^{\mathrm{AB}} =e\cdot\lambda_{\mathrm{assoc}} \tag{4}\] \[\kappa^{\mathrm{AB}} =k\cdot\lambda_{\mathrm{assoc}}\] (5) \[\mu =u\cdot(1-\lambda_{\mathrm{assoc}})\cdot\lambda_{\mathrm{polar}} \tag{6}\] The parameters \(\varepsilon^{\mathrm{AB}}\), \(\kappa^{\mathrm{AB}}\), and \(\mu\) are then passed into the PC-SAFT equation of state to compute either saturation pressures \(p^{\mathrm{sat}}\) or liquid densities \(\rho^{\mathrm{L}}\). The resulting vapor pressures and liquid densities are subsequently passed into the target function along with the associating and polar likelihood \(\lambda_{\mathrm{assoc}}\) and \(\lambda_{\mathrm{polar}}\), respectively. The SPT\({}_{\mathrm{PC-SAFT}}\) setup thus allows the model to learn if components are associating or polar and predict pure component parameters of PC-SAFT with a physical basis. **Preservation of the computational graph** The PC-SAFT equation of state calculates the Helmholtz energy as a function of temperature, mole numbers, and volume. Thermodynamic properties that can be expressed as derivatives of the Helmholtz energy, such as pressure, chemical potential, and heat capacity, are also explicit in terms of temperature, volume, and mole numbers, or, for intensive properties, in temperature \(T\) and density \(\rho\). However, the pure component vapor pressure is not directly accessible via a derivative of the Helmholtz energy. Instead, the pure component vapor pressure is implicitly defined as the solution of three nonlinear equations, \[\mu(T,\rho_{\mathrm{V}}) =\mu(T,\rho_{\mathrm{L}}) \tag{7}\] \[p(T,\rho_{\mathrm{V}}) =p_{\mathrm{sat}}\] (8) \[p(T,\rho_{\mathrm{L}}) =p_{\mathrm{sat}} \tag{9}\] which need to be solved for the unknown densities \(\rho_{\mathrm{V}}\) and \(\rho_{\mathrm{L}}\), and the vapor pressure \(p_{\mathrm{sat}}\). Fast and robust solvers for this system of equations are implemented in the FeO\({}_{\mathrm{s}}\) framework [10] used in this work. However, for the training of the millions of parameters within SPT\({}_{\mathrm{PC-SAFT}}\), it is mandatory to maintain the full computational graph through the entirety of the neural network, from the output to the input embeddings. If the computational graph is interrupted, derivatives cannot be calculated, rendering learning and thus, training the model impossible. The call to an external program, such as the FeO\({}_{\mathrm{s}}\) framework, breaks the computational graph. To address this issue and ensure a fully connected computational graph, we implement the Helmholtz energy calculation of PC-SAFT in PyTorch and conduct the last Newton step of the free energy minimization using the already converged solution from FeO\({}_{\mathrm{s}}\) as starting point. In general, the derivatives of an implicitly defined function \(x(\mathbf{\phi})\) that depends on parameters \(\mathbf{\phi}\) via \(f(x,\mathbf{\phi})=0\), can be found by calculating a single step of a Newton iteration starting from an already converged solution \(x^{\star}\) as: \[x(\mathbf{\phi})=x^{\star}-\frac{f(x^{\star},\mathbf{\phi})}{f_{x}(x^{\star},\mathbf{\phi })}. \tag{10}\] Because \(f(x^{\star},\mathbf{\phi})\) is by construction 0, the function value of \(x\) does not change. 
However, due to the explicit dependence on \(\mathbf{\phi}\) automatic differentiation frameworks using both forward mode, in which case \(\mathbf{\phi}\) contains additional dual parts, or backward mode, in which case all operations are recorded on a computational graph, can readily determine the first derivative of \(x\) with respect to \(\mathbf{\phi}\). Applying the concept to the calculation of liquid densities leads to: \[\rho_{\text{L}}(T,p,\mathbf{\phi})=\rho_{\text{L}}^{*}-\frac{p(T,\rho_{\text{L}}^{*}, \mathbf{\phi})-p}{p_{\rho}(T,\rho_{\text{L}}^{*},\mathbf{\phi})} \tag{11}\] For the vapor pressures, after solving the system of three equations shown above, the last Newton step is: \[p^{\text{sat}}(T,\mathbf{\phi})=-\frac{a(T,\rho_{\text{V}}^{*},\mathbf{\phi})-a(T,\rho_ {\text{L}}^{*},\mathbf{\phi})}{\frac{1}{\rho_{\text{V}}^{*}}-\frac{1}{\rho_{\text{L }}^{*}}} \tag{12}\] with the molar Helmholtz energy \(a(T,\rho,\mathbf{\phi})\). It is particularly convenient that the expression for the vapor pressure only requires an evaluation of the Helmholtz energy in which PC-SAFT and other equations of state are formulated anyway. For liquid densities, however, the pressure and its derivative with respect to density are required. Implementing these derivatives by hand is cumbersome and error-prone. Therefore, we use an additional layer of forward automatic differentiation with second-order dual numbers (Rehner and Bauer, 2021) in which the real and dual parts are PyTorch tensors. Implementing Eqs. (11) and (12) into the neural network ensures a fully connected computational graph that can be used by PyTorch to evaluate derivatives of the loss function while still allowing the use of efficient external routines to converge states. While we developed this method to use equations of state, it could also be applied to a wider range of problems where parameters for implicit equations have to be determined using neural networks. ### Data SPTPC-SAFT is trained using vapor pressure and liquid density data obtained from, among others, the Dortmund Data Bank (DDB) (Dortmund Datenbank, 2022), the DIPPR database (Thomson, 1996) and the ThermoML database (Riccardi et al., 2022) curated by Esper et al. (2023). From this large data collection, all molecules are removed that do not contain at least one carbon atom and most metal complexes except silicon. The remaining data is then split into two sets depending on their data quality: the clean and the remaining dataset. The clean dataset contains molecules that have already been used for the fitting of pure component parameters of PC-SAFT by Esper et al. (2023) and contains \(1103\) components, \(189\,504\) vapor pressure data points, and \(282\,642\) liquid density data points. The pressure data in the clean dataset have undergone a significant effort to eliminate outliers (Esper et al., 2023). Only data from the clean dataset is used for validation. The remaining dataset includes the data of the aforementioned databases that is not suitable to directly fit pure component PC-SAFT parameters, as not sufficiently many vapor pressures and liquid densities are available for a given component. However, this data can still be used in SPTPC-SAFT due to the end-to-end training approach. The remaining dataset has a lower data quality than the clean dataset but contains a larger variety of molecules. 
Several steps were conducted to clean the remaining dataset: First, all data points at a vapor pressure of \(1.00\pm 0.01\,\mathrm{bar}\) at \(298.15\pm 1.00\,\mathrm{K}\) are excluded, as these seem to be data points entered erroneously. Then, we removed data points that could not be fitted using PC-SAFT. To remove the data points, we trained eight SPTPC-SAFT models on the clean and remaining data for 15 epochs using a SmoothL1 loss, thus giving less weight to outliers than using an MSE loss. Afterward, we removed all data points from the remaining dataset that have a training loss larger than \(0.5\). In total, \(21\,456\) of \(233\,988\) data points were removed from the remaining data. Figure S1 in the Supporting Information illustrates typical examples of errors identified using our data-cleaning method. Manual review of the removed data points showed that mostly unreasonable-looking data points were removed from the remaining data. Overall, \(160\,186\) data points for vapor pressure and \(52\,343\) data points for liquid densities remain in the data set with \(12\,019\) and \(2067\) molecules each, for vapor pressure and liquid density, respectively. As our model was employed to clean the remaining data, it is important to note that the remaining dataset is solely used for training the model and not for any form of model validation. For model validation, only the clean dataset is used (Esper et al., 2023). Thereby, we ensure that our model's performance evaluation is based on reliable and high-quality data and unbiased by our data cleaning steps. Some of the molecules in the training data are structural isomers such as _cis_-2-butane and _truns_-2-butane. SPT uses isomeric SMILES codes and can thus distinguish between the _cis_ and _truns_ versions of molecules. However, for some isomeric molecules, our training data also contains data only labeled with the non-isomeric SMILES. In these cases, the data is either one unknown isomer, a mixture of isomers with very similar properties, or mislabeled data of two differently behaving isomers. To avoid ambiguities, we dropped any data related to non-isomeric SMILES codes for components of which isomeric SMILES are present. To train the model to recognize if a component is associating or polar, the training data is labeled. To label molecules as associating or polar, we use the following approaches: For associating components, we use RDKit to identify molecules with at least one hydrogen bond donor site and one hydrogen bond acceptor site (Landrum et al., 2023). Components that meet this criterion are labeled as associating. To label molecules as polar, a consistent database of dipole information is needed. Here, we use the COSMO-Therm database 2020, where the dipole moment is available for \(12\,182\) molecules in the energy files. If the dipole moment is above \(0.35\,\mathrm{D}\), the molecule is labeled as polar. The limit is set semi-arbitrary by looking at molecules close to the limit and judging if they are polar. Examples of molecules around this polarity threshold are shown in Figure 4. If a component in the training data is unavailable in the COSMO-Therm database, its polarity likelihood is masked in the loss function and thus ignored during training. Polarity information is available for around \(95\,\mathrm{\char 37}\) of all molecules in the clean dataset and \(50\,\mathrm{\char 37}\) of the molecules in the remaining dataset. 
#### Validation splits In this study, we employ an n-fold cross-validation approach for validating our model using 8 training/validation splits. The data splits are conducted along molecules, ensuring that all data points of a given molecule are either in the training or validation set. This data splitting allows the validation sets to test the model's ability to predict properties of entirely unknown molecules. However, we impose certain restrictions on the data used for validation. Only components with at least three carbon atoms are included in the validation set, as extrapolation from larger molecules towards very small molecules, such as methane and carbon dioxide, works poorly and the space of small molecules is already well-explored experimentally. Thus, pure component parameters of PC-SAFT are generally available for small molecules (Esper et al., 2023). Additionally, structural isomers are treated as one component with respect to training/validation splits. Therefore, if the _trans_ version of a molecule is in the validation set, the _cis_ version is also included in the validation set, and vice versa. The same workflow is applied for enantiomers. ### Hyperparameters and training The base model architecture for SPT\({}_{\mathrm{PC-SAFT}}\) is adopted from our previous SPT-NRTL model (Winter et al., 2023) with no further modifications to the architectural hyperparameters such as embedding size, number of layers, and hidden factor. For training SPT\({}_{\mathrm{PC-SAFT}}\), we use an initial model pretrained on concentration-dependent activity coefficients using a regression head described in Winter et al. (2023). To identify good values for \(\mathbf{\phi}_{\mathrm{mean}}\), we generated a synthetic training dataset with \(1494\) pure component parameters of PC-SAFT from the work of Esper et al. (2023) and used these parameters to calculate \(100\) pressure and density values. To validate our model's performance, we reserved \(5\,\mathrm{\char 37}\) of the components as a separate validation set. Over this set, a scan was conducted using the parameter values listed in Table 1, and the set of parameters leading to the lowest loss on the test set was chosen. Figure 4: Distribution of dipole moments in the COSMO-Therm database and the threshold of \(0.35\,\mathrm{D}\) set to assign polarity. To give a better sense of molecules around the threshold, some molecules with dipole moments close to \(0.35\,\mathrm{D}\) are shown. The x-axis represents the range of dipole moments, while the y-axis shows the frequency of molecules in each range. During the hyperparameter scan, we found that values for \(\mathbf{\phi}_{\rm mean}\) that overestimate the critical point help with the convergence. The overestimation ensures that most calculations return valid results in the initial stages of the model training, speeding up the training and avoiding divergence of the model. The training was performed on 4 RTX-3090s using a learning rate of \(1\times 10^{-4}\) and 50 epochs. Training takes about \(10\,\mathrm{h}\) for 8 training/validation splits running two models per GPU in parallel. ## 3 Predictive capabilities of SPT-PC-SAFT In our analysis of predictive performance, we utilize the Average Percentage Deviation (APD) as our primary metric. 
## 3 Predictive capabilities of SPT-PC-SAFT

In our analysis of predictive performance, we utilize the Average Percentage Deviation (APD) as our primary metric. To start, we determine the APD for individual molecules: \[\text{APD}_{i}=\frac{1}{M_{i}}\sum_{j=1}^{M_{i}}\frac{|y^{\prime}_{i,j}-y_{i,j}|}{y_{i,j}} \tag{13}\] where \(M_{i}\) is the number of data points for component \(i\), \(y_{i,j}\) is the experimental value, and \(y^{\prime}_{i,j}\) is the predicted value for component \(i\) and data point \(j\). Subsequently, we evaluate either the mean or median of these deviations across the entire dataset. This approach ensures that molecules with numerous data points, such as propane, do not disproportionately influence the reported prediction accuracy. Deviations for the vapor pressure \(p^{\mathrm{vap}}\) and liquid density \(\rho_{\text{L}}\) are calculated independently of each other. Unless explicitly stated, we focus on the deviation in the validation set, representing the model's prediction, rather than the deviation in the training set.

### Prediction of vapor pressures and liquid densities

The \(\text{SPT}_{\text{PC-SAFT}}\) model exhibits a mean APD of \(13.5\,\%\) and a median APD of \(8.7\,\%\) for predicting vapor pressures in our validation set, consisting of \(870\) components. Figure 5 presents a cumulative deviation curve of the APD for the validation set and the training set. The training set is comparable to a fitted model and should thus provide an upper bound for the accuracy of PC-SAFT on our training dataset. The results highlight the robustness of \(\text{SPT}_{\text{PC-SAFT}}\): only a minor portion of the molecules in the validation set exhibits a notably high APD, with \(3\,\%\) exceeding an APD of \(50\,\%\) and only \(0.4\,\%\) surpassing an APD of \(100\,\%\). This indicates accurate predictions of the vapor pressure for the vast majority of the validation set's molecules. Figure 5 additionally illustrates how the APD translates into pressure-temperature (_p/T_) plots and demonstrates the diverse set of molecules for which \(\text{SPT}_{\text{PC-SAFT}}\) can account. These examples are cyclohexylamine with an APD of \(2\,\%\), ethyl cyanoacetate with an APD of \(9\,\%\), octamethyl-1,3,5,7,2,4,6,8-tetraoxatetrasilocane with an APD of \(19\,\%\), and triacetin with an APD of \(51\,\%\).

The relationship between APD, molecule size, and vapor pressure range is further illustrated in Figure 6, which displays the APD in vapor pressure prediction as a function of the number of heavy atoms and pressure. A region of relatively low APD is achieved for molecules containing between 4 and 20 heavy atoms within a vapor pressure range of \(1\,\mathrm{kPa}\) to \(100\,\mathrm{MPa}\). In contrast, high deviation predominantly occurs at the edges of the data space, particularly for large molecules at low pressures. This behavior might be due to a lower density of data and higher uncertainty when measuring low-pressure systems.

In Figure 7, the relationship between APD and molecular families is explored. The classification of the molecular families is based on the DIPPR database (Thomson, 1996), which contains families for \(609\) out of the \(870\) components in the validation set. A noticeable correlation is obtained between the expected prediction error and the molecular families. Notably, molecular families composed solely of oxygen and carbon exhibit above-average prediction accuracy. In contrast, fluorinated, halogenated (bromine and iodine), and particularly nitrogen-containing compounds present challenges in prediction. 
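For reference, eq. (13) and its molecule-wise aggregation can be written compactly as follows (a minimal NumPy sketch; function and variable names are ours):

```python
import numpy as np

def apd_per_molecule(y_exp, y_pred):
    # Average percentage deviation of eq. (13) for one molecule.
    y_exp, y_pred = np.asarray(y_exp), np.asarray(y_pred)
    return np.mean(np.abs(y_pred - y_exp) / y_exp)

def dataset_apd(data):
    """data: dict molecule -> (y_exp, y_pred). Aggregating per-molecule
    APDs keeps data-rich molecules from dominating the statistics."""
    apds = np.array([apd_per_molecule(ye, yp) for ye, yp in data.values()])
    return {"mean_APD": apds.mean(), "median_APD": np.median(apds)}
```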
A comprehensive list of the validation set, categorized by molecular family, can be found in the Supporting Information (SI). Overall, \(\text{SPT}_{\text{PC-SAFT}}\) performs well for the majority of molecular families.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Parameter & \(m\) & \(\sigma\) (Å) & \(\varepsilon/k\) (K) & \(\mu\) (D) & \(\kappa^{AB}\) & \(\varepsilon^{AB}/k\) (K) \\ \hline \(\mathbf{\phi}_{\rm mean}\) & 2 & 5 & 300 & 3 & 0.005 & 1500 \\ \hline \hline \end{tabular} \end{table} Table 1: Final mean parameter values \(\mathbf{\phi}_{\rm mean}\) of the parameter scan. Final values are determined by training a model on a range of parameters and selecting the set of parameters leading to the lowest loss.

The APD in liquid density is generally lower than the deviation in vapor pressure. For densities, our \(\text{SPT}_{\text{PC-SAFT}}\) model achieves a mean APD of \(3.1\,\%\). Predicted liquid densities at \(1\,\mathrm{bar}\) are shown for a range of alkanes and alcohols in Figure 8, generally demonstrating good agreement with the measured data.

### Physicality of predicted pure component parameters of PC-SAFT

One major advantage of the PC-SAFT model is the physical basis of its parameters. Thus, any predictive model should retain this physicality. We preserve the physical meaning of the predicted pure component parameters by introducing the polarity and association likelihoods (see Section 2.1.3). Table 2 provides an overview of selected pure component parameters of PC-SAFT predicted by \(\text{SPT}_{\text{PC-SAFT}}\). The pure component parameters \(m\), \(\sigma\), and \(\varepsilon\) are predicted within anticipated ranges. The chain length parameter \(m\) increases along the homologous series, while the segment diameter \(\sigma\) and interaction energy \(\varepsilon\) are similar for molecules in the same chemical family. The association is accurately identified for alcohols, and polarity is properly assigned to ethers. On the one hand, 1-ethoxypentane is assigned a dipole moment of \(2.5\,\mathrm{D}\) with a polarity likelihood of nearly 1. On the other hand, 1,2-diethoxymethane exhibits no dipole moment due to its higher symmetry, as correctly recognized by \(\text{SPT}_{\text{PC-SAFT}}\). Consequently, the predicted parameters seem physically plausible. The Supporting Information (SI) presents the receiver operating characteristic (ROC) curves of the association and polarity likelihood parameters, illustrating the trade-off between true positives and false positives. \(\text{SPT}_{\text{PC-SAFT}}\) achieves a \(100\,\%\) true positive rate for associating molecules and approximately a \(90\,\%\) true positive rate for polarity. Given that we impose a binary classification on the normally continuous spectrum of polarity, a \(100\,\%\) true positive rate is not expected. Therefore, our model architecture enables \(\text{SPT}_{\text{PC-SAFT}}\) to accurately learn when molecules exhibit associating or polar interactions and to assign appropriate pure component parameters. 
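ROC curves of this kind can be produced along the following lines with scikit-learn; the labels and likelihood scores below are toy placeholders, not our data:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# y_true: binary polarity labels from the 0.35 D threshold;
# y_score: predicted polarity likelihoods (sigmoid outputs of the head).
y_true = np.array([0, 0, 1, 1, 1, 0])
y_score = np.array([0.10, 0.40, 0.80, 0.95, 0.60, 0.20])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc(fpr, tpr):.2f}")  # trade-off of true vs. false positives
```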
### Comparison to homo-segmented GC method and recent ML models

To assess the predictive capabilities of our method, we compare it to the homo-segmented group contribution method proposed by Sauer et al. (2014), in the following called GC-Sauer. The group contribution method by Sauer et al. (2014) calculates the PC-SAFT parameters from the contributions of individual functional groups. We define two sets of molecules that differ in the breadth of the molecular space: The _interpolation set_ contains molecules that belong to the chemical families that Sauer et al. (2014) used to parameterize the GC method (branched alkanes, alkenes, 1-alkynes, alkylbenzenes, alkylcyclohexanes, alkylcyclopentanes, ethers, aldehydes, formates, esters, ketones, 1-alcohols, and 1-amines), but only containing a maximum of one functional group, as in Sauer et al. (2014). The _interpolation set_ thus likely contains many of the molecules on which the GC method was originally fitted, so the GC-Sauer method enjoys a maximum advantage in the comparison. The _extrapolation set_ contains molecules outside of these chemical families that can still be fragmented into the groups defined by Sauer et al. (2014) but that do not contain more than one polar or associating group, to not extrapolate the GC-Sauer method too far. The _extrapolation set_ contains important molecules like cyclohexylamine or phenyl acetate that are difficult to describe accurately for GC methods. In total, the _interpolation set_ contains \(256\) molecules and the _extrapolation set_ contains \(67\) molecules.

Figure 7: Average percentage deviation in vapor pressure as a function of the molecular family. Molecular families are assigned according to the DIPPR database (Thomson, 1996). Of the \(870\) components in the validation set, \(609\) components could be assigned a molecular family. Green boxes show families with a median APD more than \(2.5\,\%\) below the overall mean APD of \(13.5\,\%\); red boxes show families with a median APD more than \(2.5\,\%\) above the overall mean APD.

The comparison between SPT\({}_{\mathrm{PC-SAFT}}\) and GC-Sauer on the two sets of molecules indicates a substantial difference in performance when extrapolating beyond the _interpolation set_ (Figure 9): While the GC method performs decently within the _interpolation set_, with a mean APD of \(12.8\,\%\) compared to \(7.3\,\%\) of SPT\({}_{\mathrm{PC-SAFT}}\) for the vapor pressure, it falls short when extrapolating to more complex molecules, resulting in a much larger mean APD of \(48.0\,\%\) compared to \(11.1\,\%\) for SPT\({}_{\mathrm{PC-SAFT}}\). Similar performance benefits are observed for SPT\({}_{\mathrm{PC-SAFT}}\) in predicting liquid densities: For the _interpolation set_, SPT\({}_{\mathrm{PC-SAFT}}\) has a mean APD of \(4.0\,\%\) compared to \(6.4\,\%\) of GC-Sauer and, for the _extrapolation set_, \(3.5\,\%\) compared to \(11.9\,\%\) of GC-Sauer. Compared to the recently published methods by Felton et al. (2023) and Habicht et al. (2023), SPT\({}_{\mathrm{PC-SAFT}}\) also compares favorably. However, since there is no consistent validation set used across the studies, there is some uncertainty in this comparison. The reported average relative percentage errors in vapor pressures by Felton et al. (2023) are \(39\,\%\), based on a dataset similar to our clean dataset, compared to the SPT\({}_{\mathrm{PC-SAFT}}\) mean APD of \(13.5\,\%\). Habicht et al. 
(2023) report average relative percentage deviations below \(20\,\%\) for many molecular families; however, their method is limited to non-polar, non-associating molecules, for which SPT\({}_{\mathrm{PC-SAFT}}\) has a mean deviation of \(10\,\%\). Overall, the better performance of SPT\({}_{\mathrm{PC-SAFT}}\) might lie in the direct training on experimental data rather than on previously fitted PC-SAFT parameters. Thus, SPT\({}_{\mathrm{PC-SAFT}}\) is able to use a larger amount of data points and avoids error accumulation via the additional regression step. Our results demonstrate that the much simpler GC method of Sauer et al. (2014) performs reasonably well for molecules similar or equal to those to which it was parameterized, but its extrapolation capabilities are limited for more complex molecules. To cover a more comprehensive molecular space without manually defining an extensive set of (potentially higher-order) groups, an approach that captures the complexities of molecules, like SPT\({}_{\mathrm{PC-SAFT}}\), is required. Moreover, even compared to more complex and recent machine learning approaches, SPT\({}_{\mathrm{PC-SAFT}}\) compares favorably.

\begin{table} \begin{tabular}{l l r r r r r r} \hline \hline Name & SMILES & \(m\) & \(\sigma\) (Å) & \(\varepsilon/k\) (K) & \(\mu\) (D) & \(\kappa^{AB}\) & \(\varepsilon^{AB}/k\) (K) \\ \hline butane & CCCC & \(2.3\) & \(3.7\) & \(224\) & & & \\ hexane & CCCCCC & \(2.9\) & \(3.9\) & \(244\) & & & \\ octane & CCCCCCCC & \(3.6\) & \(3.9\) & \(248\) & & & \\ 1-butanol & CCCCO & \(3.2\) & \(3.5\) & \(247\) & & \(0.006\) & \(2409\) \\ 1-hexanol & CCCCCCO & \(3.7\) & \(3.6\) & \(258\) & & \(0.005\) & \(2498\) \\ 1-ethoxypentane & CCCCCOCC & \(3.9\) & \(3.7\) & \(236\) & \(2.5\) & & \\ 1,2-diethoxymethane & CCOCOCC & \(3.6\) & \(3.5\) & \(231\) & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Examples of pure component PC-SAFT parameters predicted by SPT\({}_{\mathrm{PC-SAFT}}\).

Figure 8: Prediction of molar density of C4 to C10 alkanes and alcohols at \(1\,\mathrm{bar}\) over a range of temperatures using SPT\({}_{\mathrm{PC-SAFT}}\) (lines). Experimental data (crosses) are taken from the DDB.

### Differentiation of stereoisomers

Stereoisomers are molecules that have the same molecular formula and constitution but different spatial arrangements of their bonds. Although these subtle structural differences might appear insignificant, they can substantially impact the properties of isomers in some cases. GC methods often struggle to capture these differences in stereoisomers, as they would require large higher-order groups to differentiate between them. However, SPT\({}_{\mathrm{PC-SAFT}}\) utilizes isomeric SMILES as input, enabling the model to distinguish between stereoisomers. Unfortunately, our validation data contains only 35 pairs of stereoisomers, the majority of which exhibit no significant difference in vapor pressure. Therefore, we assess the prediction of stereoisomers based on individual examples; a comprehensive statistical analysis has to be performed as soon as more data on stereoisomers is available. For four example isomer pairs, i.e., the _cis_ and _trans_ isomers of 1,1,1,4,4,4-hexafluorobutene, stilbene, 2-hexene, and 2-hexenedinitrile, the predicted vapor pressures are shown in Figure 10. Due to their different polarity, the isomers of 1,1,1,4,4,4-hexafluorobutene and stilbene have measurably different vapor pressures. 
SPT\({}_{\mathrm{PC-SAFT}}\) is able to predict the trend in vapor pressures, which is remarkable considering that the majority of isomer pairs in the training data are similar to 2-hexene, which shows no significant difference between the two isomers. However, 2-hexenedinitrile presents a challenge for the model, as it fails to distinguish between the isomers even though there is a difference in vapor pressure between the _cis_ and _trans_ versions. When and why SPT\({}_{\mathrm{PC-SAFT}}\) fails to distinguish specific isomers should be subject to further research. We observed some instances within our training data of likely mislabeling between isomers, which may impede the model's performance. Overall, the results concerning stereoisomer differentiation are encouraging, but more and better data on stereoisomers is required to unlock the full capability of the model.

### Publication of predicted pure component parameters

While the current SPT\({}_{\mathrm{PC-SAFT}}\) model is efficient and straightforward to set up, executing machine learning models can still present a barrier to entry when only single components are of interest. To enhance the accessibility of our model, we have predicted pure component parameters of PC-SAFT for millions of components, as we have done previously with a set of 100 million NRTL parameters (Winter et al., 2023). Predicted pure component parameters of PC-SAFT are available for all \(13\,645\) molecules contained in our training set. By making these pre-computed pure component parameters available, we aim to facilitate broader adoption and utilization of the PC-SAFT equation of state across various applications and to allow for exploring vast molecular spaces.

Figure 9: Cumulative deviation plot of the average percentage deviations of the molecules in the _extrapolation_ and _interpolation sets_ for predictions of (a) vapor pressures \(p^{\mathrm{vap}}\) and (b) liquid densities \(\rho^{\mathrm{L}}\). The predictive performance of both models is lower on the extrapolation dataset, where SPT outperforms GC-Sauer significantly.

## 4 Conclusion

In this study, we introduce the machine-learning model \(\text{SPT}_{\text{PC-SAFT}}\), which can predict thermodynamic equilibrium properties using the PC-SAFT equation of state and the corresponding pure component parameters of PC-SAFT from the SMILES code of a molecule. \(\text{SPT}_{\text{PC-SAFT}}\) is a modification of the SMILES-to-Properties-Transformer (SPT) (Winter et al., 2022) and overcomes challenges posed by the complexity of the PC-SAFT equation of state while preserving the physical meaning of its parameters. Our model demonstrates excellent predictive performance on a validation set of \(870\) components, achieving a mean APD of \(13.5\,\%\) for vapor pressures and \(3\,\%\) for liquid densities. Remarkably, \(99.6\,\%\) of the predictions fall within a factor of 2, indicating a minimal presence of outliers. Compared to the homo-segmented group contribution method of PC-SAFT by Sauer et al. (2014), our \(\text{SPT}_{\text{PC-SAFT}}\) model provides significantly higher-quality predictions for both vapor pressures and liquid densities and compares favorably to more recent ML models. In particular, for more complex molecules, the prediction accuracy of \(\text{SPT}_{\text{PC-SAFT}}\) is four times higher than that of the group contribution method. Moreover, our model can differentiate between stereoisomers, highlighting its potential for capturing subtle molecular effects. 
Figure 10: Pressure-temperature plots of the isomer pairs 1,1,1,4,4,4-hexafluorobutene, stilbene, 2-hexene, and 2-hexenedinitrile.

We believe that SPT\({}_{\text{PC-SAFT}}\) offers a versatile and robust approach for predicting equilibrium thermodynamic properties and the corresponding pure component parameters of PC-SAFT, allowing for applications in thermodynamics, process engineering, and materials science. To make our model more accessible to researchers and industry professionals, we have precomputed pure component parameters of PC-SAFT for a large number of components. The SPT\({}_{\text{PC-SAFT}}\) model presents a significant advancement in the prediction of equilibrium properties and the corresponding pure component parameters of PC-SAFT. By leveraging machine learning techniques, our model offers improved accuracy in predicting the properties of various molecules while being capable of handling complex molecular structures and subtle differences between isomers. The availability of precomputed pure component parameters of PC-SAFT will further facilitate the adoption of our model and enable its use in a broad range of research and industry applications.

## Acknowledgments

B.W. and A.B. acknowledge funding by NCCR Catalysis, a National Centre of Competence in Research funded by the Swiss National Science Foundation, grant number 180544. P.R. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), grant number 497566159.
2309.04266
Locating Buggy Segments in Quantum Program Debugging
When a bug is detected by testing a quantum program on a quantum computer, we want to determine its location to fix it. To locate the bug, the quantum program is divided into several segments, and each segment is tested. However, to prepare a quantum state that is input to a segment, it is necessary to execute all the segments ahead of that segment in a quantum computer. This means that the cost of testing each segment depends on its location. We can also locate a buggy segment only if it is confirmed that there are no bugs in all segments ahead of that buggy segment. Since a quantum program is tested statistically on the basis of measurement results, there is a tradeoff between testing accuracy and cost. Although these characteristics are unique to quantum programs and complicate locating bugs, they have not been investigated. We suggest for the first time that these characteristics should be considered to efficiently locate bugs. We are also the first to propose a bug-locating method that takes these characteristics into account. The results from experiments indicate that the bug-locating cost, represented as the number of executed quantum gates, can be reduced with the proposed method compared with naive methods.
Naoto Sato, Ryota Katsube
2023-09-08T11:25:04Z
http://arxiv.org/abs/2309.04266v3
# Locating Buggy Segments in Quantum Program Debugging

###### Abstract

When a bug is detected by testing a quantum program on a quantum computer, we want to determine its detailed location to fix it. To locate the bug, the quantum program is divided into several segments and each segment is tested. However, to prepare a quantum state that is input to a segment, it is necessary to execute all the segments ahead of that segment in a quantum computer. This means that the cost of testing each segment depends on its location. We can also locate a buggy segment only if it is confirmed that there are no bugs in all segments ahead of that buggy segment. Since a quantum program is tested statistically on the basis of measurement results, there is a tradeoff between testing accuracy and cost. Although these characteristics are unique to quantum programs and complicate locating bugs, they have not been investigated. We suggest for the first time that these characteristics should be considered to efficiently locate bugs. We are also the first to propose a bug-locating method that takes these characteristics into account. The results from experiments indicate that the bug-locating cost, represented as the number of executed quantum gates, can be reduced with the proposed method compared with naive methods.

## I Introduction

The field of quantum software engineering has developed rapidly [14][15][16][17][18][19]. Research on testing, verification, and debugging of quantum programs began with a typology of bugs [14][15][16]. On the basis of these studies, the application of classical methods to quantum programs has been proposed [15]. When a bug is detected by testing a quantum program on a quantum computer, we divide the program into several segments and test each segment to determine the detailed bug location. To find the bug in the test of each segment, it is necessary to prepare the actual quantum states that would be input to each segment when the entire quantum program is executed. In a quantum computer, to prepare the actual quantum state for a segment, all segments ahead of that segment should be executed on the initial state that the quantum computer physically forms [18]. As a result, the cost of testing a segment depends on its location. Even if a bug is detected in the test of a segment, it does not necessarily mean that the bug is in that segment, because the bug may be in other segments ahead of it. Therefore, to locate a buggy segment on a quantum computer, we have to confirm that there is no bug in any segment ahead of it. From another perspective, the testing of each segment is conducted on the basis of measurements. Since a sufficient number of measurements is necessary for testing with sufficient accuracy, there is a tradeoff between testing accuracy and its cost. These characteristics are unique to quantum programs and complicate locating bugs; however, they have not been mentioned in previous studies. The first contribution of this paper is to clarify for the first time the characteristics that should be considered to efficiently locate a buggy segment in quantum programs on a quantum computer. The second contribution is that we present a bug-locating method for quantum programs. We implemented the proposed method and conducted experiments to demonstrate its efficiency. 
## II Background

### _Quantum Program Testing_

A qubit, which is a variable in a quantum program, is in a superposition of the basis states \(|0>\) and \(|1>\). When a qubit is measured, 0 or 1 is observed probabilistically, depending on its superposition state. The state of a qubit can be expressed as \(|\psi>=a_{0}|0>+a_{1}|1>\) (\(|a_{0}|^{2}+|a_{1}|^{2}=1\)), where \(a_{0}\) and \(a_{1}\) are complex numbers called amplitudes. The absolute squares of the amplitudes, \(|a_{0}|^{2}\) and \(|a_{1}|^{2}\), represent the probabilities of obtaining 0 and 1 by a measurement. Generally, an arbitrary quantum state consisting of \(n\) qubits is represented by \(2^{n}\) basis states. Figure 1 is an example of a quantum program implementing Grover's algorithm. The quantum program is represented by a model called a quantum circuit. Each horizontal line corresponds to a qubit, and the operations on them, i.e., the quantum gates, are arranged from left to right. We assume a quantum program so large that it cannot be simulated in practical time on a classical computer. Since the motivation for using a quantum computer is to solve complex problems that cannot be solved by a classical computer, this assumption is natural.

Fig. 1: Quantum program divided into segments

To test a quantum program on a quantum computer, the output state is measured many times. Quantum information derived from the measurement results is then statistically compared with the test oracle. If they differ, the quantum program is statistically determined to have a bug. A simple testing method compares the absolute squares of the amplitudes, which can be directly derived from the measurement results. For a more rigorous test, the density matrix, which is calculated by quantum state tomography, maximum likelihood estimation, or Bayesian estimation, may also be useful [11][12][13]. We propose a method for locating a buggy segment that is based on these statistical testing methods. The actual testing method is outside the scope of this paper; thus, we use a simple testing method comparing the absolute squares of the amplitudes in our experiment in Section V. 
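As a minimal illustration of such a statistical test (a sketch of the idea, not the implementation of [14]), the measured counts can be compared against the oracle probabilities \(|a_{b}|^{2}\) with a chi-square test; all names below are our own, and low-count corrections are ignored:

```python
import numpy as np
from scipy.stats import chisquare

def test_segment(counts: dict, oracle_probs: dict, alpha: float = 0.05):
    """Chi-square test of measured basis-state counts against the
    oracle probabilities |a_b|^2. Returns (bug_suspected, p_value)."""
    shots = sum(counts.values())
    basis = sorted(set(counts) | set(oracle_probs))
    observed = np.array([counts.get(b, 0) for b in basis], dtype=float)
    expected = np.array([shots * oracle_probs.get(b, 0.0) for b in basis])
    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    return p_value < alpha, p_value

# Example: a 1-qubit segment whose oracle is the 50/50 state |+>.
counts = {"0": 480, "1": 544}
oracle = {"0": 0.5, "1": 0.5}
print(test_segment(counts, oracle))  # (True, p ~ 0.046): bug suspected
```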
### _Related Work_

Various classical testing methods have been proposed for application to quantum programs. Huang et al. proposed statistical assertions based on the chi-square test [14]. Li et al. presented runtime assertions using a projective measurement that stabilizes the tested quantum state [15]. Liu et al. introduced another runtime assertion by adding an extra (ancilla) qubit to collect the information of the quantum state [16]. Other classical testing methods proposed for application to quantum programs include fuzzing [17], mutation testing [18][19], search-based testing [17][20], combinatorial testing [17], coverage testing [1][21][22], metamorphic testing [1][23][24], property-based testing [19], and equivalence class testing [18]. Muqeet et al. proposed leveraging machine learning to improve the accuracy of testing under noise effects [25]. We use the testing method suggested by Huang et al. [14] to test each segment in our experiment. Focusing on debugging quantum programs, Miranskyy et al. discussed the applicability of classical debugging strategies to quantum programs [12]. Liu et al. suggested using the information obtained from assertion tests for debugging [16]. Zhao et al. introduced bug patterns of quantum programs and a static analysis tool based on the patterns [15]. Li et al. proposed a debugging scheme to locate bugs by injecting assertions into a quantum program [15]. They also suggested that to show that a segment has a bug, it is necessary to show that all segments ahead of it do not have a bug. However, this was not recognized as a factor that complicates bug locating. Taking it into account, we clarify the characteristics that should be considered to efficiently locate bugs.

## III Characteristics of Quantum Program Testing for Bug Locating

When a bug is detected in a quantum program, we divide the program into several segments and test each segment to determine the detailed bug location. To find the bug in the test of each segment, the actual quantum states that would be input to each segment when the entire quantum program is executed should be given. However, unlike a classical computer, a quantum computer cannot easily prepare a desired quantum state. If we want to prepare the actual input state of a segment, all segments ahead of the segment should be executed. This means that when testing a backward segment, more quantum gates are executed than when testing a forward segment. That is, the first characteristic [**C1**] is that the cost of testing a segment depends on its location. It is also implied that if we test a segment and a bug is detected, it is possible that the bug is not in the segment but in another segment ahead of it. Therefore, the second characteristic [**C2**] is that we can locate a buggy segment only if it is confirmed that there is no bug in all segments ahead of the buggy segment. This suggests another characteristic: if a bug is not detected in the test of a segment, it can be assumed that the segments ahead of it are also bug-free (more precisely, contain no bug affecting the output of the segment). Conversely, if a bug is detected, the same bug should also be detected in the tests of the segments behind it. Accordingly, the third characteristic [**C3**] is that the test results of the segments are not independent. As described in Section II-A, the test of each segment is statistically conducted on the basis of the measurement results. A sufficient number of measurements is necessary for testing with sufficient accuracy. This indicates the fourth characteristic [**C4**]: there is a tradeoff between testing accuracy and its cost. In the chi-square test that we use in our experiments in Section V, accuracy is indicated by the significance level and the power. The power of the test, described as \(1-\beta\), is the probability of correctly detecting the presence of a bug, where \(\beta\) is the Type II error rate. The power depends on the reliability of a sample, which is represented by the standard error of the sample mean \(\sqrt{\sigma^{2}/M}\), where \(\sigma^{2}\) is the unbiased estimate of the population variance and \(M\) is the sample size [1]. Since the power depends not only on the sample size but also on the significance level, which corresponds to the Type I error rate, the significance level also affects the required sample size. This means that test accuracy depends on the number of samples, that is, measurements. Although these four characteristics affect the efficiency of locating bugs, they have not been mentioned in previous studies. 
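The accuracy/cost tradeoff of [C4] can be made concrete with a back-of-the-envelope calculation; the library choice (statsmodels) and the example effect sizes below are our own illustration, not part of the proposed method:

```python
from statsmodels.stats.power import GofChisquarePower

# Measurements M needed for a chi-square goodness-of-fit test at
# significance alpha = 0.05 and power 1 - beta = 0.8, as a function of
# Cohen's effect size w: a smaller deviation in the output distribution
# (a "smaller" bug) needs far more measurements to detect reliably.
solver = GofChisquarePower()
for w in (0.5, 0.3, 0.1):
    M = solver.solve_power(effect_size=w, nobs=None,
                           alpha=0.05, power=0.8, n_bins=2)
    print(f"effect size w = {w}: M ~ {M:.0f} measurements")
```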
## IV Proposed Method

The proposed method consists of _cost-based binary search_, _early determination_, _finalization_, and _looking back_.

### _Cost-based Binary Search_

Binary search is an efficient array search algorithm [12], in which the location of the target value is recursively narrowed down by comparing the middle element of the array with the target value. By selecting the central element of the array as the middle element, binary search ensures that the search costs of the left and right subarrays are as similar as possible. We believe that binary search is also effective in locating buggy segments in quantum programs. However, as [C1] states, the cost of a test depends on the position of the segment. Let \(S_{l}\) be a sequence of segments of length \(l\). If a segment \(s_{x}\) (\(1\leq x\leq l-1\)) is the middle element, the segment sequences from \(s_{1}\) to \(s_{x}\) and from \(s_{x+1}\) to \(s_{l}\) are called the left sequence and right sequence in terms of \(s_{x}\), respectively. If the central segment is selected as the middle element, the search costs of the left and right sequences are not similar. Therefore, we use _cost-based binary search_, in which the middle element is selected on the basis of the testing cost. We define the testing cost \(c_{x}\) as the number of quantum gates to be executed in the test of segment \(s_{x}\), that is, \(c_{x}=\sum_{i=1}^{x}g_{i}\), where \(g_{i}\) denotes the number of quantum gates in \(s_{i}\). The middle element \(s_{x}\) is selected so that the highest total testing costs for searching the left and right sequences are as similar as possible. Therefore, the index \(x\) of the middle element \(s_{x}\) is given as \[\mathop{\arg\min}_{1\leq x\leq l-1}\left|\sum_{i=1}^{x-1}c_{i}-\sum_{i=x+1}^{l-1}c_{i}\right|.\] An example of a cost-based binary search tree is shown in Figure 2. A node in the tree is associated with a segment sequence to be searched, which we call a target sequence, and the output state of the middle element is tested (represented with the dashed line). Note that when testing a target sequence, all segments up to the middle element are executed, not the target sequence itself. We call this the executed sequence. In the example in Figure 2, at node \(n_{4}\), the target sequence consists of \(s_{3}\) and \(s_{4}\), and the executed sequence is from \(s_{1}\) to \(s_{3}\). If a bug is detected from the test at node \(n_{i}\), we move to the left child node via the edge \(e_{i}^{L}\). Otherwise, we transit to the right child node via the edge \(e_{i}^{R}\). 
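A direct implementation of this selection rule could look as follows (a sketch; `gate_counts` holds \(g_{1},...,g_{l}\) and the returned index is 1-based as in the text):

```python
def select_middle(gate_counts):
    """Middle element s_x of a target sequence of length l: testing s_x
    costs c_x = g_1 + ... + g_x executed gates, and x is chosen so the
    summed costs of searching the left and right sequences balance
    (the arg-min formula above)."""
    l = len(gate_counts)
    c = []                                   # c[i] corresponds to c_{i+1}
    total = 0
    for g in gate_counts:
        total += g
        c.append(total)
    best_x, best_gap = 1, float("inf")
    for x in range(1, l):                    # candidates 1 <= x <= l-1
        left = sum(c[:x - 1])                # c_1 + ... + c_{x-1}
        right = sum(c[x:l - 1])              # c_{x+1} + ... + c_{l-1}
        gap = abs(left - right)
        if gap < best_gap:
            best_x, best_gap = x, gap
    return best_x

print(select_middle([10, 5, 20, 8, 12]))     # -> 3 for these gate counts
```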
### _Search Strategy_

#### IV-B1 Early Determination

In accordance with the binary search tree, we search for the buggy segment. On the basis of [C4], we introduce an approach to reduce the search cost, called _early determination_. It is based on the assumption that statistically sufficient accuracy is over-performance for merely proceeding with the search, and that it may be more efficient to reduce the number of measurements by taking the risk of having to return in the search. As another motivation for _early determination_, we focus on the "reinforcement" relation between determinations, which is based on [C3]. A search path from node \(n_{1}\) to \(n_{k}\) is denoted as a sequence of edges \([e_{1}^{d_{1}},...,e_{i}^{d_{i}},e_{i+1}^{d_{i+1}},...,e_{k-1}^{d_{k-1}}]\). Assume that there is no bug in the executed sequence \(S_{i}\) of \(n_{i}\), which corresponds to the null hypothesis of the test at \(n_{i}\). The Type I error rate of the test at each node is denoted as \(\alpha\). If we determine at \(n_{i}\) that there is a bug in \(S_{i}\), that is, \(d_{i}=L\), the probability of making this determination under the null hypothesis is \(\alpha\). In accordance with the structure of the search tree, the executed sequence \(S_{i+1}\) of \(n_{i+1}\) is included in the executed sequence \(S_{i}\) of \(n_{i}\). This means that there is also no bug in \(S_{i+1}\) under the null hypothesis. Therefore, if we determine \(d_{i+1}=L\) at \(n_{i+1}\), the probability is also \(\alpha\) under the null hypothesis of \(n_{i}\). Finally, the probability of determining \(d_{i}=d_{i+1}=L\) is \(\alpha^{2}\). This means that by proceeding from \(n_{i}\) to \(n_{i+1}\), the null hypothesis of \(n_{i}\) can be rejected more certainly at \(n_{i+1}\) than at \(n_{i}\), and rejecting the null hypothesis corresponds to the determination \(d_{i}=L\). We call this relation "\(e_{i+1}^{L}\) reinforces \(e_{i}^{L}\)". In the example in Figure 2, \(e_{2}^{L}\) reinforces \(e_{1}^{L}\). The same applies for \(d_{i}=d_{i+1}=R\). The fact that past determinations may be confirmed later motivates _early determination_. However, an incorrect determination is not reinforced later; thus, we introduce approaches to modify incorrect determinations in Sections IV-B2 and IV-B3.

#### IV-B2 Finalization

_Early determination_ is based on the assumption that sufficient accuracy is not necessary while advancing the search. When finally locating a buggy segment, however, the test should be conducted with sufficient accuracy. Therefore, on the basis of [C2], the proposed method executes _finalization_ when the binary search reaches a leaf node and locates the buggy segment \(s_{x}\). _Finalization_ consists of the tests of \(s_{x-1}\) and \(s_{x}\) with sufficient accuracy. It should be confirmed that the segments from \(s_{1}\) to \(s_{x-1}\) do not contain a bug from the test of \(s_{x-1}\), and that the segments from \(s_{1}\) to \(s_{x}\) do contain a bug from the test of \(s_{x}\), with sufficient accuracy. If _finalization_ reveals an incorrect determination at a node, the search returns to that node.

#### IV-B3 Looking Back

In addition to _finalization_, we introduce _looking back_ to modify incorrect determinations. First, we show that we only need to focus on the last \(L\) edge in a search path if we have incorrectly determined that there is a bug. Assume that the binary search is executed from \(n_{1}\) to \(n_{k}\) with the path \(p=[e_{1}^{d_{1}},...,e_{h}^{L},...,e_{i}^{L},e_{i+1}^{R},...,e_{k-1}^{R}]\), in which \(e_{h}^{L}\) is an arbitrary \(L\) edge from \(n_{1}\) to \(n_{i}\) (\(1\leq h\leq i-1\)) and \(e_{i}^{L}\) is the last \(L\) edge. Since the executed sequence \(S_{i}\) of \(n_{i}\) is included in the executed sequence \(S_{h}\) of \(n_{h}\), if there is no bug in \(S_{h}\), there is also no bug in \(S_{i}\). That is, if \(e_{h}^{L}\) is incorrect, \(e_{i}^{L}\) is also incorrect. Therefore, when we want to confirm whether there is an incorrect \(L\) edge in the path from \(n_{1}\) to \(n_{i}\), we only need to check whether the last \(L\) edge, \(e_{i}^{L}\), is incorrect. Next, we focus on the successor \(e_{i+1}^{R},...,e_{k-1}^{R}\). If this run of \(R\) edges is long, it suggests that \(e_{i}^{L}\) is likely to be incorrect: if \(e_{i}^{L}\) is incorrect, that is, \(S_{i}\) does not include a bug, the executed sequences \(S_{i+1},...,S_{k-1}\) also do not include a bug. In that case, \(R\) edges appear in succession if the determinations are executed correctly. 
Therefore, if \(R\) edges appear more than a certain number of times in succession, we should suspect that \(e_{i}^{L}\) is incorrect. _Looking back_ confirms the determination of the last \(L\) edge by a test with sufficient accuracy. If the determination turns out to be incorrect, the search returns to that node. Similarly, if \(L\) edges appear in succession, the last \(R\) edge is looked back to.

## V Experiment

We implemented our proposed method and applied it to arbitrarily generated quantum programs. In each program, a bug was injected into a segment. In testing each segment, as in a previous study [19], we use the chi-square test, which statistically compares the absolute squares of the amplitudes with their oracle. We also implemented a naive linear search and a naive (non-cost-based) binary search, in which the central segment is selected as the middle element, as comparison methods. With these naive methods, the chi-square test is also used for testing segments with sufficient accuracy, but the approaches described in Section IV are not applied. In the chi-square test, the p-value and the power of the test are referred to as accuracy indicators. When determining that there is a bug with sufficient accuracy, the thresholds of the p-value and power are 0.05 and 0.8, respectively. When applying _early determination_, only the p-value is referred to, and its threshold is relaxed to 0.2. Similarly, when determining the absence of a bug, the threshold of the p-value with sufficient accuracy is set to 0.8, and it is relaxed to 0.6 in _early determination_. The power is not referred to when determining the absence of a bug because it is similar to the significance level when there is no bug. _Looking back_ is executed when the same kind of edge (\(R\) or \(L\) edge) appears three times in succession. At each node, measurements are repeated until these indicators exceed the thresholds, but an upper limit is defined for the number of measurements. If the upper limit is reached, the search fails. The experiment was conducted through simulation on a classical computer using Qiskit [10]. The experimental results are listed in Table I. By generating 100 quantum programs for each row, we evaluated the probability and average cost (the average number of executed quantum gates) of successfully locating a buggy segment. The standard deviation of the cost was also calculated. Table I shows that the costs and standard deviations of our method are lower than those of the naive methods. The results for the standard deviations indicate that the search costs are more equalized by _cost-based binary search_. The results also show that the success probabilities of the proposed method are higher than those of the naive methods. If the difference in the output state caused by the bug is small, it is difficult to determine the presence of the bug by a test with sufficient accuracy. In that case, it is likely that the number of measurements reaches the upper limit and the search fails there. Since the proposed method uses _early determination_, the search is more likely to proceed before the number of measurements reaches the upper limit than with the naive methods. The experimental results indicate that _early determination_ also contributes to the improvement of the success probabilities. 
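The per-node decision logic stated above can be condensed into the following simplified sketch (our own summary of the thresholds; the actual experiment additionally enforces the upper limit on the number of measurements):

```python
def decide(p_value, power, early_determination: bool):
    """Per-node test decision with the thresholds used in Section V.
    Returns 'bug' (take edge L), 'no_bug' (take edge R), or 'continue'
    (perform more measurements)."""
    if early_determination:
        # Relaxed thresholds; the power is not referred to.
        if p_value < 0.2:
            return "bug"
        if p_value > 0.6:
            return "no_bug"
    else:
        # Sufficient accuracy (used in finalization and looking back).
        if p_value < 0.05 and power >= 0.8:
            return "bug"
        if p_value > 0.8:
            return "no_bug"
    return "continue"
```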
## VI Discussion

The basic idea of _early determination_ is to reduce the number of measurements by accepting the risk of having to return in the search. This section discusses the probability of such a return on the basis of Bayes' theorem. Assume that the quantum program is divided into \(l\) segments and the search is executed from \(n_{1}\) to \(n_{k}\) with the path \(p=[e_{1}^{d_{1}},...,e_{i}^{d_{i}},...,e_{k-1}^{d_{k-1}}]\). At node \(n_{i}\), the segment \(s_{x}\) (\(1\leq x\leq l-1\)) is tested and the executed sequence is \(S_{i}\). When _early determination_ is applied at each node, the Type I and Type II error rates of a statistical test are denoted as \(\alpha\) and \(\beta\), respectively. The case \(d_{i}=L\) is described as follows, but the same applies for \(d_{i}=R\). First, we consider the prior probability \(P(B)\) that \(S_{i}\) does not have a bug. For the sake of simplicity, we assume the program contains only one bug and each segment has an equal chance of containing it. \(P(B)\) is then expressed as \((l-x)/l\), where \(x\) corresponds to the length of \(S_{i}\). Next, let \(w\) (\(w\geq 1\)) be the number of \(L\) edges from \(e_{i}^{L}\) to \(e_{k-1}^{d_{k-1}}\). The conditional probability \(P(A|B)\) that the search follows path \(p\) from \(n_{i}\) to \(n_{k}\) when \(S_{i}\) contains no bug is expressed as \(\alpha^{w}(1-\alpha)^{k-i-w}\). The marginal probability \(P(A)\) that the search reaches \(n_{k}\) from \(n_{i}\) along \(p\) is then calculated as a sum of the probabilities for each bug location. For the sake of simplicity, instead of the actual \(P(A)\), we use its largest term \((1-\beta)^{w}(1-\alpha)^{k-i-w}\), which is the probability that the buggy segment is correctly narrowed down at \(n_{k}\). Finally, the posterior probability \(P(B|A)\) that the search returns from \(n_{k}\) to \(n_{i}\) is expressed as \[P(B|A)\simeq\frac{\alpha^{w}(1-\alpha)^{k-i-w}\,(l-x)/l}{(1-\beta)^{w}(1-\alpha)^{k-i-w}}.\]

Fig. 2: Example of cost-based binary search tree

Since \(\alpha\) is much smaller than \(1\), focusing on \(\alpha^{w}\), we see that \(P(B|A)\) decreases exponentially as \(w\) increases. This indicates that the presence of further \(L\) edges after \(e_{i}^{L}\) decreases the probability of returning to \(n_{i}\). In Section IV-B1, we interpreted this as the reinforcement relation of edges, which is the basis for _early determination_. If \(w=1\), that is, only \(R\) edges appear after \(e_{i}^{L}\), the probability of returning to \(n_{i}\) does not decrease. In this case, \(e_{i}^{L}\) will be checked by _looking back_. 
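As a numerical illustration of this estimate, note that the \((1-\alpha)\) factors cancel, leaving \(P(B|A)\simeq(\alpha/(1-\beta))^{w}\,(l-x)/l\); the parameter values in the sketch below are arbitrary examples of ours:

```python
def p_return(alpha, beta, l, x, w):
    # P(B|A) ~ alpha^w (1-alpha)^(k-i-w) ((l-x)/l)
    #          / [(1-beta)^w (1-alpha)^(k-i-w)]
    #        = (alpha / (1 - beta))**w * (l - x) / l
    return (alpha / (1.0 - beta)) ** w * (l - x) / l

# Each additional reinforcing L edge multiplies the return probability
# by alpha/(1-beta); with alpha = 0.05 and beta = 0.2, a factor of 1/16.
for w in (1, 2, 3):
    print(w, p_return(alpha=0.05, beta=0.2, l=16, x=8, w=w))
```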
## VII Conclusion and Future Plans

We presented, for the first time, four characteristics that should be considered to locate buggy segments of quantum programs on a quantum computer. We also proposed an efficient bug-locating method consisting of _cost-based binary search_, _early determination_, _finalization_, and _looking back_, and we experimentally demonstrated its efficiency. Future plans include evaluating the efficiency of each approach by applying them separately. We will also demonstrate the usefulness of the proposed method in the entire debugging process, e.g., whether the proposed method can locate multiple bugs. The proposed method should locate the most forward buggy segment; by applying it again after fixing the bug, another segment including a bug will be located. It is also necessary to conduct experiments on an actual quantum computer with real quantum programs. Improvements to the proposed method are also planned. More appropriate thresholds for the accuracy indicators used in _early determination_ can be theoretically calculated on the basis of the risk of return described in Section VI. The testing method we used in this study measures quantum states in the Z-basis and compares the absolute squares of the amplitudes with their oracle by the chi-square test. Bugs that do not appear as a difference in the amplitudes in the Z-basis, such as a difference in phase, cannot be detected. Therefore, we should consider leveraging measurements in different bases. The efficiency of the proposed method when combined with the other testing methods described in Section II-B should also be evaluated.
2309.14075
Bayesian inference of 3D densities of galactic HI and H2
Due to our vantage point in the disk of the Galaxy, its 3D structure is not directly accessible. However, knowing the spatial distribution, e.g. of atomic and molecular hydrogen gas is of great importance for interpreting and modelling cosmic ray data and diffuse emission. Using novel Bayesian inference techniques, we reconstruct the 3D densities of atomic and molecular hydrogen in the Galaxy together with (part of) the galactic velocity field. In order to regularise the infinite number of degrees of freedom and obtain information in regions with missing or insufficient data, we incorporate the correlation structure of the gas fields into our prior. Basis for these reconstructions are the data-sets from the HI4PI-survey on the 21-cm emission line and the CO-survey compilation by Dame et al. (2001) on the ($1\rightarrow0$) rotational transition together with a variable gas flow model. We present the preliminary estimated mean surface mass densities and corrections to the prior assumption of the galactic velocity field. In the future, we plan to relax assumptions on the optical thickness and include additional data to further constrain either the galactic velocity field or the gas densities.
Laurin Söding, Philipp Mertsch, Vo Hong Minh Phan
2023-09-25T12:09:58Z
http://arxiv.org/abs/2309.14075v1
# Bayesian inference of 3D densities of galactic HI and H2

###### Abstract:

Due to our vantage point in the disk of the Galaxy, its 3D structure is not directly accessible. However, knowing the spatial distribution, e.g. of atomic and molecular hydrogen gas, is of great importance for interpreting and modelling cosmic ray data and diffuse emission. Using novel Bayesian inference techniques, we reconstruct the 3D densities of atomic and molecular hydrogen in the Galaxy together with (part of) the galactic velocity field. In order to regularise the infinite number of degrees of freedom and obtain information in regions with missing or insufficient data, we incorporate the correlation structure of the gas fields into our prior. The basis for these reconstructions are the data-sets from the HI4PI-survey on the 21-cm emission line and the CO-survey compilation by Dame et al. (2001) on the (\(1\to 0\)) rotational transition, together with a variable gas flow model. We present the preliminary estimated mean surface mass densities and corrections to the prior assumption of the galactic velocity field. In the future, we plan to relax assumptions on the optical thickness and include additional data to further constrain either the galactic velocity field or the gas densities.

## 1 Introduction

In order to properly interpret measurements of cosmic rays and gamma-ray diffuse emission, it is necessary to understand the emission, propagation and absorption of radiation in the interstellar medium of the Milky Way. This medium is a complex system consisting mainly of gas, magnetic fields, interstellar radiation fields and cosmic rays, which permeate the entire Galaxy. While it makes up only a few percent of the total mass of the Galaxy (the majority is in the form of stars or dark matter), it fills out most of the available volume and thereby defines the dynamics of radiation and particles within. Due to our vantage point, the 3D-distribution of the constituents of the Galaxy is not easily determined by observations of the sky. No matter in which direction we point our telescopes, we always observe an integrated signal of radiation that has travelled an a priori unknown distance through the Galaxy. However, due to galactic rotation and peculiar motion, light that reaches us will be Doppler-shifted by a certain amount, depending on the relative velocity of its source with respect to us. This is particularly useful when looking at emission lines that have a narrow width, as it enables us to determine the relative velocity of emitter and observer very precisely. This idea is unfortunately somewhat tainted by the fact that we do not know the precise structure of the galactic velocity field; even circular rotation features a velocity ambiguity for positions within the solar circle. Any attempt to produce 3D maps of some quantity from such data will thus have to specify some rule according to which said quantity is placed when there is an ambiguity. Multiple approaches have been tried, most of them treating every line-of-sight (direction) independently, thereby missing out on a lot of information. This work attempts to produce 3D maps of the distribution of HI (atomic hydrogen) and H\({}_{2}\) (molecular hydrogen) in the Milky Way using novel Bayesian inference techniques, exploiting spatial correlations of the gas structure to regularise ambiguities. Our approach will not only yield maps of the estimated gas densities, but also uncertainty information. 
The two observational datasets used are those of the HI4PI-survey [1], mapping the 21-cm emission of atomic hydrogen in the galaxy (see figure 1), and the CO-survey compilation by [2], observing the \(1\to 0\) rotational transition of CO as a tracer for molecular clouds and thereby H\({}_{2}\) (see figure 2). This work builds on precursory reconstructions (see [3, 4]) with some key differences:

1. A different numerical grid is used, trading resolution far away from the observer for a much more refined resolution nearby.
2. The inference of galactic HI and H\({}_{2}\) is unified into a common inference process, coupled by a common galactic velocity field.
3. The galactic velocity field is partly inferred, modifying our prior assumption by adding a curl-free field.

In the following section, we will formulate this problem in a Bayesian manner and briefly describe our approach to this very high-dimensional problem. Thereafter, we will show our preliminary results, i.e. 3D-maps of the distribution of HI and H\({}_{2}\) in the galaxy.

## 2 Method

### Bayesian formulation

In the language of probabilities, we want to know the probability of the gas distribution in the galaxy (called signal \(s\)) given the data \(d\) obtained by the sky surveys. Using Bayes' law, this can be written as \[P(s|d)=\frac{P(s,d)}{P(d)}=\frac{P(d|s)P(s)}{P(d)}\;. \tag{1}\] Since the datasets we are using measured the brightness temperature \(T\) (a measure of the intensity of the observed radiation) as a function of relative line-of-sight velocity and position on the sky, we will attempt to infer the CO and HI volume emissivities \(s=(\varepsilon_{\rm HI}(\vec{x}),\varepsilon_{\rm CO}(\vec{x}))\) simultaneously and later convert them to gas densities. Equation 1 is often solved by creating a model that allows sampling from the prior distribution and computing the likelihood of a given sample. Then, algorithms like MCMC sampling can be used to probe the shape of the posterior probability. For problems with many free parameters (usually more than \(\gtrsim 10^{2}\)), this becomes computationally infeasible. A commonly used approach for high-dimensional problems is the class of so-called _Variational Inference_ (VI) methods [5]. The idea of these methods is to approximate the posterior by a family of parametric distributions, for example a multivariate Gaussian distribution. The parameters of this approximation can be determined by minimising the "distance" between the approximated posterior and the true posterior, for example via the Kullback-Leibler divergence [6]. Computing this in theory involves the inversion of the full covariance matrix of all the correlated latent variables, which, with millions or more of free parameters, is impossible even to store in common memory modules. As an approximation, it has been suggested to replace the inverse of the full covariance matrix by the inverse Fisher information metric, an approach known as Metric Gaussian Variational Inference (MGVI, [7]). This method can be applied to problems with more than \(10^{6}\) parameters while still being computationally efficient on regularly available hardware. This algorithm is implemented in an iterative scheme in the publicly available code-package nifty8\({}^{1}\). It alternates between estimating the covariance of the probability distribution with the inverse Fisher information metric at the current mean and optimising the mean of the distribution by minimising the Kullback-Leibler divergence to the true posterior with respect to the mean. 
This does not require explicitly computing the covariance matrix at any point (which would require the inversion of the Fisher information metric): instead, the Kullback-Leibler divergence is estimated stochastically by drawing samples from a Gaussian with the appropriate covariance, leading to linear scaling in the number of model parameters. This can be implemented in terms of implicit operators which apply the Fisher information metric to some vector; solving a linear system via conjugate gradient methods then yields the application of the inverse Fisher information metric to some vector (which then features the desired correlation structure).

Footnote 1: Available at [https://gitlab.mpcdf.mpg.de/ift/NIFTy](https://gitlab.mpcdf.mpg.de/ift/NIFTy)

This way, a set of samples of the posterior distribution is obtained from which the Kullback-Leibler divergence can be calculated and, in turn, minimised. The algorithm has converged once the estimate for the mean and the estimate for the covariance are self-consistent. The output of the algorithm is a set of samples of the approximated posterior distribution, implicitly containing the correlation structure between all model parameters. To apply this algorithm, we thus need:

1. A model that allows drawing prior samples from a set of latent variables, taking into account the spatial correlation structure of the 3D gas distribution (the _signal_).
2. A connection between the drawn gas realisation and the expected measurement data (the _response_).

### Gas model and Signal

Since the data from sky surveys has some fixed angular resolution, it is wise to represent the gas density on a grid that shares this property. If we chose to represent the gas density on a regular x-y-z grid (as in [3, 4]), voxels nearby would occupy almost half of the sky, while voxels far away would occupy an area on the sky much smaller than the available data resolution. In order to be consistent with the data resolution, we thus choose to represent our signal data on a HEALPix grid in angular direction and a logarithmic grid in radial direction. This ensures a high resolution nearby, where we expect to be the most sensitive to the actual gas distribution. This choice also makes the response function trivial, as the otherwise costly line-of-sight integration reduces to a simple sum along the radial direction of the grid. 
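For illustration, such a combined HEALPix/log-radial grid can be set up with healpy as follows; the NSIDE and radial binning match the values quoted in Section 3, and all variable names are ours:

```python
import numpy as np
import healpy as hp

NSIDE = 32                                   # angular resolution
npix = hp.nside2npix(NSIDE)                  # 12 * NSIDE**2 = 12288 pixels
radii = np.geomspace(0.05, 28.0, num=500)    # log-spaced shells in kpc

theta, phi = hp.pix2ang(NSIDE, np.arange(npix))  # co-latitude, azimuth
unit_vecs = hp.ang2vec(theta, phi)               # shape (npix, 3)

# Cartesian voxel centres with the observer at the origin; a line-of-sight
# integral reduces to a weighted sum over axis 0.
voxels = radii[:, None, None] * unit_vecs[None, :, :]
print(voxels.shape)                          # (500, 12288, 3)
```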
To model our prior, we generate samples of correlated lognormal random fields. These are obtained by drawing an (initially white-noise) sample \(\xi(\vec{x})\) of latent variables and correlating it using a method called Iterative Charted Refinement [8] according to a Matérn covariance function. We infer the parameters of this correlation structure at the same time as the gas density. The result is a correlated Gaussian random field \(g(\vec{x})\). Upon exponentiation, we obtain a lognormal correlation structure. This ensures positive gas densities while also allowing for the large density contrasts expected to be present in the interstellar medium. This is not yet a very good prior assumption for gas in the Galaxy, as most of the gas is tightly constrained to the galactic disk, which has a small scale height (\(\approx\)150 pc) compared to its diameter (\(\approx\)15 kpc). This can be immediately seen in the data-sets (figures 1 and 2): most of the gaseous emission is concentrated around latitude zero. The fidelity of the reconstruction can be increased by explicitly modelling the inhomogeneous large-scale variations. We therefore multiply the correlated field with a profile in z-direction and in radial direction: \[P_{z}(\vec{x})=P_{z}(z)=\exp\left(\frac{-|z|}{z_{h}}\right), \tag{2}\] \[P_{\rm rad}(\vec{x})=P_{\rm rad}(r_{\rm gal})=\exp\left(\frac{R_{\rm cutoff}-r_{\rm gal}}{R_{\rm scale}}\right),\ \mbox{for}\ r_{\rm gal}>R_{\rm cutoff},\ \mbox{else}\ 1. \tag{3}\] Using this, we obtain \[\epsilon(\vec{x})=A\cdot P_{z}(\vec{x})\cdot P_{\rm rad}(\vec{x})\cdot\exp\left(g(\vec{x})\right) \tag{4}\] for HI and H\({}_{2}\), respectively. For the HI-profile, we choose \(z_{h}=z_{h}(r_{\rm gal})=150\,\mbox{pc}\cdot\exp\left(\frac{r_{\rm gal}-R_{\odot}}{9.8\,\mbox{kpc}}\right)\) for \(r_{\rm gal}>5\,\mbox{kpc}\), \(R_{\rm scale}=3.15\,\mbox{kpc}\) and \(R_{\rm cutoff}=7.0\,\mbox{kpc}\), as suggested by [9]. For the H\({}_{2}\)-profile, we use \(z_{h}=50\,\mbox{pc}\), \(R_{\rm scale}=1.0\,\mbox{kpc}\) and \(R_{\rm cutoff}=8.0\,\mbox{kpc}\). This does not prevent the inference from reconstructing fields that differ from this profile, but ensures that the drawn prior samples feature a galactic-disk-like gas distribution.

### Data and Response

The second ingredient is the response function, which connects the signal to the observation by modelling the generation of synthetic data from a signal sample. For simplicity, we work in the optically thin limit and ignore any absorption effects. In this case, the measured brightness temperature \(T(\hat{n},v)\) in some direction \(\hat{n}\), Doppler-shifted by a velocity difference \(v\), is related to the volume emissivity \(\epsilon(\vec{x})\) by a linear response map \(R\) via \[T(\hat{n},v)=R(\epsilon(\vec{x}))=\int_{0}^{\infty}\mbox{d}r\,\epsilon(\vec{x})\,\delta(v-v_{\rm LSR}(\vec{x}))\,, \tag{5}\] where \(v_{\rm LSR}(\vec{x})\) is the relative line-of-sight velocity at the position \(\vec{x}\) in the local standard of rest as dictated by our velocity model, and \(r=|\vec{x}|\) is the distance from Earth. We approximate the Dirac delta by a Gaussian with a width of \(\sigma_{\rm HI}=10\,\frac{\rm km}{\rm s}\) [10] and \(\sigma_{\rm H_{2}}=5\,\frac{\rm km}{\rm s}\) [11] in order to account for velocity dispersion inside gas clouds. For the velocity model, we use a fixed component based on a smoothed particle hydrodynamics simulation by [12], extended beyond 8 kpc using a flat rotation curve. On top of this velocity field, we add another component computed as the gradient of a scalar velocity potential: \[v_{\rm LSR}(\vec{x})=\vec{v}_{0}(\vec{x})+\nabla S(\vec{x}). \tag{6}\] This scalar field is modelled as a correlated Gaussian random field and reconstructed at the same time as the gas emissivities. This opens the possibility for the model to adjust the velocity field during the reconstruction. Additionally, we can hope to learn something about the true velocity field in areas where the data is very constraining. However, there is no direct velocity information in the data if one does not demand that the resulting gas densities follow a certain correlation structure. Even then, this introduces many ambiguities, as it greatly expands the possibilities of where gas clouds can be mapped. In the future, we will have to add additional data to either constrain the gas densities tighter (e.g. using correlations with dust [13]), thereby learning about the velocities, or constrain the velocities tighter (e.g. using parallax information of masers or young stars [14]), thereby learning about the gas densities. 
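A minimal NumPy sketch of the prior profiles (eqs. 2-4) and of the optically thin response (eq. 5) on a single line of sight might look as follows; the constant scale height, the stand-in for the correlated field, and all names are simplifying assumptions of ours:

```python
import numpy as np

def disk_profile(x, y, z, z_h=0.15, R_cut=7.0, R_scale=3.15):
    # Eqs. (2)-(3) with the HI values from the text (lengths in kpc);
    # for simplicity, the scale height z_h is kept constant here.
    r_gal = np.hypot(x, y)
    P_z = np.exp(-np.abs(z) / z_h)
    P_rad = np.where(r_gal > R_cut, np.exp((R_cut - r_gal) / R_scale), 1.0)
    return P_z * P_rad

def emissivity(g, profile, A=1.0):
    # Eq. (4): profile-modulated lognormal field from a correlated
    # Gaussian random field g.
    return A * profile * np.exp(g)

def brightness(eps_los, v_los, v_channels, dr, sigma_v=10.0):
    # Eq. (5) on one line of sight in the optically thin limit; the Dirac
    # delta is broadened to a Gaussian of width sigma_v (km/s).
    kernel = np.exp(-0.5 * ((v_channels[:, None] - v_los[None, :]) / sigma_v) ** 2)
    kernel /= np.sqrt(2.0 * np.pi) * sigma_v
    return kernel @ (eps_los * dr)           # spectrum T(v)

# Toy usage: 500 log-radial cells along one line of sight in the disk.
r = np.geomspace(0.05, 28.0, 500)
dr = np.gradient(r)
g = np.zeros_like(r)                          # stand-in for the correlated field
eps = emissivity(g, disk_profile(r, 0.0, 0.0))
T = brightness(eps, v_los=-10.0 * np.ones_like(r),
               v_channels=np.linspace(-150.0, 150.0, 61), dr=dr)
```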
### Noise and Likelihood

Taking into account additive noise in the observations, we alter our model for the relation between brightness temperature and volume emissivity to \[T(\hat{n},v)=R(\epsilon(\vec{x}))+n\,, \tag{7}\] where we assume the noise \(n\) to be normally distributed and uncorrelated (white) with some diagonal covariance \(N\). The likelihood can then be written as \[p(T|\epsilon)=\int\,\mathrm{d}n\,p(T|\epsilon,n)p(n)=\int\,\mathrm{d}n\,\delta(T-R(\epsilon)-n)\mathcal{G}(n,N)=\mathcal{G}(T-R(\epsilon),N)\,. \tag{8}\]
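For reference, Eq. (8) reduces to the following negative log-likelihood, up to an additive constant (a sketch; the flat array layout of the data is an assumption):

```python
import numpy as np

def neg_log_likelihood(T_obs, T_model, noise_var):
    """Negative logarithm of the Gaussian likelihood in Eq. (8);
    noise_var holds the diagonal of the noise covariance N."""
    r = T_obs - T_model  # residual T - R(epsilon)
    return 0.5 * np.sum(r ** 2 / noise_var + np.log(noise_var))
```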
## 3 Results

We run our reconstruction with a resolution of \(\mathrm{NSIDE}=32\) in angular direction and 500 radial pixels between \(r_{\mathrm{min}}=50\,\mathrm{pc}\) and \(r_{\mathrm{max}}=28\,\mathrm{kpc}\). The sample-average surface mass densities resulting from the 3D maps can be seen in figure 3 for HI and H\({}_{2}\). This figure also shows the (partly reconstructed) sample-average line-of-sight velocity in a zero-latitude slice. The reconstructed gas densities show disk-like structures of gas clusters with imprints of galactic arms that are particularly visible in the HI-gas reconstruction. Both gas reconstructions suffer from a set of problems that we discuss in the following.

Figure 3: Results of the inference with zoom-in on the local neighbourhood. Left panel: HI surface mass density. Middle panel: H\({}_{2}\) surface mass density. Right panel: Line-of-sight velocity at zero latitude.

### Inferred HI gas density

The region outside the solar circle is well populated with HI gas, whereas inside the solar circle the distance ambiguity seems to be resolved very one-sidedly towards the nearer solution. This could be due to the log-radial grid giving the algorithm the opportunity to place gas nearby with a much higher fidelity than far away; nearby gas can then also reproduce the data more accurately, leading to a much higher likelihood. This could be tested and perhaps solved by, e.g., modifying the grid to have a uniform resolution inside the solar circle or by increasing the total resolution until saturation. The nearby gas shows a circular structure at the same radius as the reconstructed velocity as well as a tilted line-like structure in negative \(x\)-direction.

### Inferred H\({}_{2}\) gas density

The quality of the H\({}_{2}\) reconstruction appears to be worse than that of HI, mainly because the gas is very concentrated at the plane \(z=0\) and the number of grid points that are simultaneously far away and very close to the galactic plane becomes very small. One can clearly see circular structures in the gas projection stemming from the (too) low angular resolution. Clearly visible is a bar-like structure in the galactic centre as well as two "wall"-like structures in-between us and the galactic centre, also seen in previous reconstructions on a regular grid [3]. The excellent local resolution lets us see fine structures in the nearby gas, showing a similar structure as the HI gas in negative \(x\)-direction and small nearby clouds of gas towards the galactic centre.

### Inferred velocity field

The reconstructed curl-free modification of the velocity prior is in general very small in amplitude and negligible for distances larger than 1 kpc. For distances smaller than that, there is an almost circular, positive (amplitude up to \(10\frac{\mathrm{km}}{s}\)) velocity correction being reconstructed. The position and amplitude coincide nicely with estimates for the expansion velocity of the local bubble [15]. It is not clear how much information about the velocity field itself is contained in the data, but the combination of two data-sets having to respect the local correlation structure at the same time appears to provide at least some information.

## 4 Conclusion

We present new preliminary 3D maps of galactic HI and H\({}_{2}\) inferred in conjunction using the same velocity field. We also present a partly reconstructed 3D line-of-sight velocity map featuring a circular structure with outwards-pointing velocities in the local neighbourhood. The ingredients for our inference were the HI4PI survey from [1] measuring 21cm emission, the CO survey compilation by [2] measuring rotational CO transitions, and a VI algorithm capable of inferring millions of parameters [7]. We have assumed the optically thin limit. In the future we plan to improve upon these shortcomings by including additional data and thereby further constraining either the velocity field or the gas distributions, by lifting our assumption on the optical thinness of the gas, and by improving the angular resolution of our reconstructions.
2309.03509
BroadCAM: Outcome-agnostic Class Activation Mapping for Small-scale Weakly Supervised Applications
Class activation mapping (CAM), a visualization technique for interpreting deep learning models, is now commonly used for weakly supervised semantic segmentation (WSSS) and object localization (WSOL). It is the weighted aggregation of the feature maps by activating the high class-relevance ones. Current CAM methods achieve it relying on the training outcomes, such as predicted scores (forward information), gradients (backward information), etc. However, when with small-scale data, unstable training may lead to less effective model outcomes and generate unreliable weights, finally resulting in incorrect activation and noisy CAM seeds. In this paper, we propose an outcome-agnostic CAM approach, called BroadCAM, for small-scale weakly supervised applications. Since broad learning system (BLS) is independent to the model learning, BroadCAM can avoid the weights being affected by the unreliable model outcomes when with small-scale data. By evaluating BroadCAM on VOC2012 (natural images) and BCSS-WSSS (medical images) for WSSS and OpenImages30k for WSOL, BroadCAM demonstrates superior performance than existing CAM methods with small-scale data (less than 5%) in different CNN architectures. It also achieves SOTA performance with large-scale training data. Extensive qualitative comparisons are conducted to demonstrate how BroadCAM activates the high class-relevance feature maps and generates reliable CAMs when with small-scale training data.
Jiatai Lin, Guoqiang Han, Xuemiao Xu, Changhong Liang, Tien-Tsin Wong, C. L. Philip Chen, Zaiyi Liu, Chu Han
2023-09-07T06:45:43Z
http://arxiv.org/abs/2309.03509v1
# BroadCAM: Outcome-agnostic Class Activation Mapping for Small-scale Weakly Supervised Applications

###### Abstract

Class activation mapping (CAM), a visualization technique for interpreting deep learning models, is now commonly used for weakly supervised semantic segmentation (WSSS) and object localization (WSOL). It is the weighted aggregation of the feature maps, activating the high class-relevance ones. Current CAM methods achieve this relying on the training outcomes, such as predicted scores (forward information), gradients (backward information), etc. However, with small-scale data, unstable training may lead to less effective model outcomes and generate unreliable weights, finally resulting in incorrect activation and noisy CAM seeds. In this paper, we propose an outcome-agnostic CAM approach, called BroadCAM, for small-scale weakly supervised applications. Since the broad learning system (BLS) is independent of the model learning, BroadCAM can avoid the weights being affected by the unreliable model outcomes when with small-scale data. By evaluating BroadCAM on VOC2012 (natural images) and BCSS-WSSS (medical images) for WSSS and OpenImages30k for WSOL, BroadCAM demonstrates superior performance to existing CAM methods with small-scale data (less than 5%) in different CNN architectures. It also achieves SOTA performance with large-scale training data. Extensive qualitative comparisons are conducted to demonstrate how BroadCAM activates the high class-relevance feature maps and generates reliable CAMs when with small-scale training data.

Class Activation Mapping, Broad Learning System, Weakly Supervised Semantic Segmentation, Weakly Supervised Object Localization.

## 1 Introduction

Class activation mapping (CAM) [1] has attracted great attention and seen significant advancements in the past few years. It was first proposed to visualize the regions on which classification models focus, for interpretability. Lately, it has been widely adopted for weakly supervised applications to reduce annotation efforts by using only image-level labels to achieve dense pixel-level predictions, including weakly supervised semantic segmentation (WSSS) [2] and weakly supervised object localization (WSOL) [3]. Most of the existing WSSS and WSOL approaches rely on the training outcomes to construct the correlation between the feature maps and their correlated categories, such as gradients for Grad-CAM [4] and prediction scores for Score-CAM [5], and generate weights for the feature map aggregation. Thanks to the large-scale open-source data in the community (e.g., ImageNet [6, 7] and COCO [8]), current CAM-based weakly supervised applications have achieved outstanding performance, as shown in Fig. 1 (a).

Fig. 1: Comparison of CAM and our BroadCAM on large-scale data (a) and small-scale data (b). Existing CAM approaches rely on the training outcomes, and achieve good results on large-scale data but noisy results on small-scale data. The proposed BroadCAM is capable of generating stable class activation maps for the models trained on both large- and small-scale datasets.

However, when with small-scale data, the model training may become unstable, which leads to unreliable training outcomes and finally generates noisy weights. That is the reason why most of the current CAM approaches that depend on the training outcomes fail to generate reliable CAM seeds for weakly supervised applications, as shown in Fig. 1 (b). This situation should not be underestimated.
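For reference, the weighted-aggregation view of CAM used throughout this paper can be made explicit with the original CAM formulation [1], where the weights are the classifier's final fully connected weights after global average pooling; a minimal sketch (tensor shapes and variable names are illustrative assumptions, not the interface of any particular library):

```python
import torch
import torch.nn.functional as F

def classic_cam(feature_maps: torch.Tensor, fc_weight: torch.Tensor, class_idx: int):
    """feature_maps: (C, H, W) activations of the last convolutional layer;
    fc_weight: (num_classes, C) weights of the final linear layer after GAP.
    Returns the class activation map for class_idx, normalised to [0, 1]."""
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], feature_maps)
    cam = F.relu(cam)                   # keep only positive evidence
    return cam / cam.max().clamp(min=1e-8)
```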
In the deep learning era, a tailored dataset is indispensable for each specific task or scenario, yet most of the large-scale datasets were collected for general natural scenes. Collecting massive well-labeled data from scratch is extremely difficult and labor-intensive. Therefore, a reliable approach for small-scale weakly supervised applications is crucial for reducing the cost and effort of dense annotations and accelerating the annotation process. Since unreliable training outcomes are the reason why existing CAM approaches fail to generate appropriate CAM seeds with small-scale data, the most intuitive idea is to make the CAM generation process independent of the training outcomes. The key problem to be solved is then to reconstruct the correlation between the feature maps and their correlated categories in an alternative way while ensuring that the high and low class-relevance feature maps are activated and deactivated, respectively. To this end, we propose an outcome-agnostic CAM approach, called BroadCAM, for small-scale weakly supervised applications. The broad learning system (BLS) [9] is introduced as an independent classifier to re-construct the correlation. Thanks to its outcome-agnostic nature, BLS can avoid the weights being affected by the unreliable model outcomes while successfully activating the high class-relevance feature maps and deactivating the low class-relevance ones. Thanks to the robustness of BLS in handling small-scale data [10], the training of BLS is stable and the generated weights are more reliable when with insufficient training samples. To further bridge the gap between image-level labels and dense pixel-level predictions, we introduce more information by aggregating multi-layer feature maps. To the best of our knowledge, ours is the first CAM approach to handle small-scale weakly supervised applications. We conduct extensive experiments to evaluate the quality of CAM seeds for WSSS and WSOL on three datasets, including two natural image ones and a medical image one. Quantitative results demonstrate that BroadCAM greatly outperforms the most representative CAM methods across different model architectures and training strategies on small-scale data (less than 5%). In the entire data gamut experiments, the CAM seeds generated by BroadCAM also consistently achieve state-of-the-art (SOTA) performance for both WSSS and WSOL on all the datasets. Qualitative comparisons with SOTA CAM approaches show that BroadCAM is highly related to the corresponding categories, with more complete activation and less noise when with small-scale training data. Meanwhile, BroadCAM also shows superior activation results when with large-scale data. Furthermore, by visualizing the relationship between the weights and feature maps, we observe that BroadCAM activates more high class-relevance feature maps and fewer low class-relevance ones than conventional CAM approaches on small-scale data. To summarize, BroadCAM is an effective and flexible CAM approach for weakly supervised applications that is less susceptible to dataset size. The contributions of this paper are summarized as follows.

* This paper is the first study that focuses on small-scale weakly supervised applications. We provide a feasible solution by making the CAM generation process independent of training outcomes when they are not reliable.
* We introduce a novel outcome-agnostic CAM named BroadCAM for small-scale WSSS and WSOL. BLS guarantees the reliability of the weights on small-scale data.
Multi-layer feature map aggregation bridges the gap between weak supervision and dense prediction.
* Data gamut experiments on both WSSS and WSOL demonstrate the superiority and robustness of BroadCAM compared with the most representative CAM approaches. BroadCAM achieves SOTA quantitative performance on two natural image datasets and one medical image dataset.
* Qualitative comparisons demonstrate the robustness of BroadCAM on all datasets with small-scale data. Visualization of the relationship between CAM weights and feature maps shows the reliability of the CAM weights generated by BroadCAM.

## 2 Related Works

### _CAM-based Weakly Supervised Applications_

Weakly supervised applications, including WSSS and WSOL, leverage image-level labels to achieve dense pixel-level predictions [2, 3]. The major challenge is the huge information gap between weak supervision and dense prediction. The class activation mapping (CAM) technique [1] provides a feasible solution to overcome this challenge, since the feature maps of a classification model are able to reveal the positions and semantically related areas of the target classes. Therefore, CAM-based approaches have become the mainstream for weakly supervised applications. Current WSSS and WSOL approaches apply various training strategies or introduce more useful information to push the classification task toward the segmentation/localization tasks. To avoid the classification model focusing only on the most distinguishable regions, the dropout strategy [11, 12] is introduced to deactivate the most discriminative areas, forcing the model to learn from non-predominant regions and generate more complete CAM seeds. To solve the incorrect activations at different scales, SEAM [13] is proposed to reconstruct the consistency over different rescaling transformations in a self-supervised manner. To encourage finer local details, PuzzleCAM [14] and L2G [15] are proposed to train the classification model jointly on the whole image and the cropped/tessellated images. To introduce additional object information, saliency-guided methods have been proposed to exploit saliency maps as background cues [16, 17], pseudo-pixel supervision [18] and class-specific attention cues [19]. Moreover, existing studies also introduce information from other aspects to bridge the information gap, such as contrastive self-attention [20], data augmentation [21] and domain adaptation [22]. By reviewing CAM-based weakly supervised techniques, we find that current approaches are basically constructed on top of deep models trained on large-scale training data. However, pursuing sufficient high-quality labeled data is labor-intensive, even for image-level labels. Although foundation models are the trend of future artificial intelligence (AI) [23], the paradigm of current AI models remains one dataset for one specific task. Small-scale data will still be an inevitable problem for a long time in the deep learning era. Gao _et al._ [24] present a pioneering study on large-scale unsupervised semantic segmentation (LUSS) with a released benchmark. However, even though they have achieved SOTA performance on LUSS, the precision of the results is still insufficient in practice due to the lack of supervision. In this paper, we first raise the importance of small-scale weakly supervised applications and propose a novel BroadCAM for them.

### _Class Activation Mapping (CAM)_

Besides the training strategy, CAM techniques are also crucial for WSSS and WSOL.
Essentially, CAM seed generation is a weighted aggregation of the feature maps. The quality of CAM seeds strongly relies on the robustness of the weights, which determine whether higher class-relevance feature maps (shown in Fig. 2) can contribute more activation. Current CAM methods generate weights by constructing the correlation between the feature maps and their corresponding categories based on the training outcomes, and they can be categorized into two types: gradient-based and gradient-free methods.

#### 2.2.1 Gradient-based CAM

GradCAM [4] is the first CAM approach that leverages gradient information to improve the flexibility of CAM, and it has been widely applied to WSSS and WSOL. Later, various gradient-based methods have been proposed to improve the granularity using smoothing techniques, such as GradCAM++ [25], smooth GradCAM++ [26], etc. To make the CAM seeds more complete for weakly supervised applications, LayerCAM [27] is proposed to introduce additional coarse-to-fine information by the weighted aggregation of multi-layer feature maps. BagCAM [28] identifies globally discriminative areas by deriving a set of regional localizers from the well-trained classifier, rather than solely activating regional object cues.

#### 2.2.2 Gradient-free CAM

To avoid the saturation and false-confidence problems of gradients, Score-CAM [5] utilizes forward information (predicted scores) to replace backward information (gradients) to achieve less noisy CAM seeds. Lately, several score-based CAM methods have been introduced to advance Score-CAM to generate more complete visual explanations of deep models by the smooth operation (SS-CAM [29]) and the integration operation (IS-CAM [30]). To produce high-quality and class-discriminative localization maps, Ablation-CAM [31] employs ablation analysis to measure the significance of individual feature map units to obtain the weights for CAM generation. Relevance-CAM [32] overcomes the shattered gradient problem at shallow layers by utilizing a relevance propagation process. Although current CAM techniques bridge the gap between weak supervision and dense predictions, they are still based on an underlying assumption of the presence of large-scale data. When with small-scale data, the performance of current training-outcome-based CAM methods may suffer a dramatic decrease, since unstable training will lead to unreliable training outcomes. To overcome such difficulty, we propose an outcome-agnostic CAM approach for small-scale data by breaking the dependency on training outcomes when generating CAM seeds.

### _Broad Learning System (BLS)_

The broad learning system (BLS) [33, 9, 34] is a flat and lightweight classification network without the need for a deep architecture. It quickly constructs the correlation between the images/features and labels by feature mapping and enhanced mapping to avoid the retraining process. In the last few years, various modified BLS have been developed to improve the broad learning architecture and the feature mapping manner, such as Fuzzy-BLS [35], TDFW-BLS [36], stacked-BLS [37], Recurrent-BLS [38], etc. BLS is also widely applied in various applications such as hyperspectral image classification [39] and micro-robotic control [40]. In our previous studies [10, 41], we also explored the capability of BLS on small-scale classification. Our proposed pyramidal deep-broad learning (PDBL) demonstrates more stable and superior performance across different CNN architectures compared with the conventional deep learning manner on small-scale data.
Even if the model training is unstable with small-scale data, BLS can still successfully construct the correlation between the features and the labels. Therefore, we further extend BLS to the problem of small-scale CAM generation and finally empower small-scale WSSS and WSOL.

## 3 BroadCAM

In this paper, we propose an outcome-agnostic CAM approach for small-scale WSSS and WSOL, called BroadCAM. Since CAM seed generation is essentially a weighted aggregation of the feature maps, generating high-quality CAM seeds depends on whether the CAM weights can activate the higher class-relevance feature maps and deactivate the lower ones.

Fig. 2: Examples of high and low class-relevance feature maps. For each feature map, we calculate the IoU between the activated region (threshold: 0.1) and the ground truth mask to measure the relevance between each feature map and the corresponding category.

For small-scale weakly supervised applications, outcome-based CAM approaches generally fail to generate reliable weights due to unstable training outcomes. Thus, we design a novel outcome-agnostic CAM approach by making the weight generation process independent of the training outcomes. Fig. 3 demonstrates the workflow of the proposed BroadCAM. The details of BroadCAM are as follows.

Fig. 3: The model framework of our proposed BroadCAM, which includes three steps. (a) Deep learning feature extraction: We first extract the features of multiple layers from a deep learning model. (b) Broad learning weights calculation: A broad learning system with only one-round learning is applied to generate the BroadCAM weights \(W_{BroadCAM}\). (c) Multi-layer CAM seed generation: The CAM seed is generated by a weighted aggregation of the multi-layer feature maps. We define a high class-relevance feature map as one with a larger overlap with the ground truth label.

### _Deep Learning Feature Extraction_

Given a training set \(\mathcal{D}\sim\{X,Y\}\), where \(X\) and \(Y\) are the training samples and the corresponding image-level labels, we first train a classification model \(f_{DL}\) to learn domain-specific knowledge from the dataset, just like the other CAM approaches. Then we use the classification model as the deep feature extractor. To further bridge the gap and introduce more information, we extract deep features from multiple layers: \[\mathcal{F}_{1},\mathcal{F}_{2},...,\mathcal{F}_{j}=f_{DL}(x,\theta_{DL}) \tag{1}\] where \(j\) is the index of the layers. \(x\in X\), \(\theta_{DL}\) and \(\mathcal{F}_{j}\) represent a sample \(x\) in the training set \(X\), the parameters of the deep learning model and the feature maps of the \(j^{th}\) layer. Since the deep learning architecture in BroadCAM serves only for feature extraction, any training strategy that enhances the feature maps can be applied within this framework. We will discuss this in the experiments.

### _Broad Learning Weights Calculation_

Next, we construct the correlation between the deep features and the corresponding categories and generate the weights for each feature map. Inspired by our previous studies [10, 41], we adopt the broad learning system (BLS), a lightweight and flat architecture that serves as a robust classifier for small-scale data, to generate the CAM weights for CAM seed generation. In the BLS, we first map the deep feature maps to broad features to fit the broad learning architecture by initial feature mapping. Then, we establish the enhancement nodes to expand the features broadly by enhanced mapping. We solve the optimization problem of BLS with the extended broad learning feature nodes by the ridge regression algorithm. Finally, we transform the BLS parameters to obtain the weights for CAM seed generation.

**(1) Initial Feature Mapping**: For the feature maps \(\mathcal{F}_{j}\) extracted from the \(j^{th}\) layer, we first apply global average pooling (GAP) \(f_{GAP}\) to squeeze the deep learning features by: \[z_{j}=f_{GAP}(\mathcal{F}_{j}) \tag{2}\] where \(z_{j}\) represents the squeezed features of the \(j^{th}\) layer. Then we concatenate the multi-layer features of each sample
into a broad feature vector \(\mathbf{z}\) as the feature node to fit the BLS architecture: \[\mathbf{z}=[z_{1},z_{2},...,z_{j}]^{T}. \tag{3}\] We form a broad feature matrix \(Z\) of all the training samples by: \[Z=[\mathbf{z}^{1},\mathbf{z}^{2},...,\mathbf{z}^{i}]^{T} \tag{4}\] where \(i\) is the index of samples.

**(2) Enhance Mapping**: Next, we perform an enhanced mapping on the feature nodes to expand the broad structure widely and improve the BLS optimization: \[H=f_{H}(Z\times\theta_{H}+\beta_{H}). \tag{5}\] where \(f_{H}\) is a linear activation function. \(\theta_{H}\) and \(\beta_{H}\) represent the weights and bias of the enhanced mapping. \(\theta_{H}\) is used to group the feature nodes with similar importance. Different from the original BLS design with randomly initialized weights \(\theta_{H}\), \(\theta_{H}\) is now initialized by the class relevance of the features \(Z\). To acquire the class relevance, we first solve BLS using the initial feature nodes to calculate the parameters \(W_{init}\) by: \[W_{init}=(\lambda I+ZZ^{T})^{-1}Z^{T}Y \tag{6}\] where \(I\) and \(\lambda\) denote the unit matrix and the hyperparameter of the L2-norm constraint for \(W_{init}\). Then we use \(W_{init}\) to initialize \(\theta_{H}\).

**(3) BLS Learning**: Then we utilize all the broad features to train the BLS and construct the relationship between the features and the corresponding image-level labels \(Y\). We first concatenate the feature nodes and enhancement nodes together into an expanded broad feature matrix, denoted as \(A=[Z,H]\). Next, we employ the ridge regression algorithm to calculate the parameters of BLS as follows: \[W_{BLS}=(\lambda I+AA^{T})^{-1}A^{T}Y \tag{7}\] where \(W_{BLS}\) is the BLS parameters, which can be split into two parts \(W_{Z}\) and \(W_{H}\) for the feature nodes and enhancement nodes, respectively.

**(4) CAM Weights Calculation**: To use the BLS parameters to generate the CAM weights for each feature map, we transform the formulation of BLS as follows: \[\begin{split} Y&=A\cdot W_{BLS}=[Z,H]\times[W_{Z},W_{H}]^{T}\\ &=Z\times W_{Z}+H\times W_{H}\\ &=Z\times W_{Z}+(Z\times\theta_{H}+\beta_{H})\times W_{H}\\ &=Z\times(W_{Z}+\theta_{H}\times W_{H})+\beta_{H}\times W_{H}\\ &=:Z\times W_{BroadCAM}+\sigma\end{split} \tag{8}\] where \(\sigma=\beta_{H}\times W_{H}\) is the bias of the BLS and \(W_{BroadCAM}=W_{Z}+\theta_{H}\times W_{H}\) represents the weights that directly connect the feature nodes to the predicted classification results, quantifying the relevance between deep feature maps and categories. Since each value of the broad feature vector \(Z\) is calculated by GAP from the corresponding deep feature map of \(F\), the CAM weights of \(Z\) are equivalent to the CAM weights of \(F\). These are the channel-wise weights utilized for aggregating the feature maps to generate class activation maps.
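The broad-learning solve above amounts to two ridge regressions followed by folding the enhancement branch back onto the feature channels. A minimal numpy sketch of Eqs. (5)-(8) follows; it is written in the standard ridge-regression form, and the identity activation for \(f_{H}\), the zero bias \(\beta_{H}\) and the reuse of \(W_{init}\) as \(\theta_{H}\) are simplifying assumptions of this sketch, not the paper's exact configuration:

```python
import numpy as np

def broadcam_weights(Z, Y, lam=1.0):
    """Z: (num_samples, num_channels) GAP-pooled multi-layer features;
    Y: (num_samples, num_classes) one-hot image-level labels;
    lam: ridge penalty. Returns W_BroadCAM of shape (num_channels, num_classes)."""
    n, d = Z.shape
    # Eq. (6): initial solve on the feature nodes, used to initialise theta_H
    W_init = np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Y)
    theta_H = W_init                        # class-relevance initialisation
    H = Z @ theta_H                         # Eq. (5) with identity f_H, zero bias
    A = np.hstack([Z, H])                   # expanded broad feature matrix
    # Eq. (7): ridge regression on the expanded features
    W_BLS = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    W_Z, W_H = W_BLS[:d], W_BLS[d:]
    # Eq. (8): fold the enhancement branch back onto the feature channels
    return W_Z + theta_H @ W_H
```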
### _Multi-layer CAM Seeds Generation_

Finally, we can use \(W_{BroadCAM}\) to generate CAM seeds in the weighted-aggregation manner by: \[M_{BroadCAM}=ReLU([\mathcal{F}_{1},\mathcal{F}_{2},...,\mathcal{F}_{j}]\times W_{BroadCAM}) \tag{9}\] where \(ReLU\) is the non-linear activation function. The proposed BroadCAM can aggregate the feature maps from multiple layers.

## 4 Datasets

In this paper, we evaluate the proposed BroadCAM on three datasets, including the VOC2012 dataset (WSSS), BCSS-WSSS (WSSS) and OpenImages30k (WSOL), which are detailed in this section. The distributions of the three datasets are shown in Table I.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Task & Dataset & Num. of categories & Num. of train & val & test \\ \hline WSSS & VOC2012 [42] & 20 & 10,582 & 1,449 & 1,456 \\ & BCSS-WSSS [12] & 4 & 23,422 & 3,418 & 4,986 \\ \hline WSOL & OpenImages30k [43, 44] & 100 & 29,819 & 2,500 & 5,000 \\ \hline \end{tabular} \end{table} TABLE I: Data distribution of the VOC2012, BCSS-WSSS and OpenImages30k datasets.

### _VOC2012 Dataset_

The VOC2012 dataset1 is the standard dataset provided by the PASCAL Visual Object Classes Challenge [42], which has been used for weakly supervised semantic segmentation (WSSS) tasks in recent years. The VOC2012 dataset comprises \(20\) foreground categories, encompassing Car, Bus, Bicycle, etc. It can be split into a training set (1,464 images), a validation set (1,449 images) and a test set (1,456 images). Following the previous WSSS study [14], the training set has been extended to an augmented training set which includes \(10,582\) training samples. Image-level labels are provided for model training and pixel-level annotations are for evaluation. In our experiments, we use the augmented training set with different proportions to train the models and evaluate the CAM seeds on the original training set and validation set.

Footnote 1: [http://host.robots.ox.ac.uk/pascal/VOC/voc2012/](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/)

### _BCSS-WSSS Dataset_

The Breast Cancer Semantic Segmentation (BCSS) dataset [45]2 is a well-labeled dataset for the histopathological tissue semantic segmentation task of breast cancer. It consists of \(151\) representative regions of interest (ROIs) which were selected from \(151\) whole slide images (WSIs) and labeled by pathologists, pathology residents and medical students. In this dataset, pixel-level annotations were provided for fully supervised model training, including \(4\) predominant classes (e.g., tumor, stroma, lymphocyte-rich regions and necrosis), \(6\) non-predominant classes (e.g., artifacts, blood, etc.) and \(8\) challenging classes (e.g., plasma cells, lymph vessels, etc.). Based on the BCSS dataset, our previous study [12] recreates a new dataset (the BCSS-WSSS dataset) for weakly supervised semantic segmentation, which crops patches from the original ROIs and obtains the corresponding image-level labels from the pixel-level annotations. In the BCSS-WSSS dataset, 4 foreground categories were defined based on the predominant classes of the BCSS dataset: Tumor (TUM), Stroma (STR), Lymphocytic infiltrate (LYM) and Necrosis (NEC).

### _OpenImages30k Dataset_

The OpenImages30k dataset [43, 44]3 is curated for the weakly supervised object localization (WSOL) task based on the OpenImagesV5 dataset [46]4. It consists of \(100\) categories with a balanced number of samples in each category.
Samples of OpenImages30k were selected randomly from the OpenImagesV5 dataset, and the dataset can be split into a training set (29,819 images), a validation set (2,500 images) and a test set (5,000 images). In this dataset, image-level labels are provided in the training set for classification model training with different proportions, and pixel-level annotations were created for evaluation.

Footnote 3: [https://github.com/clovaai/wsolevaluation](https://github.com/clovaai/wsolevaluation)

Footnote 4: [https://storage.googleapis.com/openimages/web/download_v5.html](https://storage.googleapis.com/openimages/web/download_v5.html)

## 5 Experiments

In this section, we first present the evaluation metrics used in our experiments in Section 5.1. Then, we evaluate the quality of the CAM seeds generated by various CAM techniques on both WSSS and WSOL on three datasets in Section 5.2 and Section 5.3. Next, we comprehensively analyze the reliability of the CAM weights in Section 5.4. Finally, we conduct an ablation study on the multi-layer design of BroadCAM in Section 5.5.

### _Evaluation Metrics_

In this paper, we conduct the comparison of CAM methods on the VOC2012 dataset [42] (WSSS, natural images), BCSS-WSSS [12] (WSSS, medical images) and OpenImages30k [43, 44] (WSOL, natural images). We employ three metrics to evaluate the performance of CAM seeds:

* mIoU: We first employ the Mean Intersection over Union (mIoU) metric to evaluate the performance on the VOC2012 dataset for the WSSS task on natural images. We quantify the mIoU of the CAM seeds over a range of thresholds separately and record the peak value as the performance of the CAM method (a simplified sketch of this protocol follows at the end of this section).
* FwIoU: Following the setting of our previous study [12], we utilize the Frequency weighted Intersection over Union (FwIoU) metric to evaluate the performance of CAM methods on the BCSS-WSSS dataset.
* mPxAP: In the experiments on the OpenImages30k dataset (natural images), we utilize the mean pixel average precision (mPxAP) [43, 44] metric to quantify the performance of CAM methods for the WSOL task.

### _Comparisons of CAM Methods on WSSS_

First of all, we compare our BroadCAM with three representative CAM methods on WSSS, including the original CAM [1] and two gradient-based CAM approaches (GradCAM [4] and LayerCAM [27]).
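Before turning to the results, here is a rough illustration of the peak-mIoU protocol mentioned in the metric list above. This is a simplified sketch; the paper's exact handling of the background class and of multi-class pixel assignment may differ:

```python
import numpy as np

def peak_miou(cams, gt_masks, num_classes, thresholds=np.linspace(0.05, 0.95, 19)):
    """cams: dict mapping image ids to (num_classes, H, W) score maps in [0, 1];
    gt_masks: dict mapping ids to (H, W) integer label maps (0 = background).
    Binarise the CAM seeds at each threshold and report the peak mean IoU."""
    best = 0.0
    for t in thresholds:
        inter = np.zeros(num_classes)
        union = np.zeros(num_classes)
        for key, cam in cams.items():
            pred = cam >= t                              # (C, H, W) boolean
            for c in range(num_classes):
                gt = gt_masks[key] == (c + 1)
                inter[c] += np.logical_and(pred[c], gt).sum()
                union[c] += np.logical_or(pred[c], gt).sum()
        best = max(best, float(np.mean(inter / np.maximum(union, 1))))
    return best
```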
All the experiments are conducted on a natural image dataset, VOC2012 [42], and a medical image dataset, BCSS-WSSS [12].

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{\begin{tabular}{c} Training strategy \\ (Architecture) \\ \end{tabular} } & \multirow{2}{*}{Split} & \multirow{2}{*}{CAM method} & \multicolumn{5}{c|}{Performance of CAM seeds with different proportions of training samples} \\ \cline{6-13} & & & 1\% & 2\% & 5\% & 8\% & 10\% & 20\% & 50\% & 80\% & 100\% \\ \hline \multirow{10}{*}{\begin{tabular}{c} VOC2012 \\ (natural image) \\ \end{tabular} } & \multirow{4}{*}{train} & CAM & 34.13 & 37.18 & 46.51 & 47.44 & 49.75 & 53.55 & **58.15** & **58.89** & **59.55** \\ & & GradCAM & 33.89 & 36.71 & 46.05 & 47.05 & 49.40 & 53.10 & 57.82 & 58.85 & 59.53 \\ & & LayerCAM & 41.18 & 43.46 & 46.27 & 47.03 & 49.54 & 45.61 & 51.18 & 54.12 & 54.64 \\ & & BroadCAM & **46.45** & **47.02** & **48.57** & **49.55** & **51.51** & **54.22** & 55.90 & 58.39 & 58.52 \\ \cline{2-13} & & \multirow{4}{*}{val} & CAM & 33.85 & 36.83 & 46.28 & 40.90 & 49.41 & 53.58 & **57.33** & **58.99** & **59.00** \\ & & GradCAM & 33.73 & 36.30 & 45.89 & 46.77 & 49.04 & 52.72 & 57.00 & 58.96 & 58.90 \\ & & LayerCAM & 41.47 & 43.22 & 45.75 & 46.23 & 48.61 & 45.67 & 51.38 & 53.90 & 54.43 \\ & & BroadCAM & **46.55** & **46.78** & **48.54** & **49.18** & **51.23** & **53.78** & 54.27 & 57.33 & 57.41 \\ \cline{2-13} & & \multirow{4}{*}{train} & CAM & 30.11 & 38.59 & 45.25 & 51.55 & 51.17 & 54.26 & 56.56 & **58.59** & **58.55** \\ & & GradCAM & 29.97 & 37.92 & 44.58 & 50.62 & 50.04 & 53.93 & 56.02 & 58.53 & 58.47 \\ & & LayerCAM & 41.81 & 44.52 & 45.88 & 50.60 & 48.85 & 53.12 & 52.89 & 55.01 & 55.47 \\ & & BroadCAM & **49.37** & **49.00** & **49.78** & **54.38** & **52.56** & **57.03** & **56.85** & 58.69 & 57.89 \\ \cline{2-13} & & \multirow{4}{*}{val} & CAM & 29.55 & 38.96 & 44.85 & 51.53 & 51.56 & 54.27 & 56.11 & **58.39** & **58.45** \\ & & GradCAM & 29.75 & 38.57 & 44.36 & 50.77 & 50.56 & 53.81 & 55.56 & 58.00 & 58.03 \\ & & LayerCAM & 41.41 & 44.06 & 45.82 & 50.60 & 49.20 & 52.12 & 52.02 & 54.52 & 54.74 \\ & & BroadCAM & **49.87** & **48.31** & **49.63** & **54.06** & **52.17** & **56.20** & **56.67** & 57.96 & 57.49 \\ \hline \multirow{10}{*}{\begin{tabular}{c} BCSS-WSSS \\ (medical image) \\ \end{tabular} } & \multirow{4}{*}{ \begin{tabular}{c} PDA [12] \\ (ResNet38) \\ \end{tabular} } & \multirow{4}{*}{val} & CAM & 48.38 & 60.17 & 65.91 & 68.24 & 69.18 & 69.55 & 69.69 & 70.11 & 70.64 \\ & & GradCAM & 48.37 & 59.81 & 65.96 & 68.30 & **69.28** & 69.87 & 69.85 & 70.32 & 70.79 \\ \cline{1-1} & & LayerCAM & 52.93 & 64.46 & 65.92 & 66.86 & 67.35 & 68.00 & 69.51 & 69.04 & 68.78 \\ \cline{1-1} & & BroadCAM & **59.92** & **64.77** & **66.86** & **69.01** & 68.11 & **71.73** & **71.03** & **71.08** & **71.54** \\ \cline{1-1} \cline{2-13} & \multirow{4}{*}{test} & CAM & 55.91 & 67.53 & 71.80 & 73.71 & 73.44 & 73.04 & 73.18 & 73.93 & 74.14 \\ \cline{1-1} & & GradCAM & 58.84 & 67.43 & **72.09** & 73.98 & **73.69** & 73.38 & 73.37 & 74.13 & 74.26 \\ \cline{1-1} & & LayerCAM & 59.09 & 67.64 & 68.90 & 69.78 & 70.11 & 71.17 & 71.46 & 70.84 \\ \cline{1-1} & & BroadCAM & **66.63** & **70.50** & 71.79 & **74.09** & 72.41 & **74.59** & **74.14** & **74.47** & **74.93** \\ \hline \end{tabular} \end{table} TABLE II: Comparison of our proposed BroadCAM and existing CAM approaches on VOC2012 and BCSS-WSSS datasets with different proportions of training data.
mIoU and FwIoU are used for the evaluation of VOC2012 and BCSS-WSSS, respectively. **Bold** represents the best performance.

To comprehensively investigate the performance of CAM approaches from large-scale to small-scale data, we conduct a data gamut experiment with different proportions of the training data (1%, 2%, 5%, 8%, 10%, 20%, 50%, 80%, 100%). Besides the scale of the data, we also conduct comparisons on various CNN architectures and training strategies. For the experiments on the VOC2012 dataset, we employ the puzzle strategy proposed in PuzzleCAM [14] on ResNeSt101 and ResNeSt269 [47], respectively. For the experiment on the BCSS-WSSS dataset, we apply our previously proposed training strategy, called progressive dropout attention (PDA) [12], on ResNet38 [48]. Since we only evaluate the quality of the CAM seeds in this experiment, we do not apply any post-processing step, like AffinityNet [49] and IRNet [50], or second-round training with the pseudo masks. Note that, no matter how many training samples are used to train the classification model, the evaluations are performed on the entire training, validation and test sets.

Table II demonstrates the quantitative results of the different approaches to WSSS on the two datasets. We can observe that the CAM seeds generated by all the CAM methods with large-scale training data (more than 50%) exhibit exceptional performance on the VOC2012 dataset with both the ResNeSt101 and ResNeSt269 architectures.

Fig. 4: Comparison of CAM seeds generated by CAM methods (CAM, GradCAM, LayerCAM, BagCAM and our BroadCAM) with models trained on training sets of different proportions of the VOC2012 dataset and the OpenImages30k dataset. Horizontally: the \(1st\) column represents the original image and mask. The next 9 columns are the CAM seeds generated with models trained on training sets of different proportions (1%, 2%, 5%, 8%, 10%, 20%, 50%, 80%, 100%), respectively.

The visualization in Fig. 4 (a) demonstrates that CAM, GradCAM and BroadCAM generate very similar CAM seeds when with large-scale data. Although LayerCAM provides high activation on the object, its multi-layer design may also introduce more noisy activation. When the proportion of the training data decreases, the model training and the outcomes become less reliable. Therefore, the performance of outcome-based CAM approaches drops dramatically. Under the same experimental setting, we can observe that BroadCAM mitigates the performance degradation and consistently outperforms the outcome-based CAM approaches. For example, the performance of CAM seeds generated by the original CAM (ResNeSt269) on the validation set drops from \(58.45\) to \(29.95\), but BroadCAM can still achieve \(49.87\). Thanks to its outcome-agnostic nature, BroadCAM will not be affected by unreliable training outcomes and only focuses on the relevance between the feature maps and the corresponding categories, leading to less noisy CAM seeds. As shown in Fig. 4 (a), outcome-based approaches show either incomplete or noisy activation. Although the CAM seeds generated by BroadCAM also show weaker activation on small-scale data than on large-scale data, they are still more complete with less noise compared with the other three competitors. To further evaluate the CAM seeds, we conduct an experiment on a medical image dataset, BCSS-WSSS.
The training strategy applied in this experiment, progressive dropout attention (PDA), was proposed in our previous study [12] to prevent the classification model from focusing only on the most discriminative areas. As shown in the lower part of Table II, BroadCAM almost dominates the entire data gamut experiment. Although the performance of all four CAM approaches decreases when reducing the training samples, BroadCAM still greatly outperforms the other three competitors. Fig. 5 demonstrates an example and the corresponding CAM seeds generated by the original CAM and BroadCAM on both the 1% and 100% training sets. When the classification model is trained on the complete dataset, both CAM and BroadCAM can generate favorable CAM seeds. However, when the training set downscales to only 1%, conventional CAM fails to activate lymphocytes and generates noisy tumor CAM seeds. Thanks to its outcome-agnostic design, BroadCAM is able to generate high-quality CAM seeds with more complete and less noisy activation even if the training process is unstable.

### _Comparisons of CAM Methods on WSOL_

Next, we conduct experiments to evaluate the ability of different CAM approaches on WSOL, including CAM [1], GradCAM [4], BagCAM [28] and BroadCAM. In this experiment, we employ the training strategy DA-WSOL [22] to train the ResNet50 model on different proportions of the training set of the OpenImages30k dataset. The quantitative results shown in Table III demonstrate that the CAM seeds generated by BroadCAM are well-qualified for object localization. A trend similar to the WSSS results can also be observed for WSOL: the performance of BroadCAM decreases more smoothly than that of the existing CAM approaches as the training data is reduced. Fig. 4 (b) visualizes the CAM seeds generated by the different CAM approaches at different training data scales. When with enough training data, there is no visual difference among the CAM approaches. When reducing the training samples, existing outcome-based CAM approaches introduce false positive activation even when the object can easily be separated from the background, such as in the sparrow example. We further evaluate the robustness and flexibility of the various CAM approaches under different WSOL training strategies, including the original CAM [1] with no training strategy, ADL [11], Cutmix [21] and DA-WSOL [22]; this experiment is conducted on the 1% and 100% training sets. There is an interesting observation that not all the strategies proposed for WSOL gain an improvement compared with the original CAM baseline, which was also reported in a previous study [44].

Fig. 5: Visual comparison of the CAM seeds generated by CAM and our BroadCAM. An example image and its ground truth are shown on the left-hand side (Red: tumor epithelium, Blue: lymphocyte, Green: stroma). On the right side, we show the CAM seeds of the two classes TUM and LYM generated by CAM and BroadCAM with the 1% and 100% training sets.
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{\begin{tabular}{c} Training strategy \\ (Architecture) \\ \end{tabular} } & \multirow{2}{*}{Split} & \multirow{2}{*}{CAM method} & \multicolumn{6}{c}{Performance of CAM seeds with different proportions of training samples} \\ \cline{6-11} & & & & 1\% & 2\% & 5\% & 8\% & 10\% & 20\% & 50\% & 80\% & 100\% \\ \hline \multirow{8}{*}{ \begin{tabular}{c} OpenImages30k \\ (natural image) \\ \end{tabular} } & \multirow{4}{*}{DA-WSOL [22]} & \multirow{4}{*}{val} & CAM & 39.32 & 50.13 & 49.81 & 60.10 & 60.87 & 62.32 & 64.91 & 65.41 & 65.74 \\ & & & GradCAM & 43.38 & 53.67 & 51.14 & 61.92 & 62.73 & 63.89 & 66.42 & 66.98 & 67.64 \\ & & & BagCAM & 50.85 & 57.10 & 53.30 & 61.92 & 62.08 & 62.76 & 65.82 & 66.33 & 67.52 \\ & & & BroadCAM & **53.31** & **58.75** & **58.01** & **63.78** & **64.25** & **65.18** & **67.32** & **67.60** & **68.32** \\ \cline{2-11} & \multirow{4}{*}{(ResNet50)} & \multirow{4}{*}{test} & CAM & 38.47 & 49.41 & 49.57 & 59.54 & 60.35 & 61.50 & 64.07 & 64.68 & 65.05 \\ \cline{1-1} & & & GradCAM & 42.63 & 53.00 & 51.24 & 61.32 & 62.00 & 63.26 & 66.02 & 66.63 & 67.12 \\ \cline{1-1} & & & BagCAM & 49.54 & 56.29 & 52.64 & 60.84 & 61.38 & 62.01 & 65.45 & 65.78 & 66.89 \\ \cline{1-1} & & & BroadCAM & **52.71** & **58.68** & **57.48** & **63.32** & **63.86** & **65.01** & **67.16** & **67.51** & **68.10** \\ \hline \end{tabular} \end{table} TABLE III: Comparison of our proposed BroadCAM and existing CAM approaches on the OpenImages30k dataset with different proportions of training data. mPxAP is used for the evaluation.

Nevertheless, BroadCAM consistently outperforms the existing CAM approaches under all the training strategies, and the performance gap of BroadCAM between the 1% and 100% training sets is the smallest among all four CAM methods.

### _Reliability of CAM Weights_

Since the CAM seeds are generated by a weighted linear aggregation of the feature maps, whether the CAM weights are reliable is the key to CAM generation. In this part, we conduct two visualization experiments to deeply investigate the reliability of the CAM weights, especially for small-scale data. In the first experiment, we want to decode the relationship between the CAM weights and the feature maps. Fig. 6 (a) demonstrates two examples of CAM seeds generated by CAM and BroadCAM with a model trained on the 1% training set. We can observe that the traditional CAM fails to generate reliable CAM seeds, whereas our BroadCAM succeeds. To explain this, we first generate three CAM seeds using the selected feature maps with the top20, top200 and top2000 weights. In Fig. 6 (b), we can observe that the top20 and top200 results of CAM and BroadCAM are very similar, and both CAM seeds generated by the feature maps with the top200 weights actually provide precise activation on the objects. However, in the top2000 results, we can observe an obvious activation degradation in the CAM seeds generated by the original CAM. This is because unreliable training with small-scale data will generate noisy weights for the outcome-based CAM techniques, leading to incorrect deactivation of the high class-relevance feature maps. To further justify this conclusion, we plot the CAM weights in Fig. 6 (c) and show five high class-relevance feature maps and their corresponding weights in Fig. 6 (d). Note that the weights are in the order of the channel numbers of the feature maps.
We can easily observe that the original CAM deactivates the high class-relevance feature maps with negative weights, whereas BroadCAM activates them with positive weights. From this experiment, we form the hypothesis that the weights of BroadCAM have a higher correlation with the relevance (IoU) of the feature maps when with small-scale data, compared with the original CAM. To further investigate this hypothesis, we conduct another experiment to visualize the relationship between the distribution of the CAM weights and the class relevance of the feature maps. Fig. 7 demonstrates two examples with the generated CAM seeds on 1% and 100% training data.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \hline Dataset & Proportion & \begin{tabular}{c} CAM \\ methods \\ \end{tabular} & \begin{tabular}{c} CAM [1] \\ \end{tabular} & \begin{tabular}{c} ADL [11] \\ \end{tabular} & \begin{tabular}{c} Cutmix [21] \\ \end{tabular} & \begin{tabular}{c} DA-WSOL [22] \\ \end{tabular} \\ \hline \multirow{4}{*}{ \begin{tabular}{c} OpenImages30k \\ (WSOL) \\ \end{tabular} } & \multirow{4}{*}{1\%} & \multirow{4}{*}{100\%} & CAM & 38.05 & 37.09 & 32.71 & 31.64 & 37.35 & 36.27 & 39.32 & 38.47 \\ & & GradCAM & 41.27 & 40.51 & 35.20 & 33.84 & 40.66 & 39.85 & 43.38 & 42.63 \\ & & BagCAM & 47.47 & 46.19 & 38.39 & 37.04 & 46.74 & 45.40 & 50.85 & 49.54 \\ \cline{2-10} & & BroadCAM & **51.88** & **51.24** & **48.73** & **47.58** & **51.23** & **50.55** & **53.31** & **52.71** \\ \cline{2-10} & \multirow{4}{*}{100\%} & CAM & 58.50 & 57.85 & 56.78 & 53.67 & 57.31 & 56.27 & 65.74 & 65.05 \\ \cline{1-1} & & GradCAM & 59.91 & 59.12 & 57.83 & 57.36 & 58.65 & 57.87 & 67.64 & 67.12 \\ \cline{1-1} & & BagCAM & 60.01 & 59.33 & 57.94 & 57.48 & 59.04 & 58.06 & 67.52 & 66.89 \\ \cline{1-1} & & BroadCAM & **62.23** & **61.68** & **60.61** & **60.25** & **61.89** & **61.15** & **68.32** & **68.10** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Comparison of the performance (mPxAP) of CAM seeds generated by CAM methods on models trained using different WSOL methods with training sets of different proportions of the OpenImages30k dataset.

Fig. 6: Visual comparison of the reliability of CAM weights. Two examples are selected from the VOC2012 dataset and the experiment is conducted on the 1% training set. (a) The CAM seeds generated by all the feature maps. (b) The CAM seeds generated by the selected feature maps with the top20, top200, and top2000 weights. (c) The CAM weights in the order of the channel numbers of the feature maps, with five selected ones highlighted by red stars. (d) The selected feature maps and their corresponding channel number, IoU with the ground truth and the value of the weights. Positive weights are marked in red and negative ones in blue.

The line chart (in red) shows the IoU of the feature maps and the bar chart (in blue) demonstrates the distribution of the weights. The details of the experimental settings are as follows. In the line chart, we first rank the feature maps by the IoU of the target class. Then we split the feature maps into 16 groups, each of which contains an equal number of feature maps. The red line is the average normalized IoU of each group of feature maps. In the bar chart, each bar indicates the total number of positive or negative weights in one group of feature maps. In each group, positive weights are shown in deep blue and negative weights in light blue.
By putting the bar chart and line chart together, we can easily observe that the weights generated by BroadCAM show trends more similar to the class relevance of the feature maps than those of the original CAM. For the cases with the 100% training set in Fig. 7 (b), the number of positive weights is positively correlated with the class relevance for both BroadCAM and CAM, leading to favorable CAM seeds for both CAM techniques. However, when with only 1% training data in Fig. 7 (a), CAM fails to show a positive correlation, but that of BroadCAM still remains. The inaccurate activation makes CAM fail to generate clean and robust CAM seeds. This experiment supports our hypothesis that the weights generated by BroadCAM have a higher correlation with the relevance of the feature maps compared with the original CAM, especially for small-scale data, thanks to the outcome-agnostic nature.

### _Ablation Study on Multi-layer Design_

Since BroadCAM is also capable of aggregating features from multiple layers, we conduct an ablation study here to discuss the advantages and drawbacks of the multi-layer design for BroadCAM in large-scale and small-scale weakly supervised applications. In this experiment, we train ResNeSt101 on the 1% and 100% training sets of the VOC2012 dataset with different combinations of the layers in WSSS. L1 to L4 denote shallower to deeper layers. Quantitative results are shown in Table V. We first evaluate the performance of BroadCAM on a single layer. We can observe that using the feature maps extracted from the deepest layer shows the best performance, as expected, because deeper layers deliver more semantic information while shallow feature maps capture more textural information. When with a small-scale dataset, the feature maps are less stable and informative than with large-scale data, resulting in a significant decline in performance. Next, to evaluate the multi-layer performance, we gradually add the other three layers to the deepest layer L4. Experimental results show that associating the last two layers achieves the best performance on both the 1% and 100% training sets.

Fig. 7: Visualization of the relationship between the distribution of weights and the relevance of feature maps. Two examples are selected from the VOC2012 dataset and the experiment is conducted on both the 1% and 100% training sets. For each example, we show the original image, pixel-level ground truth, generated CAM seeds and mixed charts combining line charts and bar charts. In each mixed chart, we first group the feature maps according to the rank of the IoU. The line chart (in red) shows the IoU of each group of feature maps and the bar chart (in blue) demonstrates the distribution of the weights in each group. The number of positive/negative weights is in deep/light blue.

When with large-scale data, the performance of BroadCAM in all the multi-layer configurations is similar. However, when with small-scale data, introducing shallow layers will harm the CAM generation due to the unstable feature maps. In this paper, all the experiments are conducted using BroadCAM with the last two layers (L3+L4).

## 6 Conclusion

In this paper, we first raise the importance of small-scale weakly supervised semantic segmentation and object localization, and propose a novel outcome-agnostic CAM method, called BroadCAM.
To avoid the CAM weights being affected by unreliable outcomes due to unstable training, we introduce an independent broad learning system, which is friendly to small-scale data, to generate reliable CAM weights. Extensive quantitative and qualitative experiments on WSSS and WSOL tasks on three different datasets have proven the effectiveness and robustness of BroadCAM on both large-scale and small-scale data, whereas the existing CAM approaches are not able to achieve both. Two well-designed visualization experiments demonstrate that the outcome-agnostic design encourages BroadCAM to correctly activate more high class-relevance feature maps, resulting in less noisy and more complete CAM seeds. Moreover, the weights generated by BroadCAM are highly correlated with the relevance of the feature maps, which is the major reason why BroadCAM achieves more favorable results even when the training is unstable with small-scale data. Since this study is the first tailored for small-scale weakly supervised applications and BroadCAM is the first outcome-agnostic CAM approach, we believe there are many future directions to be explored. (1) The reliability of the feature maps. In this study, we only focus on how to avoid the CAM weights being affected by unreliable training outcomes. However, unstable training will also harm the robustness of the feature maps when generating CAM seeds. Therefore, how to improve the feature representation ability for small-scale data is equally important. (2) How to make good use of shallow layers. Although BroadCAM leverages multi-layer information to generate CAM seeds, it still mainly relies on the deeper layers with more semantic information. However, the activation of the feature maps from deeper layers is coarse and more abstract. To introduce the finer details of the object boundaries, making good use of shallow layers might be a feasible solution. As the first baseline model for small-scale weakly supervised applications, we hope BroadCAM can bring some insights to this field and encourage more studies to discover outstanding techniques that reduce the annotation efforts of semantic segmentation and object localization.

## Acknowledgments

This work was supported by the Key-Area Research and Development Program of Guangdong Province, China (No. 2021B0101420006), the Natural Science Foundation for Distinguished Young Scholars of Guangdong Province (No. 2023B1515020043), the National Science Foundation for Young Scientists of China (No. 62102103), the National Natural Science Foundation of China (No. 82372044, 82071892, 92267203 and 82271941), the Regional Innovation and Development Joint Fund of the National Natural Science Foundation of China (No. U22A20345), the High-level Hospital Construction Project (No. DFJHBF202105), the Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B121010011), the Science and Technology Major Project of Guangzhou (No. 202007030006), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2019ZT08X214), and the Key-Area Research and Development Program of Guangzhou City (No. 202206030007). The source code of BroadCAM is available at [https://github.com/linjataai/BroadCAM](https://github.com/linjataai/BroadCAM).
2309.09742
Drawing the Same Bounding Box Twice? Coping Noisy Annotations in Object Detection with Repeated Labels
The reliability of supervised machine learning systems depends on the accuracy and availability of ground truth labels. However, the process of human annotation, being prone to error, introduces the potential for noisy labels, which can impede the practicality of these systems. While training with noisy labels is a significant consideration, the reliability of test data is also crucial to ascertain the dependability of the results. A common approach to addressing this issue is repeated labeling, where multiple annotators label the same example, and their labels are combined to provide a better estimate of the true label. In this paper, we propose a novel localization algorithm that adapts well-established ground truth estimation methods for object detection and instance segmentation tasks. The key innovation of our method lies in its ability to transform combined localization and classification tasks into classification-only problems, thus enabling the application of techniques such as Expectation-Maximization (EM) or Majority Voting (MJV). Although our main focus is the aggregation of unique ground truth for test data, our algorithm also shows superior performance during training on the TexBiG dataset, surpassing both noisy label training and label aggregation using Weighted Boxes Fusion (WBF). Our experiments indicate that the benefits of repeated labels emerge under specific dataset and annotation configurations. The key factors appear to be (1) dataset complexity, the (2) annotator consistency, and (3) the given annotation budget constraints.
David Tschirschwitz, Christian Benz, Morris Florek, Henrik Norderhus, Benno Stein, Volker Rodehorst
2023-09-18T13:08:44Z
http://arxiv.org/abs/2309.09742v1
Drawing the Same Bounding Box Twice? Coping Noisy Annotations in Object Detection with Repeated Labels ###### Abstract The reliability of supervised machine learning systems depends on the accuracy and availability of ground truth labels. However, the process of human annotation, being prone to error, introduces the potential for noisy labels, which can impede the practicality of these systems. While training with noisy labels is a significant consideration, the reliability of test data is also crucial to ascertain the dependability of the results. A common approach to addressing this issue is repeated labeling, where multiple annotators label the same example, and their labels are combined to provide a better estimate of the true label. In this paper, we propose a novel localization algorithm that adapts well-established ground truth estimation methods for object detection and instance segmentation tasks. The key innovation of our method lies in its ability to transform combined localization and classification tasks into classification-only problems, thus enabling the application of techniques such as Expectation-Maximization (EM) or Majority Voting (MJV). Although our main focus is the aggregation of unique ground truth for test data, our algorithm also shows superior performance during training on the TexBiG dataset, surpassing both noisy label training and label aggregation using Weighted Boxes Fusion (WBF). Our experiments indicate that the benefits of repeated labels emerge under specific dataset and annotation configurations. The key factors appear to be (1) the dataset complexity, (2) the annotator consistency, and (3) the given annotation budget constraints. Keywords: Object Detection, Instance Segmentation, Robust Learning. ## 1 Introduction Data-driven machine learning systems are expected to operate effectively even under "difficult" and unforeseen circumstances. Consider safety-relevant domains such as autonomous driving, medical diagnosis, or structural health monitoring, where system failure puts lives at risk. Robust systems - those capable of reliable operation in unseen situations - may encounter several challenges, including domain shifts [25; 36; 16; 5], adversarial attacks [43; 42], degrading image quality [21; 7; 41] and noisy or uncertain labels [18; 9; 10; 37]. Past studies [33] indicate that noisy labels can cause more harm than the three aforementioned sources of input noise. Given this context, our study concentrates on addressing the issue of noisy labels, specifically within noisy test data. Without a unique ground truth, evaluation is unattainable. Therefore, to enhance robustness against label noise, it will be pivotal to first devise methods tailored towards annotation aggregation, which lays the groundwork for potential future integration with multi-annotator learning methods. The creation of annotated data for supervised learning is a costly endeavor, particularly in cases where experts such as medical professionals or domain experts are needed to annotate the data. To mitigate this issue, crowd-sourcing has emerged as a cost-effective means of generating large datasets, albeit with the disadvantage of potentially lower quality annotations that may contain label noise [30; 44; 47]. Although the reduced costs of crowd-sourced annotations often justify their use, deep neural networks have the capacity to memorize noisy labels as special cases, leading to declining performance and overfitting towards the noisy labeled data [44].
Notably, even expert-annotated data is susceptible to label noise, given the difficulty of the data to annotate. A survey by Song et al. [33] revealed that the number of corrupt labels in real-world datasets ranges from 8.0% to 38.5%. The authors demonstrate that reducing label noise and creating cleaned data can improve the accuracy of models. To address the issue of noisy labels, an approach known as "repeated labeling" has been proposed. Repeated labeling means obtaining annotations from multiple annotators/coders for the same data entry, such as an image.
Figure 1: Comparison between different ground truth aggregation methods, shown on an example from the VinDr-CXR dataset [23]. Left: the original image with the repeated labels indicated by the different line types. Right: the four smaller images, from top left to bottom right, are MJV+\(\cap\), LAEM+\(\mu\), LAEM+\(\cup\) and WBF.
More specifically: For a set of images \(\{x_{i}\}_{i=1}^{N}\) multiple annotators create noisy labels \(\{\tilde{y}_{i}^{r}\}_{i=1,\ldots,N}^{r=1,\ldots,R}\), with \(\tilde{y}_{i}^{r}\) being the label assigned by annotator \(r\) to image \(x_{i}\), but without a ground truth label \(\{y_{i}\}_{i=1,\ldots,N}\) [34]. Methods for mitigating the negative effect of label noise via repeated labeling can be divided into two categories [18; 34]: (a) _two-stage_ approaches [39; 8] and (b) _one-stage_ or _simultaneous_ approaches [15; 12; 11]. Two-stage approaches aim to approximate the ground truth prior to training, a process known as ground truth estimation or ground truth inference [45], as depicted in Figure 2; a straightforward approach is to compute a majority vote. Following label aggregation, the model is trained in a regular fashion. Two-stage approaches offer the benefit of being compatible with commonly used model architectures. On the other hand, simultaneous approaches attempt to integrate repeated labels directly into the training process. In any case, the primary objective of both strategies is to achieve robust and accurate results by leveraging the repeated labeled data to the fullest extent possible. Doing so is crucial to justify the additional annotation efforts. Lastly, to enable the use of established performance metrics, such as those employed in the COCO object detection dataset (mAP) [20], a ground truth estimation step is essential for the validation and test sets. While simultaneous approaches can more effectively utilize repeated labels, they are not intended to execute the necessary aggregation step required to generate the unique ground truth estimate [34]. Consequently, reliable approximation methods are indispensable for evaluation purposes. Object detection and instance segmentation require both localization and classification, which means that existing methods for repeated labels that are used for classification tasks such as image classification or named entity recognition are not applicable [28]. That is, the available selection of ground truth inference methods is limited. Furthermore, the creation of bounding box or polygonal annotations is expensive [10] and reduces the number of datasets with repeated labels available for evaluating ground truth inference methods [35; 23].
Figure 2: Left: Original input image featuring three separate annotations by distinct annotators. Center: Application of the LAEM aggregation method to the three annotations, yielding an approximate ground truth. Right: Aggregated ground truth utilized during the training process.
However, we deliberately avoid using synthetic data and focus on real datasets. Our contributions are as follows: 1. We propose a localization algorithm that enables the use of existing ground truth estimation methods such as majority voting or expectation maximization for instance-based recognition tasks, and we evaluate it extensively against existing methods [32, 18]. 2. We introduce a comparative analysis of ground truth inference methods that highlights their properties and limits. 3. We conduct ablation studies to analyze the costs associated with creating repeated annotations, and what to do when the amount of available annotated data is limited. 4. We introduce an extension for the TexBiG dataset [35] in the form of a test subset, wherein each of the 200 test images has been annotated by five expert annotators. Utilizing our aggregation method, we establish a unique approximation of the ground truth, which will serve as the unknown reference standard on an evaluation server. This approach allows the TexBiG dataset to be used for evaluation of robust learning methods addressing the challenge of noisy labels. Once released, the link to the evaluation server will be posted on the GitHub repository where the code is hosted: [https://github.com/Madave94/gtiod](https://github.com/Madave94/gtiod). ## 2 Related Work To approximate the ground truth, estimation methods make assumptions about the data and task properties as well as the annotation process. Majority Voting (MJV) [14, 29, 26] assumes correct labels for the majority of training samples and aggregates the labels accordingly: \[\hat{y}_{i}=\begin{cases}1&\text{if }(1/R)\sum_{r=1}^{R}\tilde{y}_{i}^{r}>0.5\\ 0&\text{if }(1/R)\sum_{r=1}^{R}\tilde{y}_{i}^{r}<0.5\end{cases} \tag{1}\] In case of a tie, the label is chosen randomly between the tied ones or selected by a super-annotator. On data with high inter-annotator agreement, majority voting can be a straightforward approach to obtain ground truth estimates of reasonable quality. Numerous methods for inferring ground truth rely on the Expectation-Maximization (EM) approach, first introduced by Dawid and Skene [8]. This approach estimates annotator confidence and integrates it into a weighted voting procedure for determining the true label. By considering annotator performance, these methods address the limitations of majority voting, thereby avoiding potential outliers. One notable advancement in this area is the GLAD [39] method, which not only attempts to identify the most probable class but also assesses image difficulty, in addition to the annotator confidence. It should be noted, however, that this approach is limited to binary classification tasks [31]. In addition to classification tasks, pixel-wise classification (semantic segmentation) also has existing ground truth inference methods, such as STAPLE [38], SIMPLE [17], and COLLATE [1]. Recent developments in this field have led to approaches that incorporate data difficulty into the estimation process, as seen in a newly developed simultaneous method [11]. Although there are numerous variations of ground truth estimation methods for classification and segmentation tasks, this discussion will focus on methods applicable to object detection and instance segmentation, rather than diving deeper into this area. For instance-based recognition tasks like object detection and instance segmentation, there is an additional issue to consider: the localization step.
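Before turning to localization, note that the classification-only rule of Eq. (1) is easy to state in code. The following is a minimal sketch of multi-class majority voting with the randomized tie-breaking mentioned above; the function name and multi-class generalization are our own illustration, not code from any of the cited works.

```python
import random
from collections import Counter

def majority_vote(labels, rng=random):
    """Aggregate one instance's class labels from R annotators.

    Multi-class generalization of Eq. (1): the most frequent class
    wins; ties are broken at random, mirroring the randomized
    tie-breaking described in the text.
    """
    counts = Counter(labels)
    best_count = max(counts.values())
    tied = [c for c, n in counts.items() if n == best_count]
    return tied[0] if len(tied) == 1 else rng.choice(tied)

# e.g. majority_vote(["table", "figure", "table"]) -> "table"
```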
During training, methods consisting of a combination of thresholds and non-maximum suppression are used to solve the localization problem and then focus on classification accuracy. While this may work during training, repeated labeling is likely to have more than just a prediction and ground truth pair to match, since multiple annotators might have created multiple labels. Hence, existing methods are not applicable. An existing approach to aggregate annotations for object detection is called Weighted Boxes Fusion (WBF) [32, 18], which was used for the VinDr-CXR dataset [23]. WBF focuses on the weighted aggregation within each class, ignoring inter-class disagreements and also not discarding any annotations even with low agreement. This is beneficial in cases where missing a possible case is far more severe than finding too many, such as in tasks that require high recall. Apart from this single existing instance-based recognition approach, we are not aware of any other aggregation methods for object detection or instance segmentation. ## 3 Method In the following section we introduce a novel adaptation of the EM algorithm, _localization-aware expectation maximization_ (LAEM), for instance-based recognition tasks. The same localization algorithm can also be used with majority voting, which therefore functions as a baseline. Additionally, we expand the existing weighted boxes fusion technique to encompass weighted mask fusion, which enables its use in instance segmentation and facilitates benchmarking on a broader range of datasets. As extending weighted boxes fusion is not our core contribution, it can be found in Appendix 1. ### Localization-Aware Expectation-Maximization Our novel approach adds a localization stage to existing methods like majority voting and expectation maximization, enabling the use of these established methods for instance-based recognition tasks. Thus, the proposed label aggregation process consists of two stages: (1) _localization-stage_ and (2) _classification-stage_. Assume that \(R\) annotators have created noisy instance-based labels \(\tilde{y}_{ij}^{r}\) for image \(x_{i}\). Subscript \(j=0,...,M_{r}\) refers to the single instances annotated by annotator \(r\) for image \(x_{i}\); \(M_{r}\) can be zero if no instances were labeled by \(r\). Each instance contains a class \(c\in C\) denoted \(\tilde{y}_{ijc}^{r}\). Furthermore, \(\tilde{y}_{ijb}^{r}\) refers to the respective bounding box and \(\tilde{y}_{ijs}^{r}\) to the optional pixel-wise segmentation mask.
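As a concrete reference for the matching criterion used in the localization stage (the generalized intersection over union of Eq. (2) below), here is a hedged sketch for a tuple of axis-aligned boxes. It assumes the shapely geometry library and (x1, y1, x2, y2) box coordinates; the function name and conventions are ours, not from the paper's codebase.

```python
from shapely.geometry import box
from shapely.ops import unary_union

def generalized_iou(bboxes):
    """IoU of R axis-aligned boxes: the area common to all boxes
    divided by the area covered by any of them."""
    polys = [box(x1, y1, x2, y2) for (x1, y1, x2, y2) in bboxes]
    inter = polys[0]
    for p in polys[1:]:
        inter = inter.intersection(p)  # intersection of all R boxes
    union = unary_union(polys)         # union of all R boxes
    return inter.area / union.area if union.area > 0 else 0.0
```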
```
Require: \(X=\{x_{i}\}_{i=1,\dots,N}\)  \(\triangleright\) Set of images
Require: \(\tilde{Y}=\{\tilde{Y}_{i}\}_{i=1,\dots,N}\)  \(\triangleright\) Set of noisy labels per image
Require: \(S=\{S_{i}\}_{i=1,\dots,N}\)  \(\triangleright\) Set of annotators per image
Require: \(\theta\)  \(\triangleright\) IoU threshold
for \(i\in X\) do
    \(\tilde{Y}_{i}=\{\tilde{y}_{i1}^{1},\tilde{y}_{i2}^{1},\dots,\tilde{y}_{iM_{1}}^{1},\tilde{y}_{i1}^{2},\tilde{y}_{i2}^{2},\dots,\tilde{y}_{iM_{2}}^{2},\dots,\tilde{y}_{i1}^{R},\tilde{y}_{i2}^{R},\dots,\tilde{y}_{iM_{R}}^{R}\}\)
    \(\tilde{Y}_{i}^{\text{LAEM}}=\emptyset\)
    \(Q=\{U_{k}\,|\,U_{k}\in\mathcal{P}(S_{i})\wedge|U_{k}|\geqslant|U_{k+1}|\wedge\lceil|S_{i}|/2\rceil\leqslant|U_{k}|\}\)  \(\triangleright\) Ordered set of annotator combinations
    for \(U\in Q\) do  \(\triangleright\) Loop over annotator combinations
        \(L=\{\tilde{Y}_{i}^{k_{1}}\times\dots\times\tilde{Y}_{i}^{k_{n}}\,|\,k_{1},\dots,k_{n}\in U\wedge n=|U|\}\)  \(\triangleright\) Possible combinations of labels
        \(F=\{u_{k}\,|\,u_{k}\in L\wedge\theta\leqslant IoU(u_{k})\wedge IoU(u_{k})\geqslant IoU(u_{k+1})\}\)  \(\triangleright\) Filtered and ordered L
        for \(u_{k}\in F\) do
            \(K=\{k_{1},k_{2},\dots,k_{n}\}\)
            if \(K\subseteq\tilde{Y}_{i}\) then  \(\triangleright\) Check for label availability
                \(\tilde{Y}_{i}=\tilde{Y}_{i}\backslash K\)  \(\triangleright\) Remove labels from available labels
                \(\tilde{Y}_{i}^{\text{LAEM}}=\tilde{Y}_{i}^{\text{LAEM}}\cup aggregate(K)\)  \(\triangleright\) Add aggregated label to accepted labels
            end if
        end for
    end for
end for
```
**Algorithm 1** Outline of the localization algorithm used for LAEM
Algorithm 1 outlines the LAEM approach. The algorithm requires the image set \(X\), a set of noisy labels \(\tilde{Y}\), a set of annotators \(S\), and a threshold \(\theta\). Looping over the images of the dataset, the power set \(\mathcal{P}(S_{i})\) over the annotators is computed. Subsets containing less than half the number of annotators are removed and a descending order is enforced onto the set. It subsequently iterates through the remaining ordered subsets of annotators and computes the Cartesian product between the respective annotators' label sets. Each tuple is then filtered according to threshold \(\theta\) and ordered based on the intersection over union in its generalized form: \[IoU=\frac{\bigcap_{r=1}^{R}\tilde{y}_{ijb}^{r}}{\bigcup_{r=1}^{R}\tilde{y}_{ijb}^{r}} \tag{2}\] The remaining set of tuples ordered by descending IoU forms the set of candidate solutions \(F\). In case all labels from a candidate tuple are still available, they are aggregated according to an aggregation function and added to the inferred solutions \(\tilde{Y}_{i}^{\text{LAEM}}\). The aggregation function comprises two steps: (1) all classes contained in the original tuple \(\tilde{y}_{ijc}^{r}\) are appended to a list serving as input for expectation maximization or majority voting and (2) the areas of the different candidates \(\tilde{y}_{ijb}^{r}\) are combined according to the union, intersection, or average area of all boxes involved. The average operation is based on the WBF algorithm [32] with uniform weights. If available, the same procedure (for details cf. Appendix 1) is applied to the segmentation masks \(\tilde{y}_{ijs}^{r}\). This concludes the localization stage. In the subsequent classification stage, existing ground truth inference methods such as majority voting or expectation maximization [8] can be applied. ### Algorithmic Design Choices Our algorithm is designed in a divide-and-conquer manner.
Firstly, we prioritize localization, effectively reducing the problem to a classification task for each matched instance after localization. This strategy consequently facilitates the application of established methods for ground truth inference developed in other fields. We always prefer a localization match with more annotators, to maximize consensus. If a localization match involving all available annotators cannot be found given the threshold value \(\theta\), we successively reduce the subset size so that the next largest number of annotators is preferred. This approach first guarantees localization quality, and only upon establishing matched areas based on their localizations do we aggregate the classes. The algorithm is parameterized by the threshold value \(\theta\), which can be adjusted to enforce stricter localization quality and also control the order in which instances are matched. Though this heuristic solution may not provide an optimal outcome for larger problem sizes (e.g., numerous instances on a single image), when an image exhibits high agreement among annotators, a consensus area can be aggregated, and the class of this area can be unambiguously determined. One advantage of the Expectation-Maximization (EM) approach is that assignment is unambiguous: the confidence calculated during the EM algorithm serves as a tie-breaker, a benefit not present with Majority Voting (MJV). Furthermore, fitting the EM algorithm is efficient; following localization matching, no further areas are calculated, and only the solutions \(\tilde{Y}_{i}^{\text{LAEM}}\) are considered along with their classes. While localization fusion functions, such as union or intersection, are available and applicable for training data, the intended use for test data within the context of LAEM (Localization-Aware Expectation-Maximization) primarily involves the averaging fusion function. This approach enables a balanced aggregation of areas across different annotators. Additionally, this method is also utilized to aggregate the test data as required for the TexBiG dataset [35]. ### Comparative Analysis In Table 1, we present a comparative analysis of the four available ground truth inference methods for instance-based recognition tasks, distinguished by their respective characteristics and properties. Each method is described based on eight distinct features relevant to ground truth estimation methods. A noteworthy difference between LAEM and MJV, as compared to WBF and its adaptation (detailed in Appendix 2), is the handling of instances that lack consensus among annotators. Figure 1 and Appendix 3 illustrate the aggregation processes of MJV, LAEM, and WBF on a few specific images, serving as practical examples. These illustrations reveal that MJV and LAEM tend to find consensus instances, resulting in a final image that appears as if a single annotator had labelled the image. In contrast, the WBF image is relatively cluttered with overlapping instances. This discrepancy arises because WBF merges areas of the same class that significantly overlap but does not discard any annotation. This also holds for instances where two annotators found the same instance area but disagreed on the class, resulting in more instances overall. Although this property might be beneficial for a high-recall scenario - where missing an instance is more detrimental than detecting multiple false positives - it is not ideal for many applications.
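To make the classification stage concrete, the sketch below implements Dawid–Skene-style expectation maximization [8] over the class labels of matched instances. The update equations follow Dawid and Skene; the array layout, initialization, and the complete-data assumption (every annotator labels every matched instance) are our own simplifications for illustration.

```python
import numpy as np

def dawid_skene(votes, n_classes, n_iter=50):
    """Minimal Dawid-Skene EM sketch for the classification stage.

    votes: integer array of shape (n_items, n_annotators) holding
    class indices. Returns posterior class probabilities per item.
    """
    n_items, n_ann = votes.shape
    # Initialize soft labels T from vote frequencies (majority vote).
    T = np.zeros((n_items, n_classes))
    for r in range(n_ann):
        T[np.arange(n_items), votes[:, r]] += 1.0
    T /= T.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: class prior and per-annotator confusion matrices.
        prior = T.mean(axis=0) + 1e-12
        conf = np.full((n_ann, n_classes, n_classes), 1e-6)
        for r in range(n_ann):
            for i in range(n_items):
                # conf[r, j, k]: prob. annotator r says k given class j.
                conf[r, :, votes[i, r]] += T[i]
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: recompute soft labels given annotator reliabilities.
        logT = np.tile(np.log(prior), (n_items, 1))
        for r in range(n_ann):
            logT += np.log(conf[r][:, votes[:, r]]).T
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T
```

The learned confusion matrices play the role of the annotator confidence mentioned above, and the per-item posteriors break ties that majority voting would have to resolve randomly.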
It is important to note that none of the current methods incorporate data dependence, a feature described in several state-of-the-art ground truth estimation methods for semantic segmentation [17, 38, 1]. ## 4 Experimental Results In our preliminary experiment, we scrutinized the influence of the annotation budget size by exploring scenarios in which repeated labels might be preferred over single labels. This ablation study was designed to determine the optimal use of a restricted annotation budget, i.e., whether it is more beneficial to train with a larger volume of noisy labels or a reduced set of refined labels. This experimental analysis was conducted using two separate datasets. In our subsequent investigation, we assessed the effect of annotator selection on model performance by deliberately excluding certain annotators from the labeling process. This enabled us to emulate the potential impact of annotator selection and to probe the influence of proficient and suboptimal annotators on model output. Our final experiment, which is detailed in Appendix 2, was actually conducted first, since it influenced the choice of the aggregation method used for the training data. However, this experiment is not the main focus of this publication. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Methods & LAEM & MJV & WBF & WBF+EM \\ \hline Assignment & 1) Localization & 1) Localization & Localization & Localization \\ & 2) Classification & 2) Classification & only & only \\ \hline Low agreement & Discard annotation & Discard & Keep & Keep \\ & & annotation & annotation & annotation \\ \hline Edge cases & Use confidence & Randomized & \(-\) & \(-\) \\ \hline Localization fusion & Union / average / & Union / average / & Averaging & Weighted \\ & intersection & intersection & & averaging \\ \hline Annotator confidence & \(\checkmark\) & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline Handling missing data & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline Dataset characteristic & Balanced & Precision & Recall oriented & Recall oriented \\ & & & oriented & \\ \hline Data dependence & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison table for the characteristics and properties of the different ground truth inference methods. MJV and LAEM both use the novel localization algorithm. ### Set-Up To the best of our knowledge, there are only two datasets available that contain repeated labels for object detection and instance segmentation, respectively: the VinDr-CXR [23, 18] dataset and the TexBiG [35] dataset. We focus solely on these two datasets and do not make use of any synthetic data. **VinDr-CXR dataset.** This dataset comprises 15,000 training and 3,000 test chest X-ray images, annotated by three and five annotators, respectively. With a total of 36,096 instances in the training dataset, the dataset can be considered sparsely annotated, with on average 0.8 instances per image1. The dataset consists of 14 different classes that were annotated by 17 radiologists [22]. Using the agreement evaluation method presented in [35] describing the data quality, the K-\(\alpha\) (Krippendorff's alpha) is 0.79. However, since only 29.3% of the images in the dataset contain any annotations at all, another K-\(\alpha\) was calculated for this reduced subset, resulting in a K-\(\alpha\) value of 0.29.
This indicates that while annotators largely agree in cases where no anomaly is present, there is significant disagreement in cases that contain instances. Footnote 1: Computed by dividing the number of instances by the product of the number of images and the number of annotators. **TexBiG dataset.** Recently published [35], the TexBiG dataset provides labels for document layout analysis, similar to the PubLayNet [46] or DocBank [19] datasets. It covers 19 classes for complex document layouts in historical documents from a specific time period, and in the version used here, the training data contains 44,121 instances, the validation data 8,251 instances and the test data 6,678 instances. While the total number of instances is larger than in the VinDr-CXR dataset, there are only 2,457 images in total: 1,922 in the training set, 335 in the validation set and 200 in the test set. Due to the iterative creation process of the dataset, the number of repeated labels differs between samples. A per-image agreement value was used to decide which samples were to be annotated again. Two annotators were assigned to each image, and in case the agreement value was low after the first iteration, an additional annotator was added to that specific sample; this was repeated up to a maximum of four annotators per sample. In the combined validation and training set, 34 images were annotated by 4 annotators, 336 by at least 3 annotators (including the 34 from before), and 2,257 by at least 2 annotators. We created an additional test set with 5 annotators for 200 newly selected images from the same domain, in accordance with the guideline provided by the authors [35]. We plan to publish this test set for benchmarking purposes on an evaluation server. The TexBiG dataset is more densely annotated, with 10.7 instances per image, which is 13 times more than the VinDr-CXR dataset. Comparing the two datasets, we find that they represent two opposing marginal cases: one dataset has high agreement and dense annotations, while the other has low agreement and sparse annotations. However, a more balanced dataset is missing. **Architecture choice.** Regarding the architecture choice, we aimed for a well-performing and stable model rather than state-of-the-art results, since we wanted to focus on comparing the ground truth inference methods and ablation studies on different tasks. For the VinDr-CXR dataset, we tested various architectures including different anchor-based two-stage detectors like Faster R-CNN [27], Cascade R-CNN [2] and Double Head R-CNN [40], and additionally, the transformer-based Detection Transformer (DETR) [3]. After thorough investigation, we found that the Double Head R-CNN performs stably and with reasonable results. Therefore, we selected this architecture for our experiments. On the TexBiG dataset, we tried several instance segmentation models like Mask R-CNN [13], Cascade Mask R-CNN [2] and DetectoRS [24], as well as the Mask2Former [6] as a transformer-based architecture. In this case, DetectoRS yielded the most stable performance, and we continued our experiments with this model. We extended MMDetection [4] with our implementation, and the code is available on GitHub under [https://github.com/Madave94/gtiod](https://github.com/Madave94/gtiod).
### Annotation Budget Ablation When working on a deep learning project, data is often a limiting factor, and resources must be carefully allocated to create task-specific data. Even when there are ample resources, maximizing the value of those resources is important. In the context of human-annotated data, this concept is referred to as the "annotation budget", which represents the available number of images or instances that can be labeled by a pool of annotators within their available time. The question then becomes: "How can a limited annotation budget be best utilized?" One approach is to prioritize annotating as many different images as possible to cover a broad range of cases within the application domain. However, this approach comes with the risk of introducing more noisy labels due to the inherent variability in annotator performance. Alternatively, creating repeated labels may be more beneficial to improve the quality of the annotations. Ultimately, the decision between prioritizing _quantity versus quality_ of labels must be carefully weighed and considered in the context of the project goals and available resources. In the two ablation studies presented in Tables 2 and 3, we compare the performance of different annotation budgets. The splits used in the studies represent three different cases: (1) Only single annotated labels are available, which are more prone to label noise. (2) A mix of repeated labels and single annotated labels is available; multiple splits may have this property. (3) Maximum label repetition, where no or very few single annotated labels are available, resulting in significantly less training data. To reduce randomization effects, we create five different random versions for each split and compute their mean and maximum results. Our results show that the TexBiG dataset quickly reached a data saturation point, suggesting potential benefits from employing multi-annotator learning methods to better utilize repeated labels. Conversely, the VinDr-CXR dataset showed improved performance with higher budgets, indicating that more data helps performance in scenarios with noisy, low-agreement labels. Both datasets demonstrate that moderate inclusion of repeated labels does not adversely impact performance, with mixed splits achieving peak results at their lowest budgets. These findings highlight the value of repeated annotations, which not only increase label reliability, but also allow for efficient use of multi-annotator learning methods. Remarkably, the opportunity costs for creating such repeated labels seem negligible. Our findings suggest that higher fragmentation in annotator splits could lead to reduced performance, possibly because less fragmented splits benefit from enhanced intra-coder consistency. Moreover, the influence of the split distribution appears prominent only when the annotation budget is limited. Identifying a systematic relationship between split distribution and performance, thereby suggesting optimal splits before the annotation process, could be a promising future research direction. The overall takeaway is that multiple annotations may not always yield significant advantages, yet in scenarios with a constrained annotation budget, they could prove beneficial. Determining which cases fall into each category remains an open challenge.
\begin{table} \begin{tabular}{r|c c|c c|c c} \hline \hline \multirow{2}{*}{Split} & \multicolumn{2}{c|}{Budget} & \multicolumn{2}{c|}{Averaged} & \multicolumn{2}{c}{Maximum} \\ & rel. & abs. & AP & AP\({}^{bb}\) & AP & AP\({}^{bb}\) \\ \hline 1922\(\times\)2 & 100\% & 3844 & 41.9 & 47.5 & 43.3 & 48.7 \\ \hline 966\(\times\)2 & \multirow{2}{*}{75\%} & \multirow{2}{*}{2883} & \multirow{2}{*}{42.4} & \multirow{2}{*}{47.9} & \multirow{2}{*}{42.8} & \multirow{2}{*}{48.4} \\ 966\(\times\)1 & & & & & & \\ \hline 1922\(\times\)1 & 50\% & 1922 & **42.7** & **48.3** & 43.4 & **48.8** \\ \hline 641\(\times\)2 & \multirow{2}{*}{50\%} & \multirow{2}{*}{1922} & \multirow{2}{*}{42.4} & \multirow{2}{*}{47.7} & \multirow{2}{*}{**43.9**} & \multirow{2}{*}{**48.8**} \\ 640\(\times\)1 & & & & & & \\ \hline 966\(\times\)2 & 50\% & 1922 & 41.1 & 46.2 & 42.8 & 47.8 \\ \hline 30\(\times\)4 & \multirow{2}{*}{50\%} & \multirow{2}{*}{1922} & \multirow{2}{*}{41.9} & \multirow{2}{*}{47.3} & \multirow{2}{*}{43.0} & \multirow{2}{*}{48.2} \\ 243\(\times\)3 & & & & & & \\ \hline 30\(\times\)4 & \multirow{4}{*}{50\%} & \multirow{4}{*}{1922} & \multirow{4}{*}{39.7} & \multirow{4}{*}{45.0} & \multirow{4}{*}{41.0} & \multirow{4}{*}{46.5} \\ 243\(\times\)3 & & & & & & \\ 536\(\times\)2 & & & & & & \\ 1\(\times\)1 & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on the TexBiG dataset using a limited annotation budget. The results, in \(mAP@[.5:.95]\), show that multi-annotator learning methods are required to justify repeated labels. However, even without multi-annotator methods the performance loss using repeated annotations is marginal. \begin{table} \begin{tabular}{r|c c|c|c} \hline \hline \multirow{2}{*}{Split} & \multicolumn{2}{c|}{Budget} & Avg. & Max. \\ & rel. & abs. & AP & AP \\ \hline 15,000\(\times\)2 & 66.6\% & 30,000 & 14.8 & 15.0 \\ \hline 10,000\(\times\)3 & 66.6\% & 30,000 & 14.7 & 14.9 \\ \hline 15,000\(\times\)1 & 33.3\% & 15,000 & 13.4 & 13.9 \\ \hline 10,000\(\times\)1 & \multirow{2}{*}{33.3\%} & \multirow{2}{*}{15,000} & \multirow{2}{*}{**13.6**} & \multirow{2}{*}{14.1} \\ 2,500\(\times\)2 & & & & \\ \hline 7,500\(\times\)2 & 33.3\% & 15,000 & 13.5 & 13.8 \\ \hline 3,000\(\times\)3 & \multirow{2}{*}{33.3\%} & \multirow{2}{*}{15,000} & \multirow{2}{*}{**13.6**} & \multirow{2}{*}{**14.3**} \\ 3,000\(\times\)2 & & & & \\ \hline 5,000\(\times\)3 & 33.3\% & 15,000 & 13.4 & 14.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on the VinDr-CXR dataset using a limited annotation budget. The results are in \(mAP_{40}\) as provided by the leaderboard. ### Leave-One-Out Annotator Selection Table 4 displays the results of a final experiment conducted on the TexBiG dataset. To create four groups of annotators, each group consisting of one to three individuals, annotations were distributed unevenly among them, resulting in groups of different sizes. Subsequently, each group was left out of the training process, while the remaining three groups were used to train the model. This approach led to a smaller training set. Surprisingly, the experiment showed that when the largest group, denoted as **B**, was excluded, leaving only 61.6% of the annotations available, the model's performance reached its peak. This outcome underscores the importance of selecting precise annotators in the training process, since less precise ones may introduce noisy labels that can hinder performance.
However, it is challenging to identify precise annotators before the annotation process, as there is no data available to determine their level of precision.
\begin{table} \begin{tabular}{l|r r|r r|r r} \hline \multirow{2}{*}{Left out group} & \multicolumn{2}{c|}{Left out images} & \multicolumn{2}{c|}{Left out annotations} & \multicolumn{2}{c}{Performance} \\ & rel. & abs. & rel. & abs. & AP & AP\({}^{bb}\) \\ \hline Group A & 25.1\% & 1,040 & 26.8\% & 11,810 & 42.4 & 47.1 \\ Group B & 29.5\% & 1,225 & 38.4\% & 16,932 & **44.1** & **49.8** \\ Group C & 25.7\% & 1,067 & 18.2\% & 8,017 & 42.6 & 48.1 \\ Group D & 19.7\% & 815 & 16.7\% & 7,362 & 43.1 & 48.3 \\ \hline \end{tabular} \end{table} Table 4: Choosing the right annotator: what happens to the results when one group of annotators is left out of training? Splits are unequal due to the annotation distribution.
## 5 Conclusion Our results indicate the potential benefits of repeated labels, which seem to be contingent on several factors. The identified key factors are the balance between (1) the complexity or variation in the dataset and its corresponding task difficulty, (2) the variability in annotation depending on inter-annotator consistency and annotator proficiency, and (3) the constraints of the annotation budget. This interaction suggests the existence of an 'optimal range' for an image annotation strategy. For instance, datasets with high variance and low annotator consistency may benefit from multiple annotations per image, while in cases with low image variation and high annotator consistency, many images annotated once might suffice. This balancing act between data and annotation variation could guide decisions when choosing between single or multiple annotators per image, given a fixed annotation budget. However, the utility of repeated labels is substantially hampered by the lack of multi-annotator learning approaches for object detection and instance segmentation. Thus, future work should concentrate on developing methods that bridge this gap between these areas and other computer vision domains like image classification or semantic segmentation. Lastly, a significant challenge remains regarding the availability of suitable datasets. With limited datasets in the domain and disparities among them, our findings' generalizability remains constrained to the two domains covered in this study. A larger dataset with repeated labels and balanced agreement would be valuable for future research. Synthetic data could be beneficial but poses the risk that models trained on these data may only learn the distribution used to randomly create repeated labels from the original annotations. Thus, creating a suitable dataset remains a formidable task. #### Acknowledgments. This work was supported by the Thuringian Ministry for Economy, Science and Digital Society / Thuringer Aufbaubank (TMWWDG / TAB).
2309.06471
Transient Corotating Clumps Around Adolescent Low-Mass Stars From Four Years of TESS
Complex periodic variables (CPVs) are stars that exhibit highly structured and periodic optical light curves. Previous studies have indicated that these stars are typically disk-free pre-main-sequence M dwarfs with rotation periods ranging from 0.2 to 2 days. To advance our understanding of these enigmatic objects, we conducted a blind search using TESS 2-minute data of 65,760 K and M dwarfs with $T$<16 mag and $d$<150 pc. We found 50 high-quality CPVs, and subsequently determined that most are members of stellar associations. Among the new discoveries are the brightest ($T$$\approx$9.5 mag), closest ($d$$\approx$20 pc), and oldest ($\approx$200 Myr) CPVs known. One exceptional object, LP 12-502, exhibited up to eight flux dips per cycle. Some of these dips coexisted with slightly different periods, and the shortest-duration dips precisely matched the expected timescale for transiting small bodies at the corotation radius. Broadly, our search confirms that CPVs are mostly young ($\lesssim$150 Myr) and low-mass ($\lesssim$0.4 $M_\odot$). The flux dips characteristic of the class have lifetimes of $\approx$100 cycles, although stellar flares seem to induce sudden dip collapse once every few months. The most plausible explanation for these phenomena remains corotating concentrations of gas or dust. The gas or dust is probably entrained by the star's magnetic field, and the sharp features could result from a multipolar field topology, a hypothesis supported by correspondences between the light curves of CPVs and of rapidly rotating B stars known to have multipolar magnetic fields.
Luke G. Bouma, Rahul Jayaraman, Saul Rappaport, Luisa M. Rebull, Lynne A. Hillenbrand, Joshua N. Winn, Alexandre David-Uraz, Gáspár Á. Bakos
2023-09-12T18:00:01Z
http://arxiv.org/abs/2309.06471v2
# Transient Corotating Clumps Around Adolescent Low-Mass Stars From Four Years of TESS ###### Abstract Complex periodic variables (CPVs) are stars that exhibit highly structured and periodic optical light curves. Previous studies have indicated that these stars are typically disk-free pre-main-sequence M dwarfs with rotation periods ranging from 0.2 to 2 days. To advance our understanding of these enigmatic objects, we conducted a blind search using TESS 2-minute data of 65,760 K and M dwarfs with \(T\)\(<\)16 mag and \(d\)\(<\)150 pc. We found 50 high-quality CPVs, and subsequently determined that most are members of stellar associations. Among the new discoveries are the brightest (\(T\)\(\approx\)9.5 mag), closest (\(d\)\(\approx\)20 pc), and oldest (\(\approx\)200 Myr) CPVs known. One exceptional object, LP 12-502, exhibited up to eight flux dips per cycle. Some of these dips coexisted with slightly different periods, and the shortest-duration dips precisely matched the expected timescale for transiting small bodies at the corotation radius. Broadly, our search confirms that CPVs are mostly young (\(\lesssim\)150 Myr) and low-mass (\(\lesssim\)0.4 \(M_{\odot}\)). The flux dips characteristic of the class have lifetimes of \(\approx\)100 cycles, although stellar flares seem to induce sudden dip collapse once every few months. The most plausible explanation for these phenomena remains corotating concentrations of gas or dust. The gas or dust is probably entrained by the star's magnetic field, and the sharp features could result from a multipolar field topology, a hypothesis supported by correspondences between the light curves of CPVs and of rapidly rotating B stars known to have multipolar magnetic fields. Weak-line T Tauri stars (1795), Periodic variable stars (1213), Circumstellar matter (241), Star clusters (1567), Stellar magnetic fields (1610), Stellar rotation (1629) Footnote †: 51 Pegasi b Fellow ## 1 Introduction All young stars vary in optical brightness, and the origin of such variability is, in most cases, understood. Well-explored sources of optical variability include inhomogeneities on stellar surfaces such as starspots and faculae (e.g. Basri, 2021), occultations by circumstellar disks (e.g. Bodman et al., 2017), and, in geometrically favorable circumstances, eclipses by stars and planets (e.g. Rizzuto et al., 2020). More exotic sources of optical variability that are potentially relevant to this work include transiting exocomets (e.g. \(\beta\) Pic; Zieba et al., 2019), disintegrating rocky bodies (e.g. KOI-2700; Rappaport et al., 2014), and occultations by circumstellar plasma clumps (e.g. \(\sigma\) Ori E; Townsend et al., 2005; Townsend and Owocki, 2005). Data from K2 (Howell et al., 2014) and TESS (Ricker et al., 2015) have revealed a new class of variable star for which the root cause of variability is only beginning to become clear: complex periodic variables (CPVs). These objects are identified from their optical light curves, which show nearly periodic troughs that are either sharp or broad; these troughs are often superposed on quasi-sinusoidal spot-like modulation (Stauffer et al., 2017, 2018; Zhan et al., 2019). Some CPVs show up to eight dips per cycle. Most CPVs are pre-main-sequence M dwarfs with ages of \(\approx\)5-150 million years (Myr), and rotation periods of 0.2-2 days. They are observed to comprise \(\approx\)1-3% of M dwarfs younger than 100 Myr (Rebull et al., 2016; Gunther et al., 2022).
They generally do not show near-infrared excesses indicative of dusty disks, but the wavelength-dependent dip amplitudes of some CPVs are consistent with reddening by dust (Onitsuka et al., 2017; Bouma et al., 2020; Gunther et al., 2022; Koen, 2023). The dip amplitudes and phases usually evolve gradually over tens to hundreds of cycles, although they have occasionally been observed to change abruptly within one cycle (e.g. Stauffer et al., 2017; Palumbo et al., 2022; Popinchalk et al., 2023). The sharp features of CPV light curves can have durations as short as 5% of the rotation period (\(P_{\rm rot}\)), which is too short to be caused by starspots rotating into and out of view. Starspots produce flux variations with characteristic timescales of \(P_{\rm rot}\) and 0.5 \(P_{\rm rot}\). With finely-tuned viewing geometries, starspots can produce dip durations as short as \(\approx\)0.2\(P_{\rm rot}\), but in such cases, limb darkening causes the dip amplitudes to be smaller than the observed amplitudes of \(\sim\)1% (see Stauffer et al., 2017, Figures 37-41). Thus, a "starspot-only" scenario can be ruled out for many CPVs (Stauffer et al., 2017; Zhan et al., 2019; Koen, 2021). Given that many CPVs cannot be explained by starspots alone, and working under the assumption that all CPVs share the same basic physical scenario, we discard the "starspot-only" model. Instead, the correct explanation probably involves spatially concentrated circumstellar material (e.g. Stauffer et al., 2017; Gunther et al., 2022). Figure 1 illustrates two proposed configurations for the extrinsic material. The first scenario invokes opaque dust "clumps" that orbit near the Keplerian corotation radius [\(R_{\rm c}\) = \((GM/\Omega^{2})^{1/3}\), where \(\Omega\) = \(2\pi/P_{\rm rot}\)] and periodically transit the star (e.g. Stauffer et al., 2017; Farihi et al., 2017; Sanderson et al., 2023). The second scenario invokes "prominences", long-lived condensations of cool, dense, marginally-ionized gas that are embedded within the hotter corona and that corotate with the star (Collier Cameron & Robinson, 1989; Jardine & Collier Cameron, 2019; Waugh & Jardine, 2022). These hypothetical prominences are analogous to quiescent prominences and filaments seen in the solar corona (see e.g. Vial & Engvold, 2015), though rather than existing at a fraction of the stellar radius as in the solar case, they would exist at distances of a few stellar radii. A final possibility is that an optically-thick ring obscures a narrow band of the stellar photosphere (Zhan et al., 2019); hot spots passing behind such a ring could produce sudden dips. We disfavor this scenario for reasons described in Appendix A. While the dust clump and gas prominence hypotheses both invoke magnetically-entrained material, the two pictures differ in the origin and the composition of the occulting material. "Dust clumps" invoke opacity from dust, which would need to be collisionally charged (Sanderson et al., 2023), and which might be sourced from a low-mass debris disk. "Gas prominences" invoke opacity from partially-ionized gas, perhaps bound-free transitions in hydrogen or a molecular opacity. The gas might be sourced from a stellar wind. Unambiguous evidence in support of either scenario has yet to be acquired.
Such evidence might include a spectroscopic detection of silicate 10 \(\mu\)m dust absorption during a dip, or perhaps detection of transient Balmer-line excesses as a function of cycle phase, similar to observations made in systems such as AB Dor (see Collier Cameron, 1999) or PTFO 8-8695 (Johns-Krull et al., 2016). In both models, the corotation radius is the location at which matter concentrates. The empirical basis for this is that the sharp CPV features are superposed over smooth, quasi-sinusoidal starspot profiles. The theoretical importance of the corotation radius has been noted in previous studies of magnetic rotators (e.g. Lamb et al., 1973; Nakajima, 1985; Konigl, 1991; Long et al., 2005). In regions where the magnetic field dominates the flow (i.e. \(B^{2}/8\pi>\rho v^{2}/2\)), matter is dragged along with the field lines. Within such regions, charged gas or dust can become trapped at corotation because, of the four relevant forces (gravity, Lorentz, inertial Coriolis, and inertial centrifugal), the Lorentz and Coriolis forces only act perpendicular to field lines, while gravity and the centrifugal force are in balance at \(R_{\rm c}\) (e.g. Townsend & Owocki, 2005, their Section 2). Another way to phrase this statement is that, in the corotating frame, the effective potential experienced by charged particles tends to have local minima at \(R_{\rm c}\); given a flow from either the star or from a tenuous accretion disk, this local potential minimum enables material to build up (Townsend & Owocki, 2005).
Figure 1: **Complex periodic variables (CPVs)**: _Top:_ Phase-folded TESS light curves for three CPVs. Each panel shows the average of the data accumulated over one month, relative to the mean stellar brightness. Gray circles are raw 2-minute data; black circles are binned to 300 points per cycle. The period in hours is printed in the bottom right corner. Left-to-right, the objects are LP 12-502 (TIC 402980664; Sector 19), TIC 94088626 (Sector 10), and TIC 425933644 (Sector 28). _Bottom:_ Cartoon configurations for magnetically-entrained corotating material. The dust clump scenario (left) and gas prominence scenario (right) propose different opacity sources, and different occultation geometries.
While theoretical heritage for understanding magnetic rotators exists, CPVs have remained mysterious because they have been both hard to discover and hard to characterize. They have been hard to discover because they are rare: CPVs comprise \(\approx\)1% of the youngest \(\approx\)1% of M dwarfs (Rebull et al., 2018). Out of the millions of stars monitored by K2 and TESS, about 70 CPVs have been reported to date (Rebull et al., 2016; Stauffer et al., 2017, 2018; Zhan et al., 2019; Bouma et al., 2020; Stauffer et al., 2021; Gunther et al., 2022; Popinchalk et al., 2023). They have been hard to characterize because many of the known CPVs are faint; the initial K2 discoveries (Rebull et al., 2016; Stauffer et al., 2017) were M2-M6 dwarfs at distances \(\gtrsim\)100 pc, with optical brightnesses of \(V\)\(\approx\)15.5 to \(V\)\(>\)20. At such magnitudes, high-resolution time-series spectroscopy is out of reach with current facilities, despite the potential utility of such observations. In this work, we aim to find bright and nearby CPVs, since these objects will be the most amenable to detailed photometric and spectroscopic analyses. To do this, we use 2-minute cadence data acquired by TESS between 2018 July and 2022 September (Sectors 1-55; Cycles 1-4).
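As a worked example of the corotation radius \(R_{\rm c}=(GM/\Omega^{2})^{1/3}\) quoted above, the short astropy snippet below evaluates \(R_{\rm c}\) for an illustrative CPV-like star; the 0.2 \(M_{\odot}\) mass and 18-hour period are assumed values for illustration, not parameters of any particular object in this paper.

```python
import numpy as np
from astropy import units as u
from astropy.constants import G, M_sun

def corotation_radius(m_star, p_rot):
    """R_c = (G M / Omega^2)^(1/3), with Omega = 2 pi / P_rot."""
    omega = 2.0 * np.pi / p_rot.to(u.s)
    return ((G * m_star / omega**2) ** (1.0 / 3.0)).to(u.R_sun)

# Assumed illustrative values: a 0.2 M_sun star rotating in 18 hours
# gives R_c of roughly 2 R_sun, i.e. a few stellar radii.
print(corotation_radius(0.2 * M_sun, 18.0 * u.hr))
```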
We present our search methods in Section 2, and the resulting CPV catalog in Section 3. The observed evolution of many CPVs over a two-year baseline is described in Section 4, including a deep-dive into the behavior of an especially interesting object, LP 12-502. We discuss a few implications in Section 5, and conclude in Section 6. Some comments on nomenclature are needed. What we are calling "complex periodic variables" (Koen, 2023) have also been called "complex rotators" (Zhan et al., 2019; Gunther et al., 2022; Popinchalk et al., 2023), "transient flux dips", "persistent flux dips", and "scallop shells" (Stauffer et al., 2017). The CPVs should not be conflated with "dippers", which are classical T Tauri stars with infrared excesses, and which show large-amplitude variability linked to obscuring inner disk structures and accretion hot spots (Cody et al., 2014; Robinson et al., 2021). The phenomenology and stellar properties of CPVs and dippers are quite different (though see Sections 3.3 and 5.10). The defining phenomenological features of the CPVs are that their light curves are _complex_, relative to quasi-sinusoidal starspots, and the complex features are _periodic_, meaning they typically repeat for at least tens of days. While rotation likely does play a central role in explaining their physical behavior, the acronym for "complex rotator" is already used in the astrophysical literature for cosmic rays. Given these considerations, we refer to the stars as complex periodic variables (CPVs); our preferred explanation for their behavior is that transient clumps of gas or dust orbit at their corotation radii. ## 2 Methods ### Stellar selection function We searched for CPVs by analyzing the short-cadence data acquired by TESS between 2018 July 25 and 2022 September 1 (Sectors 1-55). Specifically, we used the 2-minute cadence light curves produced by the Science Processing and Operations Center at the NASA Ames Research Center (Jenkins et al., 2016). While the TESS data products from these sectors also included full frame images with cadences of 10 and 30 minutes for a larger number of sources, we restricted our attention to the 2-minute data for the sake of uniformity and simplicity in data handling. In exchange, we sacrificed both completeness and homogeneity of the selection function. While TESS cumulatively observed \(\approx\)90% of the sky for at least one lunar month between 2018 July and 2022 September, the 2-minute cadence data were collected for only a subset of observable stars that were preferentially nearby and bright (see Fausnaugh et al., 2021). The total 2-minute data volume from Sectors 1-55 included 1,087,475 short-cadence light curves, which were available for 428,121 unique stars. To simplify our search, we defined our target sample as stars with 2-minute cadence TESS light curves satisfying the following four conditions: \(T<16\) (amenable with TESS), \(G_{\rm BP}-G_{\rm RP}>1.5\) (red stars only), \(M_{\rm G}>4\) (dwarf stars only), and \(d<150\,{\rm pc}\) (close stars only). Here, \(M_{\rm G}=G+5\log(\varpi_{\rm as})+5\) is the Gaia \(G\)-band absolute magnitude, \(\varpi_{\rm as}\) is the parallax in units of arcseconds, and \(d\) is a geometric distance defined by inverting the parallax and ignoring any zero-point correction. We performed this selection by cross-matching TIC 8.2 (Stassun et al., 2019; Paegert et al., 2021) against the Gaia DR2 point-source catalog (Gaia Collaboration et al., 2018).
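Expressed as code, the four cuts above amount to the following filter; the function and argument names are our own for illustration, and the distance deliberately ignores any parallax zero-point correction, as stated in the text.

```python
import numpy as np

def in_target_sample(T_mag, bp_rp, G_mag, parallax_mas):
    """Apply the four target-selection cuts of Section 2.1.

    parallax_mas: Gaia DR2 parallax in milliarcseconds. The distance
    is the simple parallax inverse, with no zero-point correction.
    """
    plx_as = parallax_mas / 1000.0          # parallax in arcseconds
    d_pc = 1.0 / plx_as                     # geometric distance
    M_G = G_mag + 5.0 * np.log10(plx_as) + 5.0
    return (T_mag < 16) & (bp_rp > 1.5) & (M_G > 4) & (d_pc < 150)
```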
We opted for Gaia DR2 rather than DR3 because the base catalog for TIC8 was Gaia DR2, which facilitated a one-to-one crossmatch using the Gaia source identifiers. The target sample ultimately included 65,760 M dwarfs and late-K dwarfs, down to \(T\)\(<\)16 and out to \(d\)\(<\)150 pc. For stars with multiple sectors of TESS data available, we searched for CPV signals independently. In total, our 65,760 star target list included 180,017 month-long light curves. We assessed the completeness of our selection function by comparing the number of stars with TESS Sector 1-55 short-cadence data against the number of Gaia DR2 point sources. We required all stars to meet conditions 1-4. The results are shown in Figure 2. TESS 2-minute data exist for \(\approx\)50% of \(T\)\(<\)16 M and late-K dwarfs at \(\approx\)50 pc. Within 20 pc, \(\gtrsim\)80% of the \(T\)\(<\)16 M and late-K dwarfs have at least one sector of short-cadence data. Beyond 100 pc, \(\lesssim\)10% of such stars have any short-cadence data available. This can be translated into our sensitivity for the lowest mass stars by considering that the spectral type of a \(T\)=16 star at \(d\)=50 pc is \(\approx\)M5.5V, corresponding to a main-sequence mass of \(\approx\)0.12 \(M_{\odot}\). ### CPV discovery Prior to this study, most CPVs had been found by visually examining all the light curves of stars in young clusters (Rebull et al., 2016; Stauffer et al., 2017; Popinchalk et al., 2023), or by flagging light curves with short periods and strong Fourier harmonics for visual inspection (Zhan et al., 2019). In this work, we implemented a new search approach based on counting the number of sharp local minima in phase-folded light curves, while also using the Fourier approach. We applied these two search techniques independently to our 65,760 targets. #### 2.2.1 Counting dips The dip counting technique aims to count sharp local minima in phase-folded light curves. The most remarkable CPVs often show three or more dips per cycle, which distinguishes them from other types of variables such as synchronized and spotted binaries (RS CVn stars). For our dip-counting pipeline, we began with the PDC_SAP light curves for each sector (Smith et al., 2017), removed non-zero quality flags, and normalized the light curve by dividing out its median value. We then flattened the light curve using a 5-day sliding median filter, as implemented in wotan (Hippke et al., 2019). We computed a periodogram of the resulting cleaned and flattened light curve, opting for the Stellingwerf (1978) phase dispersion minimization (PDM) algorithm implemented in astrobase (Bhatti et al., 2021) due to its shape agnosticism. If a period \(P\) below 2 days was identified, we reran the periodogram on a finer grid to improve the accuracy of the period determination. Once a star's period was identified, we binned the phased light curve to 100 points per cycle. To separate sharp local minima from smooth spot-induced variability, we then iteratively fit robust penalized splines to the wrapped phase-folded light curve, excluding points more than two standard deviations away from the local continuum (Hippke et al., 2019). The wrapping procedure is discussed below. In this fitting framework, the maximum number of equidistant spline knots per cycle is the parameter that controlled the meaning of "sharp": we allowed at most 10 such knots per cycle, though for most stars fewer knots were preferred based on cross-validation using an \(\ell^{2}\)-norm penalty.
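A hedged sketch of this iterative spline step is given below. It stands in for the penalized-spline machinery of wotan with a plain least-squares spline from SciPy, assumes the phases are already sorted and wrapped, assumes enough points fall between knots for the fit to be well-posed, and clips at two standard deviations as in the text.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_dip_continuum(phase, flux, max_knots=10, n_iter=5):
    """Iteratively fit a spline continuum to a wrapped, phase-sorted
    light curve, excluding >2-sigma outliers so that sharp dips do
    not drag the continuum down. Returns the continuum evaluated at
    every input phase."""
    keep = np.ones(phase.size, dtype=bool)
    spl = None
    for _ in range(n_iter):
        # Equidistant interior knots; fewer knots give a smoother,
        # less "sharp-aware" continuum.
        knots = np.linspace(phase[keep][0], phase[keep][-1],
                            max_knots + 2)[1:-1]
        spl = LSQUnivariateSpline(phase[keep], flux[keep], knots, k=3)
        resid = flux - spl(phase)
        keep = np.abs(resid) < 2.0 * np.std(resid[keep])
    return spl(phase)
```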
An example fit is shown in panel _(e)_ of Figure 3. We then identified local minima in the resulting residual light curve using the SciPy find_peaks utility (Virtanen et al., 2020), which is based on comparing adjacent values in an array. For a peak to be flagged as significant, we required it to have a width of at least 0.02\(P\), and a height of at least twice the noise level. The noise level was defined as the 68th percentile of the distribution of the residuals from the median value of \(\delta f_{i}\equiv f_{i}-f_{i+1}\), where \(f\) is the flux and \(i\) is an index over time. In panel _(e)_ of Figure 3, automatically-identified local minima are shown with the gray triangles. Wrapping is necessary to eliminate edge effects when fitting the light curve and when identifying local minima in the residuals. A phased light curve would usually cover phases \(\phi\in[0,1]\). We instead performed the analysis described above using a phase-folded light curve spanning \(\phi\in[-1,2]\), which was created by duplicating and concatenating the ordinary phase-folded light curve. The free parameters we adopted throughout the analysis - for instance the maximum number of spline knots per cycle, and the height and depth criteria for dips - were chosen during testing based on the desire to correctly re-identify a large fraction (\(>\)90%) of previously known CPVs, while also being able to consistently reject common false positives such as spot-induced variability and eclipsing binaries. In short, CPV candidates were identified by requiring a peak PDM period below two days and the presence of at least three sharp local minima, based on at least one sector of the TESS 2-minute data. Candidates were then inspected visually as described in Section 2.2.3. #### 2.2.2 Fourier analysis We performed an independent search using a Fourier-based approach, following Zhan et al. (2019) and Pribulla et al. (2023, their Section 1.3). Starting with the PDC_SAP light curves, we normalized each light curve, and then re-binned it into equal width 2-minute bins to account for the uneven spacing in the TESS data, as well as the data gap caused by satellite downlink during each sector.
Figure 2: Completeness of the TESS 2-minute data for late-K and early M dwarfs near the Sun, from Sectors 1-55. The orange dotted curve shows the number of stars in successive radial shells, each with a width of 10 pc. To be part of our selection function, these stars must meet the following conditions: they must be red dwarf stars (\(G_{\rm BP}-G_{\rm RP}>1.5\); \(M_{\rm G}>4\)) amenable for TESS observations (\(T<16\)). The blue solid curve shows the fraction of such stars with at least one sector of TESS 2-minute cadence data acquired between Sectors 1-55.
We then padded the data to ensure that the light curve had a length that was a power of two, as described by Zhan et al. After taking the Fourier transform of the padded light curve using numpy.fft, we searched for peaks with a significance exceeding 12-\(\sigma\) within a set of 500 frequency bins. Peaks of significance were found for \(\approx\)10% of the searched stars. For all such cases, we generated an interim "summary sheet" with information about the star, its full and folded light curves, Fourier transform, potential contaminating stars, and information about these contaminating stars.
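Returning to Section 2.2.1, the dip-counting criterion itself can be sketched as follows. The point-to-point noise estimate below is simplified relative to the exact definition in the text, and the binning to 100 points per cycle is assumed to have been done already; the function name is ours.

```python
import numpy as np
from scipy.signal import find_peaks

def count_sharp_dips(flux_binned, continuum, points_per_cycle=100):
    """Count sharp local minima in a phase-folded, binned light curve
    after subtracting the smooth spline continuum. Dips must be at
    least twice the noise level deep and at least 2% of a cycle wide,
    matching the selection criteria described in the text."""
    resid = flux_binned - continuum
    # Simplified point-to-point noise estimate.
    noise = np.percentile(np.abs(np.diff(flux_binned)), 68)
    # find_peaks locates maxima, so search the negated residuals.
    dips, _ = find_peaks(-resid,
                         height=2.0 * noise,
                         width=0.02 * points_per_cycle)
    return dips.size

# A star becomes a CPV candidate when the PDM period is below two
# days and count_sharp_dips(...) >= 3 for at least one sector.
```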
We then reviewed each summary sheet, and tentatively classified each light curve based on visual inspection of its morphology (with common categories including eclipsing binary, CPV, RS CVn, and cataclysmic variable).

#### 2.2.3 Manual vetting

We homogeneously assessed whether the objects identified using the dip-counting (Section 2.2.1) and Fourier (Section 2.2.2) approaches were consistent with expectations for CPVs by assembling the data shown in Figure 3. We labeled a star as a "good" CPV if it met all of the following criteria for at least one TESS sector:

* \(P<2\) days.
* At least three dips per cycle, or else otherwise oddly-shaped dips relative to expectations for quasi-sinusoidal starspot-induced modulation.
* Persistent dips over multiple consecutive rotation cycles.

We also noted a few stars with potentially oddly-shaped dips as "ambiguous" CPVs, and a few interesting "false positives" that are definitely not CPVs. The most common false positives for both the Fourier and dip-counting techniques were eclipsing binaries, ordinary spotted rapid rotators, and light curves that were complex due to multiple stars contributing to the photometric aperture. Our specialized dip-counting pipeline flagged 368 unique stars for visual inspection; about 20% were subsequently labeled either good or ambiguous CPVs. From the more general Fourier pipeline, \(\approx\)0.5% of stars that passed the 12-\(\sigma\) peak threshold were eventually classified as CPVs.

### Stellar properties

#### 2.3.1 Ages

While most of our target stars were field stars, color-absolute magnitude diagrams suggested that most CPVs tended to be on the pre-main-sequence (e.g. panel _(k)_ of Figure 3). We therefore estimated stellar ages by checking for probabilistic spatial and kinematic associations between the CPVs and known clusters in the solar neighborhood. For most stars in our sample, we did this using BANYAN\(\,\Sigma\) (Gagne et al., 2018). This algorithm calculates the probability that a given star belongs to either the field, or to any of 27 young clusters ("associations") within 150 pc of the Sun. This is achieved by modeling the field and cluster populations as multivariate Gaussian distributions in 3-D position and 3-D velocity space. We used the Gaia DR2 sky positions, proper motions, and distances to calculate the membership probabilities. BANYAN\(\,\Sigma\) in turn analytically marginalizes over the radial velocity dimension. The probabilities returned by this procedure are qualitatively helpful, but should be interpreted with caution because the assumption of Gaussian distributions is questionable for most groups within the solar neighborhood (see e.g. Kerr et al., 2021, Figure 10).

For a few cases where BANYAN\(\,\Sigma\) yielded ambiguous results, we consulted the meta-catalog of young, age-dated, and age-dateable stars assembled by Bouma et al. (2022), and also searched the local volume around each star for co-moving companions.2 A few important sources in the former meta-catalog included the Theia groups from Kounkel and Covey (2019) and Kounkel et al. (2020), and the SPYGLASS stars from Table 1 of Kerr et al. (2021).

Footnote 2: [https://github.com/adamkraus/Comove](https://github.com/adamkraus/Comove), git commit 278b372; see also Tofflemire et al. (2021).

Finally, to provide a base for comparison, we also ran the BANYAN\(\,\Sigma\) membership analysis on our entire 65,760 target star sample.
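For intuition, the kinematic membership calculation reduces to comparing Gaussian likelihoods. The sketch below is purely illustrative: the group means, covariances, and priors are invented, and the actual BANYAN\(\,\Sigma\) implementation additionally marginalizes analytically over missing radial velocities rather than requiring a fully specified 6-D position:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical star in Galactic XYZ (pc) + UVW (km/s) coordinates.
star_xyzuvw = np.array([48.0, -5.0, -20.0, -10.0, -16.0, -9.0])

# Invented Gaussian models: a broad "field" and one tight young group.
groups = {
    "FIELD": (np.zeros(6), np.diag([80.0, 80.0, 50.0, 25.0, 25.0, 20.0]) ** 2),
    "GROUP": (np.array([20.0, -5.0, -15.0, -10.9, -16.0, -9.0]),
              np.diag([30.0, 30.0, 20.0, 1.5, 1.5, 1.5]) ** 2),
}
priors = {"FIELD": 0.99, "GROUP": 0.01}

# Posterior membership probability via Bayes' rule.
like = {g: priors[g] * multivariate_normal(mu, cov).pdf(star_xyzuvw)
        for g, (mu, cov) in groups.items()}
total = sum(like.values())
probs = {g: v / total for g, v in like.items()}
print(probs)
```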
#### 2.3.2 Effective temperatures, radii, and masses

We determined the stellar effective temperatures and radii for the CPVs by fitting the broadband spectral energy distributions (SEDs); we then estimated the masses by interpolating against the sizes, temperatures, and ages of the PARSEC v1.2S models (Bressan et al., 2012; Chen et al., 2014).

For the SED fitting, we used astroARIADNE (Vines and Jenkins, 2022). We adopted the BT-Settl stellar atmosphere models (Allard et al., 2012) assuming the Asplund et al. (2009) solar abundances, and the Barber et al. (2006) water line lists. The broadband magnitudes we considered included \(G\), \(G_{\rm BP}\), and \(G_{\rm RP}\) from Gaia DR2, \(Vgri\) from APASS, \(JHK_{\rm S}\) from 2MASS, SDSS \(riz\), and the WISE \(W1\) and \(W2\) passbands. We omitted UV flux measurements from our SED fit to avoid any possible bias induced by chromospheric UV excess. We omitted WISE bands \(W3\) and \(W4\) due to reliability concerns. astroARIADNE compares the measured broadband fluxes against pre-computed model grids, and by default fits for six parameters: \(\{T_{\rm eff},R_{*},A_{\rm V},\log g,[{\rm Fe/H}],d\}\). The distance prior is drawn from Bailer-Jones et al. (2021). The surface gravity and metallicity are generally unconstrained. Given our selection criteria for the stars, we assumed the following priors for the temperature, stellar size, and extinction:

\[T_{\rm eff}/{\rm K} \sim \mathcal{U}(2000,8000), \tag{5}\]
\[R_{*}/R_{\odot} \sim \mathcal{U}(0.1,1.5), \tag{6}\]
\[A_{\rm V}/{\rm mag} \sim \mathcal{U}(0,0.2), \tag{7}\]

for \(\mathcal{U}\) the uniform distribution. We validated our chosen upper bound on \(A_{\rm V}\) using a 2MASS color-color diagram. Finally, using Dynesty (Speagle, 2020), we sampled the posterior probability assuming the default Gaussian likelihood, and set a stopping threshold of \(\mathrm{dlog}\,\mathcal{Z}<0.01\), where \(\mathcal{Z}\) denotes the evidence.

With the effective temperatures and stellar radii from the SED fit, we estimated the stellar masses by interpolating against the PARSEC isochrones (v1.2S; Chen et al., 2014). The need for models that incorporate some form of correction for M dwarfs is well-documented (e.g. Boyajian et al., 2012; Stassun et al., 2012; David and Hillenbrand, 2015; Feiden, 2016; Kesseli et al., 2018; Morrell and Naylor, 2019; Somers et al., 2020). Plausible explanations for the disagreement between observed and theoretical M dwarf colors and sizes include starspot coverage (e.g. Gully-Santiago et al., 2017) and incomplete line lists (e.g. Rajpurohit et al., 2013). In the PARSEC models, Chen et al. (2014) performed an empirical correction to the temperature-opacity relation drawn from the BT-Settl model atmospheres, in order to match observed masses and radii of young eclipsing binaries. This is sufficient for our goal of estimating stellar masses.

Figure 3: **Validation plots used to classify CPVs**. The complete figure set contains one image per sector for each of the 66 objects in Table 1, and is accessible both through the online journal and via [https://zenodo.org/record/8327508](https://zenodo.org/record/8327508). Panels are as follows. _a)_: Phase-folded light curve; gray points are raw 2-minute data and black points are binned to 200 points per cycle. The adopted period is given in the lower-right corner. _b)_: Phase-dispersion minimization (PDM) periodogram. Dotted lines show up to the 10\({}^{\mathrm{th}}\) harmonic and subharmonic. _c)_: DSS finder chart, with 21\({}^{\prime\prime}\) and 42\({}^{\prime\prime}\) radius circles for scale. One TESS pixel has a full side length of 21\({}^{\prime\prime}\). _d)_: Cleaned light curve, binned to 20-minute cadence, in Barycentric TESS Julian Date (BTJD). _e)_: Phase-folded light curve, binned to 100 points per cycle. The gray line denotes the spline-fit to the wrapped phase-folded light curve, and small gray triangles denote automatically identified local minima. _f)_: Phase-folded light curve at twice the peak period. _g)_: Phase-folded light curve at half the peak period. _h)_: Phase-folded time-series within the "background" aperture defined in the SPOC light curves. _i)_: Phase-folded flux-weighted centroid in the column direction. _j)_: Phase-folded flux-weighted centroid in the row direction. _k)_: Gaia DR2 color–absolute magnitude diagram. The gray background denotes stars within 100 pc from Gaia Collaboration et al. (2021). _l)_: Information from Gaia DR2, TIC8, and the automated dip-counting search pipeline. "Neighbors", abbreviated "nbhr", are listed within apparent distances of 2 TESS pixels if \(\Delta T\)\(<\)2.5. _m)_: BANYAN \(\Sigma\) v1.2 association probabilities, calculated using positions, proper motions, and the parallax.

Given our estimates of \(\{\tilde{T}_{\rm eff},\tilde{R}_{*},\tilde{t}\}\), and approximating their uncertainties as Gaussian \(\sigma_{\tilde{T}_{\rm eff}}\), \(\sigma_{\tilde{R}_{*}}\) and \(\sigma_{\tilde{t}}\), we define a distance metric \(\Delta\) to each model PARSEC grid-point \(\{T_{\rm eff},R_{*},t\}\) via

\[\Delta^{2}=\left(\frac{\tilde{T}_{\rm eff}-T_{\rm eff}}{\sigma_{\tilde{T}_{\rm eff}}}\right)^{2}+\left(\frac{\tilde{R}_{*}-R_{*}}{\sigma_{\tilde{R}_{*}}}\right)^{2}+\left(\frac{\tilde{t}-t}{\sigma_{\tilde{t}}}\right)^{2}, \tag{8}\]

where the division by the uncertainties helps to assign equal importance to each dimension. The mass reported in Table 1 is the model mass that minimizes the distance. The reported uncertainties in the masses are based on propagating the statistical uncertainties in the radii, temperatures, and ages.

#### 2.3.3 Binarity

The main types of binaries of interest in this work are those that are unresolved, because they can lead to misinterpretations of the data. For instance, unresolved binaries might produce multiple photometric signals and hinder our ability to correctly identify the star hosting the CPV signal. Unresolved binaries could also bias photometric magnitude and color measurements, which would affect our stellar parameter estimates. To attempt to identify binaries, we considered the following lines of information.

_Radial velocity scatter_--We examined diagrams of the Gaia DR3 "radial velocity error" as a function of stellar color for all 63 CPVs and candidate CPVs. Since this quantity represents the standard deviation of the non-published Gaia RV time series, outliers can suggest single-lined spectroscopic binarity (e.g. Chance et al., 2022). These plots showed two clusters of stars, at \(\lesssim\)10 km s\({}^{-1}\) and 20-25 km s\({}^{-1}\). We therefore adopted a threshold of 20 km s\({}^{-1}\) to flag possible single-lined spectroscopic binaries, which selected three stars: TIC 405910546, TIC 224283342, and TIC 280945693.

_RUWE_--We examined plots of Gaia DR3 RUWE as a function of color.3 Elevated RUWEs imply excess astrometric noise relative to a single-source model.
This can be caused by marginally resolved binaries, intrinsic photometric variability, or intrinsic astrometric motion (e.g. Wood et al., 2021). Based on this exercise, we adopted a threshold of \(\rm RUWE_{DR3}>2\) to flag sources with excess astrometric noise. This threshold was met by 16/50 high-quality CPVs and by 0/13 of the ambiguous CPVs. The choice of the threshold RUWE is somewhat subjective, since the RUWE distribution has an extended tail (e.g. Penoyre et al., 2022). If we had instead required \(\rm RUWE_{DR3}>1.4\), 21/50 high-quality CPVs and 2/13 of the ambiguous sample would have been flagged.

Footnote 3: For an explanation of the renormalized unit weight error (RUWE), see the GAIA DPAC technical note [http://www.rssd.esa.int/doc_fetch.php?id=3757412](http://www.rssd.esa.int/doc_fetch.php?id=3757412).

_Gaia DR3 non-single stars_--Gaia DR3 included a non_single_star column that flagged eclipsing, astrometric, and spectroscopic binaries. None of the stars in our CPV sample were identified as binaries in this column.

_Multiple periodic TESS signals_--During our visual analysis of the TESS light curves and PDM periodograms, we flagged sources with beating light curves, and with PDM periodograms that showed multiple periods. For such cases, we then subtracted the mean CPV signal over each sector, and repeated the phase-dispersion minimization analysis. The resulting secondary periods, \(P_{\rm sec}\), are listed in Table 1; we required these to be at least 5% different from the primary period. The majority of secondary signals showed morphologies corresponding to starspot modulation. This process yielded 22/50 high-quality CPVs with secondary periods; 3/13 of the ambiguous sample met the same criterion. Of the 16 good CPVs with \(\rm RUWE_{DR3}>2\), 15 also showed secondary periods in the TESS light curves. Considering the weaker threshold of \(\rm RUWE_{DR3}>1.4\), 18/21 such CPVs showed secondary TESS periods. The latter results strongly suggest that the secondary periods are associated with bound binary companions.

Table 1 summarizes each of the sources of binarity information into a single bitwise column. We describe detailed results concerning binarity in Section 3.4, and summarize those results in Section 5.2.

## 3 Results

### CPV catalog

Table 1 lists the 66 objects identified by our search. The 50 stars in the "good" sample demonstrated what we deemed to be the key characteristics of the CPV phenomenon in at least one TESS sector. The classification of 13 CPV candidates was ambiguous, and the 3 remaining objects were notable false positives that we discuss below. The quality column in the table divides the three classes; additional data from TESS or other instruments could help resolve the classification of the ambiguous cases. Of the 63 CPVs and candidate CPVs, 32 were found using both the dip-counting and Fourier techniques, 23 were found using only the dip-counting technique, and 8 were found using only the Fourier technique. In the following, we will focus our discussion on the good sample, irrespective of discovery method. We will often refer to stars by their TIC identifiers; these can be referenced against the figures in most digital document readers using a "find" (Ctrl+F) utility.

Figure 4 is a mosaic of phased light curves for the 50 CPVs. The objects are sorted first in order of the number of TESS 2-minute cadence sectors in which they clearly demonstrated the CPV phenomenon, and secondarily by descending brightness.
The top five objects by this metric are TIC 300651846 (12 sectors); TIC 402980664 (7 sectors); TIC 89463560 (5 sectors); TIC 363963079 (5 sectors); and TIC 294328887 (4 sectors). The brightest five CPVs span 9.3\(<\)\(T\)\(<\)11.1; the faintest five span 14.5\(<\)\(T\)\(<\)15.0. The fastest five have periods spanning 3.6 hr\(<\)\(P\)\(<\)6.2 hr, and the slowest five span 27 hr\(<\)\(P\)\(<\)38 hr.

The light curves show between two and eight local minima per cycle. Some stars show ordinary sinusoidal modulation during one portion of the phased light curve, and highly structured modulation in the remainder of the cycle (e.g. TIC 206544316, TIC 224283342, TIC 402980664). Others show structured modulation over the entire span of a cycle (e.g. TIC 2234692, TIC 425933644, TIC 142173958). Others show some mix between these two modes. A small number of objects at first glance seem reminiscent of eclipsing binaries, such as TIC 193831684, TIC 59836633, or TIC 5714469. We believe these cases are unlikely to be eclipsing binaries due to the additional coherent peaks and troughs in the light curves, which are distinct from any binary phenomena of which we are aware.

Figure 4: **CPVs found in the TESS 2-minute data.** Phased TESS light curves over one month are shown for 50 CPVs in the high quality sample. Gray are raw 2-minute data; black bins to 300 points per cycle. Objects are ordered such that sources with the most TESS data available are on top (see Section 3.1). Zero phase is chosen to correspond to minimum light. Each panel is labeled by the TIC identifier, the TESS sector number, the period in hours, and the three-bit binarity flag from Table 1, which denotes Gaia DR3 radial_velocity_error outliers (bit 1), Gaia DR3 ruwe outliers (bit 2), and stars with secondary TESS periods (bit 3).

### Ages of CPVs

Of our 63 confirmed and candidate CPVs, 61 were associated with a nearby moving group or open cluster, primarily using BANYAN \(\Sigma\) as described in Section 2.3.4 The relevant groups are listed in Table 1; their ages span \(\approx\)5-200 Myr. For comparison, BANYAN \(\Sigma\) assigned high-probability (\(>\)95%) field membership to 59,361 out of the 65,760 target stars. Most stars in our target sample are old; the CPVs returned by our blind search are young.

Footnote 4: Two of the 61 memberships were made with low confidence and are flagged in Table 1. The assignment of TIC 397791443 to IC2602 was based not on BANYAN \(\Sigma\) but instead on a literature search (e.g. Cantat-Gaudin and Anders, 2020).

The groups that contain the largest number of CPVs in our catalog are Sco-Cen, Tuc-Hor, and Columba. Six CPVs were also identified in the Argus association (Zuckerman, 2019), which serves as an indirect line of evidence supporting the reality and youth of that group. The large contribution from Sco-Cen is not surprising since Sco-Cen contains the majority of pre-main-sequence stars in the solar neighborhood, and many of its stars were selected for TESS 2-minute cadence observations by guest investigators. Given the \(\lesssim\)10% completeness of our data beyond 100 pc (Figure 2), there may be many more CPVs in Sco-Cen that remain to be discovered.

There were two stars for which neither BANYAN \(\Sigma\) nor a literature search led to a confident association with any young group. Both stars display CPV signals over multiple TESS sectors. Both are photometrically elevated relative to the main sequence, an indication of youth. Both were also noted by Kerr et al.
(2021) as being in the "diffuse" population of \(<\)50 Myr stars near the Sun.

Our search confirms that the CPV phenomenon persists for at least \(\approx\)150 Myr. Table 1 includes three \(\approx\)150 Myr CPVs in AB Dor (Bell et al., 2015), a \(\approx\)112 Myr old Pleiades CPV (Dahm, 2015), and a similarly-aged Psc-Eri member (Ratzenbock et al., 2020). To our knowledge, TIC 332517282 in AB Dor (\(t\)=149\({}^{+51}_{-19}\) Myr; Bell et al., 2015) was the previous record-holder for the oldest-known CPV (Zhan et al., 2019; Gunther et al., 2022); at least one unambiguous CPV (EPIC 211070495) and a few other candidates were also previously known in the Pleiades (Rebull et al., 2016).

The maximum age of CPVs might even exceed 200 Myr, based on the candidate membership of TIC 294328887 in the Carina Near moving group (Zuckerman et al., 2006). The estimated age of this group, \(200\pm 50\) Myr, is based on the lithium sequence of its G-dwarfs (Zuckerman et al., 2006), which shows a coeval population of stars older than the Pleiades and younger than the 400 Myr Ursa Major moving group. However, the formal BANYAN \(\Sigma\) membership probability is somewhat low (only 6%), perhaps due to the missing radial velocity. This lack of information could be rectified by acquiring even a medium-resolution spectrum. An independent assessment of the group's kinematics using Gaia data, and of its rotation sequence using TESS, could also bear on the question of whether TIC 294328887 is a member.

### Infrared excesses of CPVs

Most CPVs in our catalog did not show infrared excesses in the \(W1\)-\(W4\) bands, which is typical for this class of object (Stauffer et al., 2017). Inspecting the SEDs of our 66-star sample and the WISE images available through IRSA, we labeled two objects as having reliable infrared excesses (both \(W3\) and \(W4\) fluxes are more than 3\(\sigma\) above the photospheric prediction): TIC 193136669 (TWA 34) and TIC 57830249 (TWA 33). However, neither is considered a "good" CPV for the reasons that follow.

Both of the stars with IR excesses are in the TW Hydrae association (\(\approx\)10 Myr). They have periods of 38 hr and 44 hr, respectively. We initially labeled both as "ambiguous" CPVs because the dips in their Sector 36 light curves seemed to stochastically evolve over only one or a few cycles, which is atypical for CPVs; their periods were also long in comparison with most of the other CPVs. Inspection of additional sectors clarified that both sources are dippers, not CPVs (see the online plots in Figure 3). For TIC 57830249, the Sector 10 light curve shows completely different behavior from Sector 36, with variability amplitudes of \(\pm\)50% and no obvious periodicity. TIC 57830249 also shows continuum emission at 1.3 mm (Rodriguez et al., 2015), which suggests that cold dust grains are present. The dipper classification of TIC 193136669 is less obvious; the main indication that it is a dipper is that Sectors 62 and 63 show its dips appearing and disappearing within the span of one cycle. None of the CPVs in our sample exhibit this property. Independently, TIC 193136669 is known to have a cold disk of dust and molecular gas, based on 1.3 mm continuum emission and resolved \({}^{12}\)CO(2 \(-\) 1) emission (Rodriguez et al., 2015). It was labeled a dipper by Capistrant et al. (2022); we agree with their designation, and label it an "impostor" CPV in Table 1.
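The IR-excess criterion above amounts to a simple significance test on the measured fluxes relative to the model photosphere. A minimal sketch (the flux values below are invented for illustration):

```python
import numpy as np

# Require both WISE W3 and W4 fluxes to sit >3 sigma above the photosphere.
f_obs = np.array([4.1, 3.0])    # hypothetical measured W3, W4 fluxes (mJy)
f_err = np.array([0.3, 0.5])    # measurement uncertainties (mJy)
f_phot = np.array([1.2, 0.4])   # photospheric (SED model) prediction (mJy)

sigma = (f_obs - f_phot) / f_err
has_reliable_excess = bool(np.all(sigma > 3))
```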
Section 5.10 highlights plausible evolutionary connections between CPVs and dippers in light of these "misclassifications".

### Binarity of CPVs

#### 3.4.1 Binary statistics

A significant fraction of the CPVs show indications of unresolved binarity. Excess noise above the Gaia single-source astrometric model is common (16/50 high-quality CPVs have RUWE\({}_{\rm DR3}\)\(>\)2), as is the presence of multiple periods in the TESS light curves (22/50). Elevated astrometric noise almost always implies multiple detectable TESS periods (15/16 high-quality CPVs). The latter observation corroborates the claim that most sources with RUWE\({}_{\rm DR3}\)\(>\)2 are binaries with projected apparent separations below 1'', and projected physical separations \(\lesssim\)50 AU. These observations are also in agreement with previous analyses of multi-periodic low-mass objects discovered by K2, which found that such systems are almost always binaries (Tokovinin & Briceno, 2018; Stauffer et al., 2018).

#### 3.4.2 Do K dwarf CPVs exist?

To date, the only stars reported to show the CPV phenomenon are M dwarfs, with typical stellar masses \(\lesssim\)0.3 \(M_{\odot}\) (Stauffer et al., 2017; Gunther et al., 2022). However the two most massive CPVs in our sample, TIC 405754448 and TIC 405910546, were assigned masses of \(\approx\)0.82 \(M_{\odot}\) and \(\approx\)0.60 \(M_{\odot}\) respectively. The next-highest mass in our sample belongs to TIC 59836633 (\(\approx\)0.45 \(M_{\odot}\)), with all remaining CPVs having masses \(\lesssim\)0.40 \(M_{\odot}\). The locations of TIC 405754448 and TIC 405910546 in color-absolute magnitude diagrams, combined with their probable membership in Lower Centaurus Crux, support the conclusion that these stars have relatively high masses.

However in detail, both objects are subject to ambiguities in interpretation. The TIC 405910546 light curve has a unique shape, suggestive of an eclipsing binary. Independently, TIC 405910546 was one of only three CPVs flagged with a Gaia DR3 radial velocity scatter exceeding 20 km s\({}^{-1}\). Combined, these factors suggest that TIC 405910546 could be a pre-main-sequence eclipsing binary; it should be studied further to clarify this classification.

Figure 5: **Properties of CPVs identified by our search**. CPVs are mostly pre-main-sequence M dwarfs, younger than \(\approx\)150 Myr, with rotation periods faster than \(\approx\)1 day. The 50 bona fide CPVs in Table 1 are the dark blue dots; 13 ambiguous CPV candidates are light blue dots. Unresolved binaries (red rings) are objects for which the Gaia DR3 radial velocity scatter exceeded 20 km s\({}^{-1}\), or for which Gaia RUWE\({}_{\rm DR3}\)\(>\)2 and multiple photometric signals were present in the TESS light curve. The top panels show the 65,760 target stars with 2-minute cadence TESS data as the shaded gray background; darker regions correspond to a larger relative number of searched stars. The lower-left panel compares the rotation–color distribution of CPVs against the rotation periods of K and M dwarfs in the Pleiades from Rebull et al. (2016). The lower-middle panel plots the derived corotation radii \(R_{\rm c}=(GM/\Omega^{2})^{1/3}\) in units of stellar radii against the measured CPV periods, in units of hours. Ages in the final panel are known from cluster membership.

For the other object, TIC 405754448, the evidence for binarity is stronger.
The RUWE\({}_{\rm DR3}\) statistic is 6.8, and the raw light curves in Sectors 11, 37, and 38 show both the CPV signal with period 12.9 hr and amplitude \(\approx\)1% and an additional sinusoidal signal with a period \(\approx\)6.5 days and amplitude \(\approx\)0.3%, likely from a second star. If TIC 405754448 is a K+M binary, then the flux ratio between the primary and secondary would be expected to be \(\approx\)10:1. Thus, if the K star were the source of the CPV signal, its intrinsic variability amplitude would be \(\approx\)1%, while if the M star were responsible its intrinsic variability amplitude would be \(\approx\)10%. In short, these two objects suggest that the CPV phenomenon may extend up in mass to pre-main-sequence K dwarfs, but more data are needed to substantiate this claim.

#### 3.4.3 An astrophysical CPV false positive: TIC 435903839

We originally classified TIC 435903839, with RUWE\({}_{\rm DR3}\)=17.7, as an "ambiguous" CPV with a 10.8 hr period, because this period minimized the dispersion in the phase-folded light curve. More careful inspection revealed an impostor: this source is a photometric blend of two ordinary rotating stars with \(P_{0}\)=3.60 hr and \(P_{1}\)=5.41 hr, giving a beat period (\(P_{0}^{-1}-P_{1}^{-1}\))\({}^{-1}\) of 10.8 hr. This is a novel false positive scenario for CPVs: two rapid rotators near the 3:2 period commensurability. The beat between the two rotation signals produces the apparent CPV signal. Such false positives can be excluded through careful accounting of all peaks in a periodogram. For instance, TIC 435903839 shows a peak at 16.27 hr, which is not an integer multiple of the dispersion-minimizing 10.82 hr period.

#### 3.4.4 Multiple CPVs in the same system: TIC 425937691 and TIC 142173958

TIC 142173958 and TIC 425937691 both show evidence for two separate CPV signals in their TESS light curves. For TIC 142173958, the signals have periods of 11.76 hr and 12.84 hr. For TIC 425937691, the two periods are 4.82 hr and 3.22 hr, near the 3:2 period commensurability. Given that both sources have two photometric signals and elevated RUWEs, each source is probably an unresolved binary consisting of two CPVs. To our knowledge, these are the third and fourth such systems known: EPIC 204060981 has two CPVs with periods of 9.59 hr and 9.12 hr (Stauffer et al., 2018), and TIC 242407571 has two CPVs with periods of 11.33 hr and 13.63 hr, near the 6:5 period commensurability (Stauffer et al., 2021).

## 4 Evolution of CPV Behavior

### Evolution over two-year baseline

Figure 6 shows "before" and "after" views of 27 CPVs for which TESS 2-minute cadence observations were available at least two years apart. Such a baseline was available for 32 of the 50 confirmed CPVs in our catalog; for plotting purposes we show the brightest 27. We have defined \(\phi=0\) for each sector to be the time of minimum light observed in that sector, rather than using a consistent phase definition across multiple sectors. This is because for most of the sources we do not know the period at the precision necessary to be able to accurately propagate an ephemeris over two years. The achievable period precision, \(\sigma_{P}\), can be estimated as

\[\sigma_{P}=\frac{\sigma_{\phi}P}{N_{\rm baseline}}, \tag{9}\]

for \(N_{\rm baseline}\) the number of cycles in the observed baseline and \(\sigma_{\phi}\) the phase precision with which any one feature (e.g. a dip, or the overall shape of the sinusoidal envelope) can be tracked.
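A quick numerical check of Eq. (9), with representative values assumed (they are consistent with the population-level numbers quoted in the next paragraph):

```python
# Ephemeris precision from Eq. (9) for a representative CPV.
P_hr = 10.0                      # rotation/dip period, hours
sigma_phi = 0.02                 # phase precision of a trackable feature
baseline_d = 20.0                # usable baseline of one TESS sector, days

n_cycles = baseline_d * 24 / P_hr                 # ~48 cycles per sector
sigma_P_min = sigma_phi * P_hr * 60 / n_cycles    # ~0.25 minutes
drift_hr = 1000 * sigma_P_min / 60                # ~4 hr after 1,000 cycles
print(f"sigma_P = {sigma_P_min:.2f} min; 1000-cycle drift = {drift_hr:.1f} hr")
```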
Assuming \(\sigma_{\phi}\)\(\approx\)0.02 and a 20-day baseline over a single TESS sector yields \(\sigma_{P}\)\(\approx\)0.25\({}^{+0.38}_{-0.14}\) minutes for the population shown in Figure 6; propagated forward 1,000 cycles yields a typical ephemeris uncertainty range of 2 to 11 hours. Measuring the period independently for each sector did not reveal evidence for significant (\(>\)3\(\sigma\)) changes in period, implying a period stability of \(\lesssim\)0.1% over two years.

A few objects in Figure 6 show the CPV phenomenon in one sector, and only marginal signs or no sign of CPV behavior in the other sector. In our subjective assessment, cases for which at least one sector would be flagged as "ambiguous" include TIC 368129164 (Sector 23 might be labeled an EB), TIC 177309964 (Sector 38 would be simply a rotating star), TIC 404144841 (Sector 38 looks like a rotating star), TIC 201898222 (Sector 3 looks like a rotating star), TIC 144486786 (Sector 32 might be an RS CVn), and TIC 38820496 (Sector 28 might be an RS CVn). TIC 193831684, assessed on a single-sector basis, would probably be labeled an eclipsing binary--in fact, Justesen & Albrecht (2021) already gave this source such a label. However, based on the shape evolution between Sectors 13 and 39, it is a CPV. Based on the fraction of sources that "turned off", the observed shape evolution implies that CPVs have an on-off duty cycle of \(\approx\)75%. Correcting for the duty cycle might be important in population-level estimates of the intrinsic frequency of the CPV phenomenon (e.g. Gunther et al., 2022).

### Evolution over consecutive sectors, & LP 12-502

A few of our complex periodic variables were near the TESS continuous viewing zones (Figure 5, top right). Out of this already small sample, LP 12-502 (TIC 402980664; \(d\)=21 pc, \(J\)=9.4, \(T\)=11.1) stood out due to the quality and content of its data. We discuss another interesting source, TIC 300651846, in Appendix B. In this section, we describe the LP 12-502 observations and the possible implications.

#### 4.2.1 LP 12-502 observations

Whenever LP 12-502 was located within a TESS sector, it was observed at 2-minute cadence. Figure 7 shows all the available data, from Sectors 18, 19, 25, 26, 53, 58, and 59. Vertical offsets were applied to separate the data from different spacecraft orbit numbers; there are always two orbits per sector. We binned the light curve to 15-minute intervals to facilitate visual inspection. Points more than 2.5\(\sigma\) above the median are drawn in gray, to prevent outliers from seizing attention. Data gaps are not connected by lines (a common source of confusion in light curve visualization). Figure 8 shows the same data after phase-folding each TESS spacecraft orbit, assuming \(P\)=18.5611 hr and a fixed reference epoch of BTJD=1791.5367.

Figure 6: **Evolution of CPV light curves over two years.** Out of the 50 CPVs in Figure 4, 32 had 2-minute cadence TESS data available for a baseline of at least two years; the 27 brightest are shown here due to space constraints. Each panel shows one sector of TESS data, and is phased to its deepest minimum in flux. Each panel's title shows the TIC identifier and period in hours. Text insets show the TESS sector numbers, which generally span two years, or at least 1,000 cycles. The vertical scale is fixed across sectors to clarify shape changes. Gray circles are raw 2-minute data; colored circles bin to 300 points per cycle.

Figure 7: **LP 12-502 (TIC 402980664) light curve**, where each time segment represents one TESS orbit. Data were acquired in Sectors 18-19, 25-26, 53, and 58-59. Flares are drawn in gray. The light curve is binned to 15-minute intervals so that there are 96 points per day, and each point is connected by a line. Data gaps have nothing plotted. The red vertical lines highlight apparently instantaneous state changes in the shape of the dip pattern.
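The fixed-period, fixed-epoch fold used for Figure 8 reduces to a few lines. A minimal sketch, using the period and epoch quoted above:

```python
import numpy as np

def phase_fold(time_btjd, period_hr=18.5611, epoch_btjd=1791.5367):
    """Fold a light curve on a fixed period and reference epoch (cf. Figure 8)."""
    period_d = period_hr / 24.0
    phase = ((time_btjd - epoch_btjd) / period_d) % 1.0
    phase[phase > 0.5] -= 1.0   # center phases on [-0.5, 0.5)
    cycle = np.floor((time_btjd - epoch_btjd) / period_d).astype(int)
    return phase, cycle
```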
Finally, Figure 9 shows "river plots" of the same data, split into similar intervals: the Sector 18-19 data, 25-26 data, 53 data, and 58-59 data. The river plots are subject to one additional processing step: we fitted and subtracted a maximum-likelihood two-harmonic sinusoid independently from the Sector 18-19 data, 25-26 data, and 53, 58, and 59 data in order to accentuate changes in the dip timing and structure.

The average period, determined by measuring the PDM peak period over each sector independently, was \(\langle P\rangle\) = 18.5560 hr. The range between the maximum and minimum sector-specific periods was measured to be about one minute. However, a period shift of \(\pm\)1 minute leads to large phase drifts over the entire timespan of observations. One minute is \(\approx\)1/1000\({}^{\rm th}\) of a period, and we have observed 1500 cycles. By folding with a fine grid of trial periods, we found that the choice \(P\) = 18.5611 \(\pm\) 0.0001 hr causes more of the features in the LP 12-502 light curve to maintain constant phases over the entire dataset.

We now attempt to describe the complex morphology of the light curve and its evolution. For the first 64 cycles, the star shows four obvious local minima. We dub these dips \(\{1,2,3,4\}\) at phases \(\{-0.28,-0.08,0,0.25\}\), respectively. Dips 2 and 3 are part of the same "global" minimum, which otherwise resembles a long eclipse. Over cycles 0-64, the depths of dips 1 and 3 remain roughly fixed. Dip 4 decreases in depth by about 2%, and dip 2 increases in depth by about the same amount (see Figure 8). A subtle fifth dip may also be present at phase +0.08, at the end of the global minimum that includes dips 2 and 3.

Figure 8: **Evolution of LP 12-502** (\(P\)=18.5611 h) at fixed period and epoch over three years. Each panel shows one (averaged) TESS orbit; small text denotes relative cycle number. There are 200 binned black points per cycle. The TESS pointing law dictates the large time gaps between cycles 64-248, 315-1233, and 1264-1410; larger gaps tend to yield larger shape changes. The dips usually evolve over tens to hundreds of cycles. However cycles 1233-1264 show a dip that switched from a depth and duration of 3% and 3 hr to 0.3% and 1 hr over less than one cycle (cf. Figure 7).

There is then a 6-month (184-cycle) gap to cycles 248-315, which show two highly structured dip complexes, plus a small leading dip. The leading dip has the same phase (relative to minimum light) as in cycles 0-64, and therefore seems likely to be due to the same structure. Along a similar line of logic, it seems plausible that the first "dip complex" during cycles 248-264 represents an evolution and reduction in amplitude of dips 2 and 3 that were seen during cycles 0-64. During cycles 266-310, an additional local minimum develops between the two complexes; this feature is best visualized on the river plots (Figure 9), where it is seen to have a shorter period than the other dips (as described below). The second dip complex during cycles 248-315 shows the most substructure. During e.g. cycles 283-298, this single complex shows six local minima.
The first and deepest dip is sharp: it shows a flux excursion of 3.5% over about 22 minutes (0.02 \(P\)), which is the steepest slope exhibited anywhere in the LP 12-502 dataset. After the sharp dip, there is a roughly exponential return to the baseline flux spanning about a quarter of a period, punctuated by coherent local minima and maxima that the river plot (Figure 9) reveals to have slightly longer periods than the sharp dip. The sharp leading dip remains roughly constant in amplitude until a sudden "state change" at BTJD 2030.7 (cycle 309) that occurred at the same time as a flare, and left the trailing dips seemingly unaffected. This apparent state change, and two others, are marked with red lines in Figure 7.

Figure 9: **River plots of the LP 12-502 light curve**, showing (clockwise from top-left) Sectors 18-19, 25-26, 53, and 58-59. A two-harmonic sinusoid has been subtracted to highlight the sharp dips. A fixed period and phase are adopted for all sectors; the dips across all observations are bounded by \(\phi\in[-0.35,0.35]\). In Sectors 25-26 (cycles 248-315), periods are visible at the fundamental period of 18.5611 hr, as well as at faster (\(\phi\)\(\approx\)0-0.07) and slower (\(\phi\)\(\approx\)0.25-0.27) relative periods based on the presence of blue dips with distinct slopes. Multiple simultaneous periods are also visible in Sector 53 (cycles 1234-1263) and Sectors 58-59 (cycles 1411-1479). White chunks denote missing data. The state changes noted with red markers in Figure 7 occur in cycles 261, 309, and 1241.

The behavior during Sectors 53, 58, and 59 (cycles 1233-1481) is comparatively tame; the light curve shows only four to six dips per cycle. Some dips remain stable in depth and duration over this five-month interval. Other dips grow, like the one at \(\phi=+0.06\) between cycles 1458 and 1481. Other dips, such as the one at \(\phi=+0.12\) in cycles 1233-1264, disappear entirely. The most dramatic state change occurs during cycle 1241, when a large dip switches from a depth of 3% and a duration of 3 hours to a depth of 0.3% and a duration of 1 hour.

#### 4.2.2 Lessons from LP 12-502

State-changes reveal dip independence--The state-changes seen in cycles 261, 309, and 1241 confirm that dips can disappear in less than one cycle. While such behavior was also noted by Stauffer et al. (2017), the data presented here show further that the dips can be _independent_ and _additive_. For example, throughout cycles 1233-1264, there are three sharp dips between phases of 0 and 0.3 with different amplitudes but similar slopes. During the transition, the leading dip nearly disappeared while the other two dips hardly changed; compare the centermost two panels of Figure 8. Evidently, the material or process responsible for one dip can vary independently of the materials or processes responsible for other dips. The state changes during cycles 261 and 309 support the same conclusion, while also hinting that the _leading_ dip of a complex is most prone to disappearing, leaving the trailing dips unchanged in its wake.

Slow growth; rapid death--LP 12-502 shows at least three instances in which dips switch off over less than one cycle; we did not see any such instances of dips switching on. Dip growth seems to happen more slowly. For instance, the dip at phase 0-0.1 between cycles 258-290 begins to become detectable during cycle 258, and grows in depth by about 2% over the next eight cycles. The evolution of this particular dip is most clear in the river plots.
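River plots like those in Figure 9 follow from the same fold: subtract a least-squares two-harmonic sinusoid, then bin the residuals onto a cycle-by-phase grid. A minimal sketch, reusing the phase_fold helper above (the binning choices are illustrative):

```python
import numpy as np

def remove_two_harmonics(time_d, flux, period_d):
    """Least-squares removal of a two-harmonic sinusoid at the fold period."""
    w = 2 * np.pi * time_d / period_d
    A = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w),
                         np.sin(2 * w), np.cos(2 * w)])
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return flux - A @ coef

def river_image(phase, cycle, resid, nbins=100):
    """Mean residual flux on a (cycle, phase) grid; NaN marks missing data."""
    cycles = np.arange(cycle.min(), cycle.max() + 1)
    img = np.full((cycles.size, nbins), np.nan)
    cols = np.clip(((phase + 0.5) * nbins).astype(int), 0, nbins - 1)
    for i, c in enumerate(cycles):
        for b in np.unique(cols[cycle == c]):
            img[i, b] = resid[(cycle == c) & (cols == b)].mean()
    return img  # display with plt.imshow(img, aspect="auto", origin="lower")
```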
The evolution of the dip group at phases 0.1-0.3 during cycles 1410-1481 is another example of this slow mode of dip growth.

Dip durations--The shortest dip duration for any of the individual LP 12-502 dips seems to be \(\approx\)0.06 \(P\approx 1.08\) hr. This is very similar to the characteristic timescale of a transiting small body at the corotation radius,

\[T_{\rm dur}\equiv R_{*}P_{\rm rot}/(\pi a)=1.02\pm 0.10\,{\rm hr}, \tag{10}\]

where we have inserted the stellar radius and mass derived in Section 2.3. Thus, the shortest-duration dips are likely produced by transits of bodies or distributions of material that are smaller than the star. The corotation radius corresponds to \(a/R_{*}\approx 5.8\); the transit of a body at the corotation radius is therefore shorter than the time for a photospheric feature to be carried across the visible hemisphere by rotation (\(P/2\)) by a factor of \(\pi a/(2R_{*})\approx 9\). On the other hand, some dip durations are sufficiently long that an explanation involving transits would require structures that are larger than the star along the direction of orbital motion.

Dip periods--Most of the LP 12-502 dips repeat with a period of \(P=18.5611\pm 0.0001\) hr. However the river plots (Figure 9) reveal that a few dips have detectably distinct periods. For instance, in Sectors 25-26, the dip that develops around cycle 262 has a period shorter than the mean period by \(\approx\)0.1%, and some of the trailing local minima in the main dip complex have periods slower than the mean period by \(\approx\)0.04%. In addition to the fundamental period, we were able to identify at least four distinct periods shown by specific dips over the full Sectors 18-59 dataset: 18.5683, 18.5672, 18.5473, and 18.5145 hr, with a measurement uncertainty of \(\approx\)0.0002 hr. Possibly, the different periods belong to clumps of dust or prominences of gas at slightly different orbital distances surrounding the corotation radius.

## 5 Discussion

### Typical and extreme CPVs

Referring back to Figure 5, typical CPV masses span 0.1-0.4 \(M_{\odot}\), typical ages span 2-150 Myr, and relative to the Pleiades, the CPVs are among the more rapidly rotating half of M dwarfs. The CPV mass and age range includes both fully convective stars and stars with a combination of radiative cores and convective envelopes; the dividing line for these ages is at around \(M_{*}=0.25\) \(M_{\odot}\) (Baraffe & Chabrier, 2018). We found no obvious differences in light curve morphology for CPVs above and below this fully-convective pre-main-sequence boundary.

The closest CPV in our catalog is DG CVn (TIC 368129164), a member of AB Dor at \(d\)=18 pc. The three brightest CPVs are DG CVn (\(T\)=9.3), TIC 405754448 (\(T\)=9.6), and TIC 167664935 (\(T\)=10.3). The shortest period, 3.64 hr, belongs to TIC 201789285. The longest period, 37.9 hr, belongs to TIC 405910546. Based on the Gaia DR3 RV scatter, the latter source may turn out to be an eclipsing binary; if so, the longest-period CPV in our catalog would be TIC 193831684 (31.0 hr). By definition, we required the periods to be below 48 hr. The lowest mass (\(\approx\)0.12 \(M_{\odot}\)) belongs to TIC 267953787. The catalog contains a few other stars with similar mass. We cannot rule out the possibility that CPVs exist with even lower masses, given the small number of such low-mass stars in our target sample.
Perhaps even brown dwarfs can be CPVs, although it might be difficult to distinguish the type of variability we associate with CPVs from the usual variability of brown dwarfs caused by clouds and latitudinal bands (e.g. Apai et al., 2021; Vos et al., 2022).

### Is binarity important for CPVs?

For CPVs, binarity seems to provide either nuisances or curiosities. The nuisances include astrophysical false positives with two beating rapidly rotating stars, as well as uncertainty about which star produces the CPV signal in binary systems. Our two candidate K dwarf CPVs suffer from this latter concern (see Section 3.4.2). Curiosities include the four binary systems that are now known to each host two separate CPVs (see Section 3.4.4). CPVs are sufficiently rare that such systems may have physical import. Recent work has shown that the orbits of binaries with separations \(\lesssim\)700 AU tend to be aligned with their planetary systems (e.g. Christian et al., 2022). If we assume that observing CPV variability requires high line-of-sight inclinations, and that the inclinations in binaries are correlated, then we would expect the detection of one CPV in a binary system to raise the probability that the other star is a CPV. The limitations of the current catalog prevent further exploration of this issue, but it might be interesting for future study.

### Transience of CPV dips

While CPV periods appear to remain fixed over thousands of cycles, the light curve shapes evolve over typical timescales of 10 to 1,000 cycles (e.g. Figures 6 and 9). Although we refer to them as "periodic", the CPVs are therefore actually quasiperiodic, with coherence timescales of \(\approx\)100 cycles. This marks a qualitative departure from the "persistent" _vs._ "transient" flux dip distinction previously described by Stauffer et al. (2017), which was based on \(\lesssim\)100 cycle K2 baselines. The observation that CPVs have a population-averaged on-off duty cycle of \(\approx\)75% (Figure 6) is also new. Appendix B for instance shows \(\approx\)1,000 cycles of a source, TIC 300651846, with between zero and five sharp local minima per cycle. During the "zero" epochs (cycles \(\approx\)503-542), the source would likely be labeled an ordinary rotating star.

### Special phases of CPV dips

An independent peculiarity of CPV evolution is that the dips do not explore all phase angles with equal weight. LP 12-502, and other CPVs, exhibit preferred phases lasting for at least two years. For LP 12-502, all of the dips happen over phases corresponding to only two thirds of the period (Figures 8 and 9). The remaining third seems to be "off limits" for dips over the timespan of observations. This could be evidence that the stellar magnetic field is not azimuthally symmetric. Alternatively, the source of the material (e.g. a planetesimal swarm) might be distributed over an arc rather than occurring randomly around the entire orbit.

### Dip asymmetries?

The asymmetry of a dip around the time of minimum light might be caused by the variation in optical depth of the occulting material as a function of orbital phase angle. Sharp leading edges with trailing exponential egresses, for instance, have been previously seen for transiting exocomets and disintegrating rocky bodies (e.g. Rappaport et al., 2012; Brogi et al., 2012; Vanderburg et al., 2015; Zieba et al., 2019). Examining Figure 4, it is clear that CPV dips can be asymmetric, but it is not obvious whether there is a preference for sharper ingresses or sharper egresses.
In some cases (e.g. TIC 425933644), the flux variations do not resemble isolated dips, making the meaning of "ingress" and "egress" unclear. In other cases, such as Sector 36 of TIC 89463560, there is a sharp drop with an exponential return to the baseline flux, resembling the signatures of exocomets (e.g. Rappaport et al., 2018; Zieba et al., 2019), and the outflowing exospheres of some transiting planets (e.g. McCann et al., 2019; MacLeod & Oklopcic, 2022).

### What causes the CPV phenomenon: dust vs. gas

Both the dust clump and the gas prominence scenarios (Figure 1 and Section 1) invoke clumps of material at the corotation radius; one property that distinguishes the two ideas is the composition of the material.

#### 5.6.1 What is a prominence?

The prominence idea is based on a loose analogy with quiescent prominences/filaments in the solar corona that last as long as a few weeks (see Vial & Engvold, 2015). In the context of the Sun, a prominence is a clump of cold, partially ionized hydrogen viewed in emission against the dark backdrop of space. A filament is the same clump of plasma, but viewed in absorption against the solar disk. In an extrasolar context, spectroscopic detections of transient Balmer- and resonance-line absorption seen for stars such as AB Dor and Speedy Mic (e.g. Collier Cameron & Robinson, 1989; Jeffries, 1993; Dunstone et al., 2006; Leitzinger et al., 2016) have been interpreted as prominences that scatter a star's chromospheric emission (see Collier Cameron & Robinson, 1989). The short-term mechanical stability of such gas configurations is theoretically plausible for rapid rotators (Ferreira, 2000; Waugh & Jardine, 2022). To our best knowledge, this class of spectroscopic observation also has no viable alternative explanations.

We performed a simple visual examination of the TESS light curves for five prominence-hosting systems studied by Jardine & Collier Cameron (2019)--AB Dor, Speedy Mic, LQ Lup, HK Aqr, and V374 Peg--and detected no CPV behavior. While individual prominences may only last one to tens of rotation cycles, the prominence system itself is thought to always be "on", due to the repeatable detectability of spectroscopic transients (e.g. Collier Cameron et al., 1990, and references therein). Assuming that spectroscopically observable prominence systems indeed do not turn off, this would imply that they are not always accompanied by photometric CPV-like dips: a link between the spectroscopic prominences that may exist around rapidly rotating low-mass stars and the CPV phenomenon has yet to be made.

#### 5.6.2 What is the microphysical source of opacity?

CPVs show broadband flux variations that can be 1-2\(\times\) deeper in the blue than in the red (Onitsuka et al., 2017; Bouma et al., 2020; Gunther et al., 2022; Koen, 2023). Dust can naturally explain this chromaticity, since it has a larger absorption cross-section in the blue than the red (e.g. Cardelli et al., 1989). Gas might also explain the observed chromaticities (Gray, 1992). While bound-bound absorption can be excluded, since it provides opacity only at narrow resonant lines, the hydrogen opacity due to bound-free absorption is "jagged" (see Gray, 1992, Figure 8.5 and Eq. 8.8), such that at temperatures of \(\approx\)3,000 K to \(\approx\)10,000 K the opacity can be larger at blue wavelengths than at red wavelengths.
Bound-free absorption of H\({}^{-}\) is often important at such temperatures, but this opacity source is stronger in the red than the blue, the opposite of what is required to produce deeper dips in the blue than in the red. Likewise, Thomson scattering is too gray to be the dominant opacity source. From hydrogen alone, bound-free absorption therefore seems like the most plausible opacity source. However it remains to be demonstrated whether a sufficient population of excited states could be maintained, particularly given the short (\(\approx\)microsecond) radiative decay timescales.

An instructive point of comparison is the rapidly rotating magnetic B star, \(\sigma\) Ori E, which shows dips that are deeper in the blue than in the red (Hesser et al., 1977). Photometric and spectroscopic observations of this star have been understood in terms of a warped torus of corotating circumstellar material (Landstreet and Borra, 1978; Nakajima, 1985; Townsend et al., 2005). The circumstellar material is unlikely to be dust, which would sublimate quickly at the distance of the torus from the star.5 The opacity source for \(\sigma\) Ori E and its analogs is instead thought to be bound-free absorption by neutral hydrogen (Nakajima, 1985), although to our best knowledge direct evidence for this conclusion has yet to be acquired. Separate and smaller-amplitude continuum flux brightenings in \(\sigma\) Ori E may also come from electrons scattering photospheric light toward the observer when the clouds are not transiting (Berry et al., 2022).

Footnote 5: Zhan et al. (2019) explored the sublimation timescales for a canonical CPV with \(M_{\ast}=0.2M_{\odot}\), \(R_{\ast}=0.3R_{\odot}\), and \(T_{\rm eff}=3200\) K. They found that a non-shielded, generic silicate dust mixture (Draine, 1985) with a single grain size of 0.1 \(\mu\)m reached the \(\approx\)1500 K sublimation temperature at \(\approx\)3 \(R_{\ast}\). This suggests that dust sublimation could be an important effect even for CPVs.

Given these complexities, it seems important for a future theoretical study to determine to what degree the observed chromaticities in CPVs match, or do not match, expectations from radiative transfer. Resolving this issue could settle whether the CPVs are explained by dust or by gas, which in turn bears on whether the material producing the dips comes from the star, or whether it is a byproduct of the protoplanetary disk.

#### 5.6.3 The lifetime constraint

The observed lifetime of the CPV phenomenon could provide another way to discern between the gas and dust clump scenarios. Based on the available statistics from Rebull et al. (2022) and references therein, it seems plausible that CPV occurrence decreases with stellar age from \(\approx\)3% at 10 Myr (Sco-Cen), to \(\approx\)1% at 100 Myr (Pleiades), down to 0% by the \(\approx\)700 Myr age of Praesepe. This is odd in the context of the prominence scenario, because pre-main-sequence M dwarfs spin up over the first 100 Myr; prominences might therefore be expected to be _more_ common at 100 Myr than at 10 Myr, under the assumption that the production of prominences depends only on the stellar rotation rate. The dust clump scenario would hold a natural explanation: the lower occurrence of CPVs around older stars would simply reflect a finite supply of dust. One potential complication however is that the magnetic field topology of rapidly rotating M dwarfs may depend on factors other than the rotation rate (e.g.
the age), which might alter the production of prominences.

### Why are the dip and spot periods nearly equal?

The CPV dips are usually superposed on nearly sinusoidal modulation. The sinusoidal modulation is probably induced by brightness inhomogeneities on the stellar surface. If the dips are explained by material orbiting the star, then the proximity of the spot and dip periods is surprising. For instance, the dust clumps modeled by Sanderson et al. (2023) tend to accumulate near--but not exactly at--corotation.

To explore the proximity of the dip and spot periods, we generated synthetic light curves that superposed a sinusoid and a single eclipse. We imposed periods, amplitudes, sampling, and noise properties similar to typical CPVs in Figure 4. We then tuned the period of the eclipse signal to differ slightly from the sinusoidal period, and processed the resulting synthetic signals through the same period-finding routine to which we subjected the real data. Provided the dip and sinusoid periods agreed to within \(\approx\)0.1%, we found that phase-folding on the dominant period (that of the larger-amplitude starspots) yielded dip signals analogous to those in Figure 4. However, when we increased the period difference beyond \(\gtrsim\)0.3%, the dip signal quickly became "smeared" out and unidentifiable when phase-folding. This exercise highlights that our search was sensitive only to stars with dip and spot periods that agreed to within a few parts per thousand. Some CPVs may exceed this threshold; we encourage future work aimed at determining whether such systems exist.

### Planets or planetesimals near corotation?

Close-in planets are common around M dwarfs; studies from Kepler have shown that early M dwarfs have \(\approx\)0.1 planets per star with sizes between 1-4 \(R_{\oplus}\) and orbital periods within 3 days (Dressing and Charbonneau, 2015). The frequency of planets per star increases to \(\approx\)0.7 when considering periods as long as 10 days. Extrapolating to all small (0.1-4 \(R_{\oplus}\)) planets within 10 days, it is reasonable to expect nearly all M dwarfs to have at least one planet. In the context of disk-driven planet migration, the stopping location for the innermost planet is set by the protoplanetary disk's truncation radius (e.g. Izidoro and Raymond, 2018, and references therein). The truncation radius is often calculated by equating the magnetic pressure from the stellar magnetosphere with the ram pressure of the inflowing gas. As it happens, the truncation radius is close to the corotation radius for low accretion rates (e.g. Romanova and Owocki, 2015; Li et al., 2022). These considerations invite us to imagine one or more planets migrating inward due to gas drag, and arriving at \(\approx\)5-10 stellar radii before the disk is depleted.

With this picture in mind, it is tempting to attribute features of the CPV light curves to transits of material ejected by planets or planetesimals. Young rocky bodies are expected to be hot, and they might expel either gas or dust. The Jupiter-Io system (e.g. Saur et al., 2004) is analogous, in that a small rocky body feeds the construction of a plasma torus. We emphasize that although this type of configuration seems a priori plausible, no direct evidence currently supports it. The main logical function of the planetesimals would be to serve as a source for the occulting gas or dust; they would not necessarily need to explain the observed phases of the dips.
The azimuthal angle of the eventual entrainment could be entirely dictated by the stellar magnetic field. In this scenario, the obscuring material would inspiral from one or more rocky bodies well beyond the corotation radius. The planetesimals themselves would not necessarily need to transit. However if they did, they would need to be \(\lesssim 1\,R_{\oplus}\) based on their non-detections in the TESS data. Possibly analogous systems include K2-22 (Sanchis-Ojeda et al., 2015) and KOI-2700 (Rappaport et al., 2014), though the obscuring material in the CPVs would need to be observed much further from the emitting planet than for those two examples.

A more restrictive variant of the planetesimal scenario would be to posit that the obscuring material remains close to the launching body, similar to comets, or to the aforementioned K2-22 and KOI-2700 systems. If so, then the planetesimals would need to be at the corotation radius. One prediction would therefore be that certain orbital phases would produce recurrent dips when observed over sufficiently long baselines, because the launching planetesimal would be massive enough to remain in orbit, while stochastically ejecting material. For most CPVs (Figure 6), the data seem to be in tension with this expectation because the relative spacing between dips is almost never conserved. With that said, certain sources do seem to exhibit "special phases", including LP 12-502 (TIC 402980664), DG CVn (TIC 368129164), TIC 193831684, and TIC 146539195. One possible explanation for this might be that obscuring material remains close to its launching body, or bodies. An alternative explanation could be that the stellar magnetic field configurations responsible for confining said material are stable over the existing two-year baseline.

### Mass flux estimate

Assuming for the moment that the obscuring material is dust, we can estimate the mass of a transiting clump. First, we convert the transit depth into an effective cloud radius, \(R_{\rm cloud}\). For most CPVs in Figure 4, this yields \(\approx\)2-20 \(R_{\oplus}\). A minimum constraint on the number density of dust particles is obtained by requiring the cloud to be optically thick. For cases like LP 12-502, this is reasonable because the transit duration of the shortest dips implies \(R_{\rm cloud}\ll R_{*}\). Carrying out the relevant calculation assuming the dust grains are 1 \(\mu\)m in size, Sanderson et al. (2023) reported minimum cloud masses of order 10\({}^{12}\) kg (their Eq. 23), which scale linearly with both the optical depth and the dust grain radius. This is comparable to a small asteroid; the asteroid belt itself has a mass of order \(\approx\)10\({}^{21}\) kg (Park et al., 2019). A similar calculation that assumed occulting clumps of hydrogen, rather than dust, derived gas prominence masses of at least 10\({}^{14}\) kg (Collier Cameron et al., 1990), about 100\(\times\) larger than the lower limit on the dust mass.

If the disappearance of a dip represents the permanent loss of the obscuring material--for example, if it is the result of a dust clump being accreted or ejected--then we can also estimate the rate at which mass is flowing through the structures that lead to dips. For instance, LP 12-502 showed three "state-switch" events over the six months of available TESS observations, during cycles 261, 309, and 1241 (Figure 9). The other source for which we performed a comparable analysis, TIC 300651846 (Appendix B), showed two state-switches over 11 months.
In all such cases, at least one dip turned off. For purposes of estimation, we will take LP 12-502 as our prototype. Assuming the occulting material is dust, the corresponding \(\dot{M}\equiv M\cdot{\rm d}N/{\rm d}t\) time-averaged over six months is \(\approx\)\(1\times 10^{-12}M_{\oplus}\,{\rm yr}^{-1}\). Considered cumulatively over the \(\approx\)\(10^{8}\) years for which the CPV phenomenon is observed, this yields a cumulative moved dust mass of 10\({}^{-4}M_{\oplus}\), of order the mass of the Solar System's asteroid belt. If the occulting material is gas, the lower mass bounds would be of order 100 times larger. For cases in which we observe the _growth_ of dips, such as the Sector 29 data for TIC 224283342, or Sector 5 of TIC 294328885, the dip depths typically increase by of order a few percent over ten to twenty days. This growth rate yields a mass flux one order of magnitude larger than the earlier estimate.
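The arithmetic behind these numbers fits in a few lines. This is a rough sketch built only from the quantities quoted above (the 10\({}^{12}\) kg minimum clump mass and three state-switches in six months); the Earth mass is the only constant we add.

```python
M_EARTH_KG = 5.972e24            # Earth mass [kg]

m_clump_kg = 1.0e12              # minimum dust clump mass (Sanderson et al. 2023)
switch_rate_per_yr = 3.0 / 0.5   # three state-switches in six months (LP 12-502)

# Time-averaged mass flux, Mdot = M * dN/dt, in Earth masses per year:
mdot = m_clump_kg * switch_rate_per_yr / M_EARTH_KG
print(f"Mdot ~ {mdot:.0e} M_earth/yr")              # ~ 1e-12

# Cumulative dust moved over the ~1e8 yr lifetime of the phenomenon:
print(f"cumulative ~ {mdot * 1.0e8:.0e} M_earth")   # ~ 1e-4: an asteroid belt
```

If the clumps are instead gas prominences of \(\gtrsim 10^{14}\) kg, both outputs scale up by the same factor of \(\approx\)100.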
### From dippers to debris disks
About one in three young stars with infrared-detected inner dusty disks shows quasiperiodic or stochastic dimming over timescales of roughly one day (e.g. Alencar et al., 2010; Cody and Hillenbrand, 2010). The dimming amplitudes can reach a few tenths of the stellar brightness, and dips with identical depths and phases rarely recur. These "dipper" stars are probably explained by occulting circumstellar dust in the inner disk (e.g. Cody et al., 2014; Ansdell et al., 2016; Robinson et al., 2021; Capistrant et al., 2022). While the phenomenon can persist beyond \(\approx\)10 Myr (Gaidos et al., 2019, 2022), in all such cases it seems to be associated with the presence of infrared excesses. Phenomenologically, dippers are different from CPVs in that their dips are usually deeper, less periodic, and more variable in depth over timescales of only one or a few cycles. Dipper stars also tend to be younger, since they tend to be classical T Tauri stars with infrared excesses. In identifying the two candidate CPVs with outlying SEDs (TIC 193136669 and TIC 57830249; Section 3.3), we were prompted to reconsider our light curve-based labeling, and ultimately concluded that these sources are dippers. This episode suggests that there could be overlap between CPVs and dippers. Taking TIC 57830249 as one example, the Sector 36 TESS data are suggestive of a CPV, with relatively periodic, sharp dips with depths of a few percent. The Sector 10 data are completely different, varying in apparent flux by a factor of two, with no discernible periodicity at all. Perhaps this source becomes a "dipper" when an inflow of dust reaches the inner disk wall, and is otherwise a "CPV" when the inner disk is starved of dust. Although TIC 57830249 is an intriguing outlier, the general picture is that stars without infrared excesses have more stable optical light curves than those with infrared excesses. While some dippers may evolve into CPVs after the disk is mostly gone, this would be generically expected based on population statistics: young objects become old. There may be no other causal connection between the two evolutionary stages. With that said, a mystery common to the CPVs and dippers is how exactly the _narrowness_ of their flux dimmings is produced. A similar mechanism may operate for both types of object, tied perhaps to a shared magnetic topology, or perhaps to a preference for dust to inspiral to the star in clumped structures.
### Strengthening the magnetic B star connection
Stauffer et al. (2017) previously noted a possible connection between the CPVs and rapidly rotating magnetic B stars such as \(\sigma\) Ori E, which can have circumstellar gas clouds trapped in corotation (Townsend et al., 2005). The \(\sigma\) Ori class is distinct from Be-star decretion disks, which systematically lack detectable magnetic fields (Rivinius et al., 2013; Wade et al., 2016). An argument against the connection between CPVs and the \(\sigma\) Ori E analogs is that the light curve of \(\sigma\) Ori E is simpler than those in Figure 4, with only two broad local minima, and one "hump" (Figure 10; see also Jayaraman et al., 2022). Within the model proposed by Townsend et al., the simplicity of the light curve is the result of a simple dipolar magnetic field, which is typical of magnetic B stars (Auriere et al., 2007; Donati and Landstreet, 2009). The magnetic axis needs to be tilted relative to the stellar spin axis in order to match the qualitative behavior of both the broadband light curves, and the line-profile variations seen in hydrogen, helium, and carbon (Oksala et al., 2012). Two interesting and possibly telling exceptions to the rule that magnetic B stars have simple light curves are HD 37776 and HD 64740. HD 37776 is known from spectropolarimetry to have an extreme field geometry dominated by high order multipoles (Kochukhov et al., 2011). The field geometry of HD 64740, while potentially less extreme, also shows evidence for a non-dipolar contribution (Shultz et al., 2018). Recent TESS light curves of these two B stars appear surprisingly similar to the CPV light curves (Mikulasek et al., 2020). The middle row of Figure 10 shows the phased TESS light curves for these two stars, with by-eye best-matching CPVs shown underneath for comparison. The number of dips per cycle, the shapes of the dips, and the dip depths relative to the sinusoidal envelope are all similar. This connection suggests that the highly structured light curves of both the M dwarfs and the B stars are associated with (and perhaps caused by) strong non-dipolar magnetic fields. Non-dipolar fields for M dwarfs are plausible, given that Zeeman Doppler Imaging has revealed non-axisymmetric magnetic field patterns for the few M dwarfs for which this technique is technically feasible (see Kochukhov, 2021, and references therein). The physical similarity between the B stars and the M dwarfs could have its origin in the existence of a "centrifugal magnetosphere" (see Petit et al., 2013). In other words, both classes of objects might satisfy the condition \(R_{\rm m}>R_{\rm c}\), for \(R_{\rm m}\) the magnetosphere radius (sometimes called the Alfven radius). Provided that charged particles are confined to move along magnetic field lines, material can then build up at the corotation radius (e.g. Romanova and Owocki, 2015, Sec. 4, and references therein).
Figure 10: **The magnetic B star connection.** \(\sigma\) Ori E and HD 345439 (top row) are magnetic B stars with predominantly dipolar magnetic fields known to host circumstellar plasma tori. HD 37776 and HD 64740 (middle row) are analogous magnetic B stars with field topologies potentially dominated by high order multipoles. The bottom row compares the latter systems against the "best-matching" CPV light curves, selected by eye from Figure 4. CPVs have light curves that are visually similar to the topologically complex magnetic B stars. Stellar masses rounded to one significant figure are given in the lower right of each panel; the star, TESS sector, and period are listed in each subtitle.
In the converse "dynamical" case, when \(R_{\rm m}<R_{\rm c}\), material interior to the magnetospheric radius returns to the stellar surface over the free-fall timescale. A simple estimate assuming a dipole field with \(B_{0}\approx 1\) kG at the star's surface, a local plasma number density \(n\approx 10^{9}\) cm\({}^{-3}\), and a plasma temperature of \(10^{6}\) K gives magnetospheric radii of order a few times the corotation radii, \(R_{\rm c}\). This suggests that the existence of a centrifugal magnetosphere is plausible for young, rapidly rotating M dwarfs.
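The pressure-balance estimate can be made concrete. The following sketch assumes (since the exact prescription is not specified above) that the magnetospheric radius is set by balancing the dipole magnetic pressure \(B(r)^{2}/8\pi\), with \(B(r)=B_{0}(R_{*}/r)^{3}\), against the plasma thermal pressure \(\sim 2nk_{\rm B}T\); the stellar mass, radius, and rotation period are assumed typical CPV values, so the output is an order-of-magnitude statement only.

```python
import numpy as np

# CGS constants
k_B, G = 1.380649e-16, 6.674e-8
M_SUN, R_SUN = 1.989e33, 6.957e10

# Quantities quoted in the text:
B0, n, T = 1.0e3, 1.0e9, 1.0e6                    # [G], [cm^-3], [K]
# Assumed typical CPV star and rotation period:
M_star, R_star, P_rot = 0.2 * M_SUN, 0.3 * R_SUN, 0.5 * 86400.0

# Magnetospheric radius from B(r)^2 / 8 pi = 2 n k_B T with a dipole field:
R_m = R_star * (B0**2 / (16.0 * np.pi * n * k_B * T)) ** (1.0 / 6.0)

# Corotation radius from Kepler's third law:
R_c = (G * M_star * P_rot**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)

print(f"R_m ~ {R_m / R_star:.1f} R*,  R_c ~ {R_c / R_star:.1f} R*,  "
      f"R_m/R_c ~ {R_m / R_c:.1f}")   # ~7 R*, ~5 R*, ratio above unity
```

Because of the 1/6 power, the conclusion \(R_{\rm m}>R_{\rm c}\) is insensitive to factor-of-a-few changes in the assumed pressure term, which is the point of the estimate.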
## 6 Conclusions
In this work, we searched 2-minute cadence TESS data collected from 2018 July to 2022 September for complex periodic variables (CPVs). The target stars were 65,760 late-K and early-to-mid M dwarfs within 150 pc and with TESS magnitudes _T_<16. The selection function included \(>\)80% of such stars within 30 pc, and \(<\)10% of such stars at distances exceeding 100 pc (Figure 2). We found 50 objects that showed complex quasiperiodic behavior over at least one TESS sector. These 50 bona fide CPVs are listed in Table 1. This table also includes 13 ambiguous CPVs, whose designation is less certain, and 3 impostors. We inferred ages for all but two of the 66 objects based on memberships in young stellar associations; we also derived temperatures and radii using SED fitting, and inferred stellar masses by interpolating against stellar evolutionary models. We caution that our sample is far from being volume-limited and is not even magnitude-limited: the TESS 2-minute stellar sample had a heterogeneous selection function which may have been biased in favor of young stars over field stars. Previous work, however, has shown that \(\approx\)1-3% of M dwarfs younger than \(\approx\)100 Myr show the CPV phenomenon (Rebull et al., 2016; Gunther et al., 2022; Rebull et al., 2022). Analyzing the TESS light curves and stellar properties of our CPVs, we draw the following conclusions.
1. The sharpest CPV dips have durations of \(\approx\)0.05 \(P\) and depths of \(\approx\)1-3% (Figures 4 and 6). Explaining dips this sharp requires material extrinsic to the stellar surface (see Section 1).
2. The shortest CPV dips, also with durations of \(\approx\)0.05 \(P\), match the expected transit duration for a small body at the corotation radius, \(T_{\rm dur}\equiv R_{*}P_{\rm rot}/(\pi a)\) (see Section 4.2.2). Such dips are therefore likely produced by transits of bodies or distributions of optically-thick material that are smaller than the star.
3. Many CPV dips have durations a few times longer than \(T_{\rm dur}\) (Figure 4). The dips are often superposed on a quasi-sinusoidal signal that presumably originates from starspots and faculae on the stellar surface. The only viable explanation currently known for sharp dips being superposed on the starspot signals is that concentrations ("clumps") of circumstellar material corotate with the star. Assuming that the longer dips have the same physical origin as the shortest dips, the corotating clumps must also be capable of having sizes comparable to the star.
4. The mean periods of CPVs remain fixed to within a relative precision \(\lesssim\)0.1% over the two-year (\(\approx\)1,000 cycle) baseline of available observations. The light curve shapes always evolve over this timescale (Figure 6).
5. The dips in CPV light curves can have slightly different periods. LP 12-502, for instance, showed dips with four distinct periods within \(\pm\)0.3% of its fundamental period, sometimes simultaneously, and each lasting for up to 50 cycles (Figure 9).
6. The CPV peaks and dips evolve over timescales that are both secular (\(\approx\)100 cycles) and impulsive (\(<\)1 cycle). Dip growth seems to happen over durations of at least ten cycles, and slow dip decay can also occur. "State-switches" correspond to dips collapsing instantaneously; they occur once every few months for both LP 12-502 and TIC 300651846. State-switches are almost always linked with observed optical flares. Such switches are suggestive of magnetic reconnection opening the "magnetic cage" that traps the dust.
7. The detailed morphology changes exhibited by LP 12-502 during its state-switches (e.g. Figure 8, cycles 1233-1264) imply that the flux dips are additive and independent.
8. The on-off duty cycle for CPVs is \(\approx\)75%, based on the fraction of bona fide CPVs that either turned on or turned off during TESS re-observations, two years after the initial observation (Figure 6).
9. The CPV phenomenon persists for \(\gtrsim\)150 Myr, based on the existence of multiple CPVs in AB Dor, the Pleiades, and Psc-Eri (Section 2.3). It may even extend to 200 Myr, based on the one CPV we found in the Carina Near moving group (TIC 294328887; \(\approx\)200 Myr). The lack of detected CPVs in the Hyades and Praesepe suggests that the lifetime of the phenomenon is limited to the first few hundred million years.
10. Most CPVs are M dwarfs with masses 0.1-0.4 \(M_{\odot}\). Two sources, TIC 405754448 and TIC 405910546, have masses that appear to exceed 0.5 \(M_{\odot}\). Both are potentially binaries, and this may confuse our ability to accurately identify the source of the CPV signal (Section 3.4.2). We encourage additional scrutiny of these objects in future work.
11. The closest CPVs to the Sun are at distances of 15-20 pc; the brightest have \(V\)\(\approx\)12 (\(J\)\(\approx\)7.5). We have found most of the close exemplars in this work, since our CPV sample was \(\gtrsim\)80% complete within 30 pc. The lack of CPVs in the volume-complete \(<\)15 pc sample of 0.1-0.3 \(M_{\odot}\) stars analyzed by Winters et al. (2021) is consistent with this estimate. Expanding our analysis of the TESS data to the full frame images would yield a truly volume-limited selection function, and would expand the CPV census by about a factor of two within 50 pc, and by a factor of ten within 100 pc.
12. Surprising analogs to CPVs exist in two magnetic B stars, one of which is known to have an extreme multipolar field topology (Section 5.11). Since most magnetic B stars have dipolar magnetic fields, this suggests that the CPV dips and warps are similarly being sculpted by the stellar magnetic fields, and that the magnetic fields themselves are potentially also multipolar.
13. The rate of dip evolution can be used to place a model-dependent lower bound on how much material is either being accreted or ejected during the state changes (Section 5.9). Order of magnitude estimates require at least an asteroid belt's worth of dust (\(10^{-4}\,M_{\oplus}\)) over \(10^{8}\) years, or at least \(\approx\)\(10^{-2}\,M_{\oplus}\) if the occulting material is gas.
While many questions remain, two in particular will be important for clarifying what these objects might teach us in a broader astrophysical context: _1)_ Is the eclipsing material responsible for the phenomenon gas or dust?
_2)_ What sets the characteristic clumping size for the circumstellar material? The distinction between gas and dust is important because it could clarify whether the CPV phenomenon is intrinsic, so that material comes from the star, or extrinsic, so that it is sourced through some generic evolutionary phase of debris disks. This knowledge would in turn propagate to our understanding of whether the phenomenon is primarily teaching us about dust production and processing in gas-poor disks, or whether it is teaching us about the ability of cold gas to remain stable in hot stellar coronae for long durations. Observationally, acquisition of medium- or high-resolution time-series spectra stands a good chance of resolving the gas vs. dust question. Given our observed \(\approx\)75% on-off duty cycles, such data must be acquired simultaneously with photometric time-series observations (e.g. during TESS re-observation) in order for detections and non-detections to be interpretable. In both the gas and dust scenarios, CPVs are preferentially viewed edge-on. This implies that after correcting for the line-of-sight inclination, roughly one third of low-mass stars (those that rotate rapidly enough; Gunther et al. 2022) could trap circumstellar material in the same way. It also suggests that CPVs may preferentially show transiting planets at larger distances than the corotating material, though this conclusion would be dependent on whether the magnetic and stellar spin axes tend to be aligned. Given these points, observational follow-up work should include searching for outer transiting planets, and measuring equatorial velocities in order to test whether the stellar inclination angles are indeed preferentially edge-on. Any source of empirical information on the stellar magnetic field, whether from the Zeeman effect (e.g. Kochukhov 2021) or perhaps radio emission (e.g. Hallinan et al. 2015), could also help clarify the strength of the magnetospheres for these objects. On the theoretical front, building a physical understanding of what sets the characteristic size scale of the clumping material would help clarify why the light curves have the bizarre shapes that are observed. The relevant puzzles in plasma physics and radiative transfer could perhaps be connected to our understanding of the close-in rocky planets that are expected to be present around most of these stars. LGB is grateful for support from the Heising-Simons 51 Pegasi b Fellowship, and for helpful conversations with J. Spake, A. Mann, G. Laughlin, and B. Draine. We are also grateful for the assistance of S. Yee, L. Weiss, H. Isaacson, and A. Howard in acquiring and reducing the HIRES spectra, and for the reviewer's suggestion to consider the proximity of the dip and spot periods. This paper relied primarily on data collected by the TESS mission; the specific 2-minute cadence observations can be accessed via DOI 10.17909/t9-nmc8-f686. Funding for the TESS mission is provided by NASA's Science Mission Directorate. We thank the TESS Architects (G. Ricker, R. Vanderspek, D. Latham, S. Seager, and J. Jenkins) and the many TESS team members for their efforts to make the mission a continued success. LP 12-502 in particular was observed at 2-minute cadence thanks to the TESS Guest Investigator programs G022252 (PI: J. Schlieder; Sectors 18, 19, 25, 26) and G04168 (PI: R. Jayaraman; Sector 53). ADU acknowledges support by NASA under award number 80GSFC21M0002, as well as ROSES award 22-ADAP22-00707.
LGB conceived the project, performed the dip-counting search, light curve classification, cluster membership, SED, variability, and secondary-period analyses, and wrote the manuscript. RJ and SR performed the Fourier-based analysis and contributed to light curve classification. LR cross-examined the light curve classification, and contributed an independent SED analysis. ADU identified the magnetic B star connection. LAH contributed to project design. JNW, SR, and LAH significantly improved the clarity of the manuscript. GAB acquired and maintained the servers used to run the dip-finding pipeline.
Software: astrobase (Bhatti et al. 2021), astropy (Astropy Collaboration et al. 2013, 2018, 2022), lightkurve (Lightkurve Collaboration et al. 2018), numpy (Harris et al. 2020), pyGAM (Serven and Brummitt, 2018), scipy (Virtanen et al. 2020), TESS-point (Burke et al. 2020), wotan (Hippke et al. 2019).
Astrometry: Gaia (Gaia Collaboration et al. 2018, 2023). Imaging: Second Generation Digitized Sky Survey. Spectroscopy: Keck-I (HIRES; Vogt et al. 1994). Photometry: TESS (Ricker et al. 2015). Broadband photometry: 2MASS (Skrutskie et al. 2006), APASS (Henden et al. 2016), Gaia (Gaia Collaboration et al. 2018, 2023), SDSS (York et al. 2000), WISE (Wright et al. 2010; Cutri et al. 2021).
2305.20005
$τ$ data-driven evaluation of Euclidean windows for the hadronic vacuum polarization
We compute for the first time the $\tau$ data-driven Euclidean windows for the hadronic vacuum polarization contribution to the muon $g-2$. We show that $\tau$-based results agree with the available lattice window evaluations and with the full result. On the intermediate window, where all lattice evaluations are rather precise and agree, $\tau$-based results are compatible with them. This is particularly interesting, given that the disagreement of the $e^+e^-$ data-driven result with the lattice values in this window is the main cause for their discrepancy, affecting the interpretation of the $a_\mu$ measurement in terms of possible new physics.
Pere Masjuan, Alejandro Miranda, Pablo Roig
2023-05-31T16:25:32Z
http://arxiv.org/abs/2305.20005v3
# \(\tau\) data-driven evaluation of Euclidean windows for the hadronic vacuum polarization ###### Abstract We compute for the first time the \(\tau\) data-driven Euclidean windows for the hadronic vacuum polarization contribution to the muon \(g-2\). We show that \(\tau\)-based results agree with the available lattice window evaluations and with the full result. On the intermediate window, where all lattice evaluations are rather precise and agree, \(\tau\)-based results are compatible with them. This is particularly interesting, given that the disagreement of the \(e^{+}e^{-}\) data-driven result with the lattice values in this window is the main cause for their discrepancy, affecting the interpretation of the \(a_{\mu}\) measurement in terms of possible new physics.
## 1 Introduction
The first Fermilab measurement of the muon anomalous magnetic moment \((a_{\mu}=(g_{\mu}-2)/2\), with \(g\) the gyromagnetic factor) confirmed [1] the final result from Brookhaven [2], yielding the world average \[a_{\mu}^{\rm Exp}\times 10^{11}=116592061(41)\,. \tag{1}\] This reaffirmed and strengthened the interest of the high-energy physics community in this observable, given its \(4.2\sigma\) tension with the Standard Model prediction [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], obtained within the Muon g-2 Theory Initiative [36].
Footnote 1: See later developments in e.g. refs. [37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62].
\[a_{\mu}^{\rm SM}\times 10^{11}=116591810(43)\,. \tag{2}\] However, the BMWc lattice Collaboration published [63] a precise lattice-QCD evaluation of the hadronic vacuum polarization (HVP) contribution with the result \[a_{\mu}^{\rm BMWc}\times 10^{10}=707.5\pm 5.5\,, \tag{3}\] which yields an \(a_{\mu}\) evaluation at \(1.5\sigma\) from \(a_{\mu}^{\rm Exp}\), Eq. (1), and at \(2.1\sigma\) from the data-driven result, based on \(\sigma(e^{+}e^{-}\rightarrow\)hadrons), that was employed to get Eq. (2). Recently, three other accurate lattice evaluations, by the Mainz CLS [64], the ETMC [65] and the RBC/UKQCD [59] collaborations, have agreed within the so-called intermediate window (defined below, in Eq. (6)) with the BMWc result, having commensurate uncertainties. Reference [66], which lays the groundwork for this project, has translated the data-driven results to the three different windows in Euclidean time used by lattice practitioners, to scrutinize the root of the current discrepancy between both groups of results. This is a crucial endeavour, since this discrepancy currently limits the implications on new physics of the \(a_{\mu}\) measurements, and the window decomposition may help to pinpoint in which energy domain discrepancies emerge. It is also a timely task, as the second FNAL \(a_{\mu}\) value, with improved precision, is expected in the forthcoming months. The situation has become even more puzzling with the recent accurate \(\sigma(e^{+}e^{-}\to\pi^{+}\pi^{-})\)2 CMD-3 measurement [67], which would -on its own- reduce the discrepancy with \(a_{\mu}^{\rm Exp}\) to less than two standard deviations. Conversely, it is in tension with the previous measurements of this reaction, by CMD-2 [68, 69, 70, 71], SND [72, 73], KLOE [74, 75, 76, 77], BaBar [78], BES [79] and CLEO [80]. Dedicated studies related to the radiative corrections employed in the Monte Carlo generators used by these experiments [81] (see e.g. refs.
[82, 83, 84]) may explain this controversy. Footnote 2: This contribution yields a bit more than 70% of \(a_{\mu}^{\rm HVP,LO}\). Within the data-driven evaluations of \(a_{\mu}^{\rm HVP}\), \(\tau^{-}\to\nu_{\tau}\)hadrons was proposed in 1997 to reduce the error (by \(\sim 37\%\) then) of the method when using only \(e^{+}e^{-}\to\)hadrons in LEP times [85]. Through the years, this alternative data-driven method has always been [86, 87, 88, 89, 90, 91] approximately \([2,2.5]\sigma\) away from the \(a_{\mu}^{\rm Exp}\) world average, a situation which now seems favored by the lattice QCD evaluations of \(a_{\mu}^{\rm HVP}\)3. Still, a direct comparison between lattice QCD results and the \(\tau\) data-driven methodology is not straightforward either. This motivates us to compute in this work Euclidean windows for \(a_{\mu}^{\rm HVP}\) using \(\tau\) data and compare results. We hope our outcomes may shed some light on this puzzling situation and be useful for the lattice effort [96] to compute the required isospin-breaking corrections (see below Eq. (6)) entering the \(\tau\)-based method. In Section 2 we explain the needed formulae and apply them to obtain the results, which are summarized in Section 3. Footnote 3: The difference between both data-driven groups of results could also be due to non-standard interactions modifying slightly -but noticeably at this precision- the di-pion \(\tau\) decays [92, 93, 94, 95].
## 2 Results
The HVP contribution at leading order (LO) in the data-driven approach is given by [97, 98, 99, 100] \[a_{\mu}^{\rm HVP,LO}=\frac{1}{4\pi^{3}}\int_{s_{\rm thr}}^{\infty}ds\,K(s)\, \sigma_{e^{+}e^{-}\to{\rm hadrons}(+\gamma)}^{0}(s), \tag{4}\] where \(\sigma_{e^{+}e^{-}\to{\rm hadrons}(+\gamma)}^{0}(s)\) is the bare hadronic cross-section, which excludes effects from vacuum polarization (VP) [101], and \(K(s)\) is a smooth kernel concentrated at low energies [99] \[K(s)=\frac{x^{2}}{2}(2-x^{2})+\frac{(1+x^{2})(1+x)^{2}}{x^{2}}\left(\log(1+x) -x+\frac{x^{2}}{2}\right)+\frac{(1+x)}{(1-x)}x^{2}\log(x), \tag{5}\] which is written in terms of the variable \(x=\frac{1-\beta_{\mu}}{1+\beta_{\mu}}\), \(\beta_{\mu}=\sqrt{1-4m_{\mu}^{2}/s}\). This dispersive approach relies on the availability of \(e^{+}e^{-}\) hadronic cross-section measurements at energies below a few GeV. However, as was pointed out early by Alemany, Davier, and Hocker [85], it is also possible to use hadronic \(\tau\) decays to evaluate the LO HVP contributions to \(a_{\mu}\). For some time this approach was competitive with the \(e^{+}e^{-}\) data, although this is generally not considered the case at the moment [36]. Recently, a new \(\tau\) data-driven approach was performed in Ref. [91]. In this section, we utilize their results to evaluate the leading HVP contribution to the anomalous magnetic moment of the muon in the so-called window quantities [10, 102]. For this enterprise, we make use of the weight functions in center-of-mass energy \(\hat{\Theta}(s)\) from Eq. (12) in Ref. [66] which are related to those in Euclidean time [10] \[\begin{split}\Theta_{SD}(t)&=1-\Theta(t,t_{0},\Delta),\\ \Theta_{win}(t)&=\Theta(t,t_{0},\Delta)-\Theta(t,t_{1}, \Delta),\\ \Theta_{LD}(t)&=\Theta(t,t_{1},\Delta),\\ \Theta(t,t^{\prime},\Delta)&=\frac{1}{2}\left(1+\tanh \frac{t-t^{\prime}}{\Delta}\right).\end{split} \tag{6}\] The subscripts in Eq.
(6) refer to the short-distance (\(SD\)), intermediate (\(win\), although we will use \(int\) in the following) and long-distance (\(LD\)) contributions with parameters \[t_{0}=0.4\ \text{fm},\quad t_{1}=1.0\ \text{fm},\quad\Delta=0.15\ \text{fm}, \tag{7}\] which correspond to inverse energies of the order of \(500,200\) and \(1300\) MeV, respectively. In what follows, we will focus on the dominant \(2\pi\) contribution only. Including isospin breaking (IB) corrections, i.e., \(\mathcal{O}[(m_{u}-m_{d})p^{2}]\) and \(\mathcal{O}(e^{2}p^{2})\) contributions, the _bare_ hadronic \(e^{+}e^{-}\) cross-section \(\sigma^{0}_{\pi\pi(\gamma)}\) is related to the _observed_ differential \(\tau\) decay rate \(d\Gamma_{\pi\pi[\gamma]}\) through [86, 87] \[\sigma^{0}_{\pi\pi(\gamma)}=\left[\frac{K_{\sigma}(s)}{K_{\Gamma}(s)}\frac{d \Gamma_{\pi\pi[\gamma]}}{ds}\right]\times\frac{R_{\text{IB}}(s)}{S_{\text{EW} }}, \tag{8}\] where \[\begin{split} K_{\Gamma}(s)&=\frac{G_{F}^{2}|V_{ud }|^{2}m_{\tau}^{3}}{384\pi^{3}}\left(1-\frac{s}{m_{\tau}^{2}}\right)^{2}\left( 1+\frac{2s}{m_{\tau}^{2}}\right),\\ K_{\sigma}(s)&=\frac{\pi\alpha^{2}}{3s},\end{split} \tag{9}\] and the IB corrections are encoded in \[R_{\text{IB}}(s)=\frac{\text{FSR}(s)}{G_{\text{EM}}(s)}\frac{\beta_{\pi^{+}\pi^ {-}}^{3}}{\beta_{\pi^{+}\pi^{0}}^{3}}\left|\frac{F_{V}(s)}{f_{+}(s)}\right|^{2}. \tag{10}\] The \(S_{\text{EW}}\) term in Eq. (8) includes the short-distance electroweak corrections [103, 104, 105, 106, 107, 108, 109, 110]. FSR refers to the Final-State-Radiation corrections to the \(\pi^{+}\pi^{-}\) channel [111, 112], while the \(G_{\text{EM}}(s)\) factor includes the QED corrections to the \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) decay with virtual plus real photon radiation. \(\beta_{\pi^{-}\pi^{+}}^{3}/\beta_{\pi^{-}\pi^{0}}^{3}\) is a phase space correction owing to the \(\pi^{\pm}-\pi^{0}\) mass difference. The last term in \(R_{\text{IB}}(s)\) corresponds to the ratio between the neutral (\(F_{V}(s)\)) and the charged (\(f_{+}(s)\)) pion form factor, which includes one of the leading IB effects, the \(\rho^{0}-\omega\) mixing correction. The IB corrections to \(a_{\mu}^{\text{HVP, LO}}\) using \(\tau\) data in the dominant \(\pi\pi\) channel can be evaluated using the following expression [88] \[\Delta a_{\mu}^{\text{HVP, LO}}[\pi\pi,\tau]=\frac{1}{4\pi^{3}}\int_{4m_{\pi} ^{2}}^{m_{\tau}^{2}}\!ds\,K(s)\left[\frac{K_{\sigma}(s)}{K_{\Gamma}(s)}\frac{d \Gamma_{\pi\pi[\gamma]}}{ds}\right]\left(\frac{R_{\text{IB}}(s)}{S_{\text{EW }}}-1\right), \tag{11}\] which measures the difference between the correct expression for \(\sigma^{0}_{\pi\pi(\gamma)}\) and the naive Conserved Vector Current approximation, with \(S_{\text{EW}}=1\) and \(R_{\text{IB}}(s)=1\). These contributions are summarized in Table 1 for each IB correction. * The \(S_{\text{EW}}\) factor (\(S_{\text{EW}}=1.0201\), at the scale \(m_{\tau}\)) contributes \(3.0\%\), \(28.7\%\) and \(68.3\%\) to the complete \(\Delta a_{\mu}^{\text{HVP, LO}}=-10.31(0)\cdot 10^{-10}\) correction for the \(SD\), \(int\) and \(LD\) contributions, respectively. * The phase space (PS) correction yields \(1.7\%\), \(18.7\%\) and \(79.6\%\) of a total of \(\Delta a_{\mu}^{\text{HVP, LO}}=-7.45(0)\cdot 10^{-10}\) for the \(SD\), \(int\) and \(LD\) contributions, respectively. * The final state radiation (FSR) induces \(2.9\%\), \(27.0\%\) and \(70.1\%\) of a total of \(\Delta a_{\mu}^{\text{HVP, LO}}=+4.55(45)\cdot 10^{-10}\) for the \(SD\), \(int\) and \(LD\) contributions, respectively. 
* \(G_{\text{EM}}(s)\) was originally computed in Ref. [87] including those operators yielding resonance saturation of the \(\mathcal{O}(p^{4})\) chiral couplings in the frame of Resonance Chiral Theory (R\(\chi\)T) [113, 114, 115]. In Ref. [91] (see also [126]), two of us explored the impact of the R\(\chi\)T operators contributing to resonance saturation at the next chiral order (\({\cal O}(p^{6})\)) on \(G_{\rm EM}(s)\), as well as estimating the uncertainty of the original computation in [87]. These results [91] are consistent with the earlier R\(\chi\)T and the VMD estimates [91]. Availing of these results, \(G_{\rm EM}(s)\) produces a correction of \(\sim-3.5\%\), \(\sim-17.1\%\) and \(\sim+120.6\%\) of \(\Delta a_{\mu}^{\rm HVP,\;LO}=-1.70(^{0.61}_{1.48})\cdot 10^{-10}\) at \({\cal O}(p^{4})\), and \(\sim 0.4\%\), \(\sim 7.6\%\) and \(\sim 92.0\%\) of \(\Delta a_{\mu}^{\rm HVP,\;LO}=-7.59(^{6.50}_{4.56})\cdot 10^{-10}\) at \({\cal O}(p^{6})\) for the \(SD\), \(int\) and \(LD\) contributions, respectively. Interestingly, the \(SD\) and \(int\) contributions at \({\cal O}(p^{4})\) change in sign, while this is not the case at \({\cal O}(p^{6})\), where all the contributions are always negative. * The ratio of the form factors (FF) gives an overall correction of \(\Delta a_{\mu}^{\rm HVP,\;LO}=+7.10(1.48)(^{1.59}_{1.54})(^{0.85}_{0.80})\cdot 1 0^{-10}\), from which \(\sim 2.2\%\), \(\sim 26.8\%\) and \(\sim 71.0\%\) stand for the SD, int and LD corrections, respectively. The errors quoted in this contribution correspond to the electromagnetic shifts in the widths and masses of the \(\rho\) meson, and to the \(\rho^{0}-\omega\) mixing parameter [87, 127], respectively. For this analysis, we use the same numerical inputs as in [91]. The central value reported in Table 1 corresponds to the weighted average between the FF1 and FF2 sets 6.
Footnote 6: The main distinction between FF1 and FF2 comes from the width difference between the \(\rho^{\pm}\) and \(\rho^{0}\) mesons (\(\Delta\Gamma_{\rho}=\Gamma_{\rho^{0}}-\Gamma_{\rho^{\pm}}\)), while the mass difference (\(\Delta M_{\rho}=M_{\rho^{0}}-M_{\rho^{\pm}}\)) and the \(\theta_{\omega\rho}\) parameter are the same in both. The overall correction is also consistent with those in Refs. [87, 88, 91]. Using the \(\tau\) spectral functions measured by ALEPH [128], Belle [129], CLEO [130] and OPAL [131], we evaluate \(a_{\mu}^{\rm HVP,\;LO}[\pi\pi]\) using the window parameters in Eq. (7). These results are outlined in Tables 2 and 3 for \(s\leq 1\), 2, 3, and 3.125 \(\,{\rm GeV}^{2}\), i.e., integrating Eq. (4) with \(\sigma^{0}_{\pi\pi(\gamma)}\) in Eq. (8) from \(s_{\rm thr}=4m_{\pi}^{2}\) up to some given cut-off. In the aforementioned tables 2 and 3, the first uncertainty is connected to the systematic errors on the mass spectrum, and from the \(\tau\)-mass and \(V_{ud}\) uncertainties; the second error is associated to \(B_{\pi\pi^{0}}\) and \(B_{e}\); and the third one is due to the IB corrections. The _Mean_ value in the tables corresponds to the weighted average from the different window contributions for each experiment, the first error is related to the experimental measurements, while the second one comes from the IB corrections. An evaluation of \(a_{\mu}^{\rm HVP}\) in the windows in Euclidean time using \(e^{+}e^{-}\) data was performed in Ref. [66] using the parameters in Eq. (7). A comparison between these window quantities for HVP in the \(2\pi\) channel below \(1\,{\rm GeV}\) amounts to a discrepancy of \(4.3\,\sigma\), \(3.2\,\sigma\) and \(2.1\,\sigma\) between the \(\tau\) and \(e^{+}e^{-}\) evaluations applying the \(G_{\rm EM}(s)\) correction at \({\cal O}(p^{4})\) in R\(\chi\)T to the \(\tau\) data for the \(SD\), \(int\) and \(LD\) contributions, respectively. On the other hand, when we include the corrections at \({\cal O}(p^{6})\), the difference between these two evaluations decreases to \(3.6\,\sigma\), \(2.6\,\sigma\) and \(0.9\,\sigma\) for the \(SD\), \(int\) and \(LD\) contributions, respectively. These results are depicted in Figs. 1 and 2, where the blue band corresponds to the experimental average using \(\tau\) data. Fig. 3 shows a zoomed comparison between \(\tau\) (after IB corrections at \({\cal O}(p^{6})\)) and \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) spectral function using the ISR measurements from BABAR [78] and KLOE [76] (left-hand panel) and the energy-scan measurements from CMD-3 [67] (right-hand panel). Colored bands correspond to the weighted average of the uncertainties coming from both sets of data in each figure. 
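For reference, the two ingredients used throughout this section - the dispersive kernel of Eq. (5) and the Euclidean-time window functions of Eqs. (6)-(7) - are straightforward to implement. This is a minimal sketch of those formulas only; the translation of the windows into the center-of-mass-energy weights \(\hat{\Theta}(s)\) follows Eq. (12) of Ref. [66] and is not reproduced here.

```python
import numpy as np

M_MU = 0.1056583745  # muon mass [GeV]

def kernel_K(s):
    """Dispersive kernel K(s) of Eq. (5); s in GeV^2 with s > 4*m_mu^2."""
    beta = np.sqrt(1.0 - 4.0 * M_MU**2 / s)
    x = (1.0 - beta) / (1.0 + beta)
    return (0.5 * x**2 * (2.0 - x**2)
            + (1.0 + x**2) * (1.0 + x) ** 2 / x**2
            * (np.log(1.0 + x) - x + 0.5 * x**2)
            + (1.0 + x) / (1.0 - x) * x**2 * np.log(x))

def theta(t, t_prime, delta):
    """Smooth step Theta(t, t', Delta) of Eq. (6); arguments in fm."""
    return 0.5 * (1.0 + np.tanh((t - t_prime) / delta))

def window_weights(t, t0=0.4, t1=1.0, delta=0.15):
    """Short-distance, intermediate, and long-distance windows, Eqs. (6)-(7)."""
    sd = 1.0 - theta(t, t0, delta)
    win = theta(t, t0, delta) - theta(t, t1, delta)
    ld = theta(t, t1, delta)
    return sd, win, ld

# The three windows partition unity at every Euclidean time:
t = np.linspace(0.0, 3.0, 301)
assert np.allclose(np.sum(window_weights(t), axis=0), 1.0)
# The kernel is concentrated at low energies, e.g. K(s ~ m_rho^2) ~ 5.5e-3:
print(kernel_K(0.6))
```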
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{\(a_{\mu}^{\rm HVPLO}[\pi\pi,\tau]\)} \\ \hline \multicolumn{5}{|c|}{SD} \\ \hline Experiment & \(s\leq 1\,\mbox{GeV}^{2}\) & \(s\leq 2\,\mbox{GeV}^{2}\) & \(s\leq 3\,\mbox{GeV}^{2}\) & \(s\leq 3.125\,\mbox{GeV}^{2}\) \\ \hline ALEPH & 14.29(4)(8)(6) & 15.51(2)(9)(6) & 15.57(3)(9)(6) & 15.57(3)(9)(6) \\ Belle & 14.31(4)(22)(6) & 15.39(5)(24)(6) & 15.44(6)(24)(6) & 15.46(12)(24)(6) \\ CLEO & 14.33(6)(25)(6) & 15.46(6)(27)(6) & 15.50(6)(27)(6) & 15.51(6)(27)(6) \\ OPAL & 14.23(7)(19)(6) & 15.51(3)(21)(6) & 15.56(3)(21)(6) & 15.56(3)(21)(6) \\ Mean & 14.29(7)(6) & 15.49(8)(6) & 15.55(8)(6) & 15.55(8)(6) \\ \hline \multicolumn{5}{|c|}{Intermediate} \\ \hline Experiment & \(s\leq 1\,\mbox{GeV}^{2}\) & \(s\leq 2\,\mbox{GeV}^{2}\) & \(s\leq 3\,\mbox{GeV}^{2}\) & \(s\leq 3.125\,\mbox{GeV}^{2}\) \\ \hline ALEPH & 143.18(61)(79)(\({}^{59}_{61}\)) & 149.49(50)(83)(\({}^{59}_{61}\)) & 149.65(48)(83)(\({}^{59}_{61}\)) & 149.65(48)(83)(\({}^{59}_{61}\)) \\ Belle & 143.70(45)(2.24)(\({}^{59}_{61}\)) & 149.30(48)(2.33)(\({}^{60}_{62}\)) & 149.44(49)(2.33)(\({}^{60}_{62}\)) & 149.47(54)(2.33)(\({}^{60}_{62}\)) \\ CLEO & 143.86(60)(2.52)(\({}^{59}_{61}\)) & 149.70(61)(2.62)(\({}^{60}_{62}\)) & 149.81(61)(2.62)(\({}^{60}_{62}\)) & 149.84(61)(2.62)(\({}^{60}_{62}\)) \\ OPAL & 143.43(1.17)(1.92)(\({}^{58}_{60}\)) & 150.03(97)(2.01)(\({}^{58}_{61}\)) & 150.16(97)(2.01)(\({}^{58}_{61}\)) & 150.16(97)(2.01)(\({}^{58}_{61}\)) \\ Mean & 143.34(81)(\({}^{59}_{61}\)) & 149.56(80)(\({}^{59}_{62}\)) & 149.70(79)(\({}^{59}_{62}\)) & 149.71(79)(\({}^{59}_{62}\)) \\ \hline \multicolumn{5}{|c|}{LD} \\ \hline Experiment & \(s\leq 1\,\mbox{GeV}^{2}\) & \(s\leq 2\,\mbox{GeV}^{2}\) & \(s\leq 3\,\mbox{GeV}^{2}\) & \(s\leq 3.125\,\mbox{GeV}^{2}\) \\ \hline ALEPH & 348.12(4.12)(1.93)(\({}^{1.19}_{1.39}\)) & 351.89(4.08)(1.95)(\({}^{1.19}_{1.39}\)) & 351.93(4.08)(1.95)(\({}^{1.19}_{1.39}\)) \\ Belle & 351.65(1.36)(5.49)(\({}^{1.19}_{1.40}\)) & 355.01(1.37)(5.54)(\({}^{1.20}_{1.42}\)) & 355.04(1.37)(5.55)(\({}^{1.19}_{1.42}\)) \\ CLEO & 350.59(2.88)(6.13)(\({}^{1.19}_{1.39}\)) & 354.11(2.88)(6.20)(\({}^{1.19}_{1.39}\)) & 354.13(2.88)(6.20)(\({}^{1.19}_{1.40}\)) & 354.14(2.88)(6.20)(\({}^{1.19}_{1.39}\)) \\ OPAL & 362.04(9.33)(4.85)(\({}^{1.18}_{1.41}\)) & 365.96(9.26)(4.90)(\({}^{1.18}_{1.45}\)) & 365.98(9.26)(4.90)(\({}^{1.18}_{1.45}\)) \\ Mean & 350.75(3.01)(\({}^{1.19}_{1.41}\)) & 354.36(3.01)(\({}^{1.19}_{1.41}\)) & 354.39(3.01)(\({}^{1.19}_{1.42}\)) & 354.39(3.01)(\({}^{1.19}_{1.41}\)) \\ \hline \end{tabular} \end{table} Table 2: IB-corrected \(a_{\mu}^{\rm HVPLO}[\pi\pi,\tau]\) in units of \(10^{-10}\) at \({\cal O}(p^{4})\) in \({\rm R}\chi T\) using the experimental measurements from the ALEPH [128], Belle [129], CLEO [130] and OPAL [131] Colls. The first error is related to the systematic uncertainties on the mass spectrum and also includes contributions from the \(\tau\)-mass and \(V_{ud}\) uncertainties. The second error arises from \(B_{\pi\pi^{0}}\) and \(B_{e}\), and the third error comes from the isospin-breaking corrections. The uncertainties in the mean value correspond to the experiment and to the IB corrections, respectively. Figure 1: Windows quantities for HVP at \({\cal O}(p^{4})\) for \(2\pi\) below \(1.0\,\mbox{GeV}\) using the parameters in Eq. (7). The blue region corresponds to the experimental average from \(\tau\) data. The \(e^{+}e^{-}\) number was taken from Ref. [66]. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{\(a_{\mu}^{\rm HVP,LO}[\pi\pi,\tau]\)} \\ \hline \multicolumn{5}{|c|}{SD} \\ \hline Experiment & \(s\leq 1\,{\rm GeV}^{2}\) & \(s\leq 2\,{\rm GeV}\) & \(s\leq 3\,{\rm GeV}\) & \(s\leq 3.125\,{\rm GeV}^{2}\) \\ \hline ALEPH & \(14.22(4)(8)(8)\) & \(15.42(2)(9)(^{1}_{3})\) & \(15.47(2)(9)(^{1}_{3})\) & \(15.47(3)(9)(^{1}_{3})\) \\ Belle & \(14.24(4)(22)(8)\) & \(15.30(5)(24)(^{10}_{8})\) & \(15.35(5)(24)(^{10}_{8})\) & \(15.36(12)(24)(^{10}_{8})\) \\ CLEO & \(14.27(6)(25)(8)\) & \(15.37(6)(27)(^{10}_{8})\) & \(15.41(6)(27)(^{10}_{9})\) & \(15.42(6)(27)(^{10}_{9})\) \\ OPAL & \(14.16(7)(19)(8)\) & \(15.42(3)(21)(^{11}_{9})\) & \(15.46(3)(21)(^{11}_{9})\) & \(15.46(3)(21)(^{11}_{9})\) \\ Mean & \(14.22(7)(8)\) & \(15.40(8)(^{19}_{9})\) & \(15.45(8)(^{1}_{9})\) & \(15.46(8)(^{1}_{9})\) \\ \hline \multicolumn{5}{|c|}{Intermediate} \\ \hline Experiment & \(s\leq 1\,{\rm GeV}^{2}\) & \(s\leq 2\,{\rm GeV}^{2}\) & \(s\leq 3\,{\rm GeV}^{2}\) & \(s\leq 3.125\,{\rm GeV}^{2}\) \\ \hline ALEPH & \(142.49(60)(79)(^{2}_{31})\) & \(148.67(50)(83)(^{1.02}_{87})\) & \(148.82(48)(83)(^{1.02}_{87})\) & \(148.82(48)(83)(^{1.02}_{87})\) \\ Belle & \(143.00(45)(2.23)(^{92}_{81})\) & \(148.49(48)(2.32)(^{1.02}_{88})\) & \(148.62(48)(2.32)(^{1.03}_{88})\) \\ CLEO & \(143.16(60)(2.50)(^{92}_{81})\) & \(148.88(60)(2.60)(^{1.01}_{87})\) & \(148.99(60)(2.61)(^{1.01}_{87})\) & \(149.02(60)(2.61)(^{1.01}_{87})\) \\ OPAL & \(142.69(1.16)(1.91)(^{96}_{83})\) & \(149.15(95)(2.00)(^{1.06}_{90})\) & \(149.27(95)(2.00)(^{1.07}_{90})\) & \(149.27(95)(2.00)(^{1.07}_{90})\) \\ Mean & \(142.64(80)(^{92}_{82})\) & \(148.73(79)(^{1.03}_{0.88})\) & \(148.87(79)(^{1.03}_{88})\) & \(148.88(79)(^{1.03}_{0.88})\) \\ \hline \multicolumn{5}{|c|}{LD} \\ \hline Experiment & \(s\leq 1\,{\rm GeV}^{2}\) & \(s\leq 2\,{\rm GeV}^{2}\) & \(s\leq 3\,{\rm GeV}^{2}\) & \(s\leq 3.125\,{\rm GeV}^{2}\) \\ \hline ALEPH & \(344.17(3.99(1.91)(^{3.8}_{2.74})\) & \(347.87(3.95)(1.93)(^{3.99}_{2.79})\) & \(347.90(3.95)(1.93)(^{3.09}_{2.79})\) & \(347.90(3.95)(1.93)(^{3.09}_{2.79})\) \\ Belle & \(347.62(1.34)(5.43)(^{3.7}_{2.82})\) & \(350.92(1.35)(5.48)(^{3.78}_{2.86})\) & \(350.94(1.35)(5.48)(^{3.78}_{2.86})\) & \(350.94(1.36)(5.48)(^{3.78}_{2.86})\) \\ CLEO & \(346.63(2.77)(6.06)(^{3.55}_{2.76})\) & \(350.08(2.77)(6.12)(^{3.70}_{2.81})\) & \(350.10(2.77)(6.12)(^{3.70}_{2.81})\) & \(350.11(2.77)(6.13)(^{3.70}_{2.81})\) \\ OPAL & \(357.54(8.99)(4.79)(^{1.49}_{3.41})\) & \(361.38(8.92)(4.84)(^{4.25}_{3.18})\) & \(361.41(8.92)(4.84)(^{4.25}_{3.18})\) \\ Mean & \(346.73(2.95)(^{3.80}_{2.87})\) & \(350.27(2.95)(^{3.86}_{2.91})\) & \(350.29(2.95)(^{3.86}_{2.91})\) & \(350.30(2.95)(^{3.86}_{2.91})\) \\ \hline \end{tabular} \end{table} Table 3: Same as Table 2, but the \({\cal O}(p^{6})\) contributions to \(G_{\rm EM}(s)\) in Ref. [91] have been applied to the \(\tau\) data. Figure 3: Comparison between the \(\tau\) (after IB corrections) and \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) spectral function using the ISR measurements from BABAR [78] and KLOE [76] (left-hand) and the energy-scan measurements from CMD-3 [67] (right-hand). Figure 2: Analog to Fig. 1 but at \({\cal O}(p^{6})\). A direct comparison between \(a_{\mu}^{\rm HVP,\,LO}[\pi\pi,\tau]\) and the lattice results is not possible. For that endeavour, it is necessary to supplement the \(2\pi\) evaluation with the remaining contributions from all other channels accounting for the hadronic cross-section. 
To illustrate the impact of this contribution in \(a_{\mu}^{\rm HVP,\,LO}\), we follow two approaches. Firstly, using the values reported in Table 1 of Ref. [66] we subtract the contribution from the \(2\pi\) channel below \(1.0\,\)GeV (we represent this procedure with '\(<1\) GeV') and replace it by the corresponding mean value in Tables 2 and 3. This way, we get \[a_{\mu}^{SD}=69.0(5)\times 10^{-10},\quad a_{\mu}^{int}=234.4(1.2)\times 10^{-10},\quad a_{\mu}^{LD}=403.6(3.3)\times 10^{-10}, \tag{12}\] at \(\mathcal{O}(p^{4})\), and \[a_{\mu}^{SD}=68.9(5)\times 10^{-10},\quad a_{\mu}^{int}=233.7(1.4)\times 10^{-10},\quad a_{\mu}^{LD}=399.5(^{4.9}_{4.2})\times 10^{-10}, \tag{13}\] at \(\mathcal{O}(p^{6})\). Secondly, we rescale the contributions from the \(2\pi\) channel using the overall evaluation of \(a_{\mu}^{\rm HVP,\,LO}[\pi\pi,e^{+}e^{-}]\) in Refs. [7, 8], then we subtract this value from the total contribution and replace it by our results. Thus, we get \[a_{\mu}^{SD}=70.0(5)\times 10^{-10},\quad a_{\mu}^{int}=237.8(1.2)\times 10^{-1 0},\quad a_{\mu}^{LD}=399.7(^{3.3}_{3.4})\times 10^{-10}, \tag{14}\] at \(\mathcal{O}(p^{4})\), and \[a_{\mu}^{SD}=69.9(5)\times 10^{-10},\quad a_{\mu}^{int}=237.0(^{1.5}_{1.4}) \times 10^{-10},\quad a_{\mu}^{LD}=395.6(^{4.9}_{4.2})\times 10^{-10}, \tag{15}\] at \(\mathcal{O}(p^{6})\). All these results are reasonably consistent with each other. We summarize these results in Table 4 along with the lattice results [10, 59, 63, 64, 65, 132] and other \(e^{+}e^{-}\) data-driven evaluations [36, 66]. These numbers are depicted in Fig. 4 for the intermediate window, where the blue band represents the average of the lattice results excluding those from the RBC/UKQCD 2018 [10] and ETMC 2021 [132] collaborations. The contributions of the intermediate window using \(\tau\) data are closer to the results from lattice QCD than to the \(e^{+}e^{-}\) values. Therefore, the \(\sim 4.3\sigma\) discrepancy between the \(e^{+}e^{-}\) data-driven and lattice evaluations is reduced to \(\sim 1.5\sigma\) when \(\tau\) data is used for the \(2\pi\) channel. On the other hand, there is only one lattice result for the short-distance window [65], which seems to be in agreement with both data-driven HVP evaluations. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{\(a_{\mu}^{\rm HVP,LO}\)} \\ \hline & SD & int & LD & Total \\ \hline \(\tau\)-data \(\mathcal{O}(p^{4})\leq 1\) GeV & 69.0(5) & 234.4(1.2) & 403.6(3.3) & 707.0(5.0) \\ \(\tau\)-data \(\mathcal{O}(p^{6})\leq 1\) GeV & 68.9(5) & 233.7(1.4) & 399.5(\({}^{4.9}_{4.2}\)) & 702.2(\({}^{6.8}_{6.1}\)) \\ \hline \(\tau\)-data \(\mathcal{O}(p^{4})\) & 70.0(5) & 237.8(1.2) & 399.7(\({}^{3.3}_{3.4}\)) & 707.5(\({}^{3.7}_{3.7}\)) \\ \(\tau\)-data \(\mathcal{O}(p^{6})\) & 69.9(5) & 237.0(\({}^{1.5}_{1.4}\)) & 395.6(\({}^{4.9}_{4.2}\)) & 702.4(\({}^{6.9}_{6.1}\)) \\ \hline RBC/UKQCD 2018 [10] & – & 231.9(1.5) & – & 715.4(18.7) \\ ETMC 2021 [132] & – & 231.7(2.8) & – & – \\ BMW 2020 [63] & – & 236.7(1.4) & – & 707.5(5.5) \\ Mainz/CLS 2022 [64] & – & 237.30(1.46) & – & – \\ ETMC 2022 [65] & 69.33(29) & 235.0(1.1) & – & – \\ RBC/UKQCD 2023 [59] & – & 235.56(82) & – & – \\ \hline WP [36] & – & – & – & 693.1(4.0) \\ BMW 2020/KNT [4, 63] & – & 229.7(1.3) & – & – \\ Colangelo et al. 2022 [66] & 68.4(5) & 229.4(1.4) & 395.1(2.4) & 693.0(3.9) \\ \hline \end{tabular} \end{table} Table 4: Window quantities for \(a_{\mu}^{\rm HVP,LO}\) in units of \(10^{-10}\).
The first two rows correspond to the \(\tau\) evaluation in the first approach, while rows 3 and 4 are the evaluations in the second one. Rows 5-10 are the lattice results [10, 59, 63, 64, 65, 132]. The last three rows are the evaluations obtained using \(e^{+}e^{-}\) data. See Fig. 11 in Ref. [59] for more details.
## 3 Conclusions
While the BNL and FNAL measurements of \(a_{\mu}\) agree nicely within errors, the situation is not that clear for the SM prediction. On the one hand, data-driven methods based on \(e^{+}e^{-}\to\)hadrons data have to deal with the tensions between experiments (particularly between BaBar and KLOE, and now with CMD-3), which makes computing the uncertainty in Eq. (2) a non-trivial task, as will be its update. On the other hand, there is still only one lattice QCD evaluation (BMWc Coll.) of \(a_{\mu}^{\rm HVP}\) with competitive uncertainties, which lies between \(a_{\mu}^{\rm Exp}\) and its SM data-driven prediction. However, the most recent Mainz/CLS, ETMC, and RBC/UKQCD results have similar errors to BMWc in the intermediate window, where all of them agree remarkably well. It has long been known that alternative data-driven evaluations are possible, this time utilizing semileptonic \(\tau\)-decay data (and isospin-breaking corrections, with an attached model-dependent uncertainty), as we have done here. In this context, we have applied the study from Ref. [66], which computed window quantities in Euclidean time for data-driven evaluations of \(a_{\mu}^{\rm HVP}\) using \(e^{+}e^{-}\to\) hadrons data, to the semileptonic \(\tau\) decays case (focusing on the dominant two-pion contribution). Our main results are collected in Table 4 and show that \(\tau\)-based results are compatible with the lattice evaluations in the intermediate window, with the \(e^{+}e^{-}\)-based values in tension with both of them. This difference is the main cause for the larger discrepancy of the latter with \(a_{\mu}^{\rm Exp}\) and should be further scrutinized.
## Acknowledgements
We have benefited from discussions with Rafel Escribano on this topic. The work of P. M. has been supported by the European Union's Horizon 2020 Research and Innovation Programme under grant 824093 (H2020-INFRAIA-2018-1), the Ministerio de Ciencia e Innovacion under grant PID2020-112965GB-I00, and by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya under grant 2021 SGR 00649. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. J. A. M. is also supported by MICINN with funding from European Union NextGenerationEU (PRTR-C17.I1) and by Generalitat de Catalunya. P. R. thanks partial funding from Conacyt and Catedras Marcos Moshinsky 2020 (Fundacion Marcos Moshinsky).
2309.16662
Geodesic Regression Characterizes 3D Shape Changes in the Female Brain During Menstruation
Women are at higher risk of Alzheimer's and other neurological diseases after menopause, and yet research connecting female brain health to sex hormone fluctuations is limited. We seek to investigate this connection by developing tools that quantify 3D shape changes that occur in the brain during sex hormone fluctuations. Geodesic regression on the space of 3D discrete surfaces offers a principled way to characterize the evolution of a brain's shape. However, in its current form, this approach is too computationally expensive for practical use. In this paper, we propose approximation schemes that accelerate geodesic regression on shape spaces of 3D discrete surfaces. We also provide rules of thumb for when each approximation can be used. We test our approach on synthetic data to quantify the speed-accuracy trade-off of these approximations and show that practitioners can expect very significant speed-up while only sacrificing little accuracy. Finally, we apply the method to real brain shape data and produce the first characterization of how the female hippocampus changes shape during the menstrual cycle as a function of progesterone: a characterization made (practically) possible by our approximation schemes. Our work paves the way for comprehensive, practical shape analyses in the fields of bio-medicine and computer vision. Our implementation is publicly available on GitHub: https://github.com/bioshape-lab/my28brains.
Adele Myers, Caitlin Taylor, Emily Jacobs, Nina Miolane
2023-09-28T17:58:19Z
http://arxiv.org/abs/2309.16662v1
# Geodesic Regression Characterizes 3D Shape Changes in the Female Brain During Menstruation ###### Abstract Women are at higher risk of Alzheimer's and other neurological diseases after menopause, and yet research connecting female brain health to sex hormone fluctuations is limited. We seek to investigate this connection by developing tools that quantify 3D shape changes that occur in the brain during sex hormone fluctuations. Geodesic regression on the space of 3D discrete surfaces offers a principled way to characterize the evolution of a brain's shape. However, in its current form, this approach is too computationally expensive for practical use. In this paper, we propose approximation schemes that accelerate geodesic regression on shape spaces of 3D discrete surfaces. We also provide rules of thumb for when each approximation can be used. We test our approach on synthetic data to quantify the speed-accuracy trade-off of these approximations and show that practitioners can expect very significant speed-up while only sacrificing little accuracy. Finally, we apply the method to real brain shape data and produce the first characterization of how the female hippocampus changes shape during the menstrual cycle as a function of progesterone: a characterization made (practically) possible by our approximation schemes. Our work paves the way for comprehensive, practical shape analyses in the fields of bio-medicine and computer vision. Our implementation is publicly available on GitHub: https://github.com/bioshape-lab/my28brains.
## 1 Introduction
Women are more likely to experience Alzheimer's, cognitive decline, and navigational issues after menopause [2, 20]. Yet, topics relevant to female brain health such as menstruation, pregnancy, menopause, and their associated female sex hormone fluctuations only account for 0.3% of the neuroimaging literature between 1995 and 2022 [17]. The hippocampal formation (a brain structure) is an excellent diagnostic tool for investigating the connection between female brain health and sex hormones, as it is the first cortical region to harbor neuropathology in the progression to Alzheimer's (causing characteristic shape changes visible on magnetic resonance images (MRI)) [5, 13], and it is also very sensitive to sex hormone fluctuations [7]. Sex hormone fluctuations significantly influence brain anatomy and function in healthy subjects [18, 16, 15]. Enhancing our understanding of how hormonal fluctuations affect the healthy brain is crucial to explaining why females are more at risk for neurological conditions after menopause [4]. We seek to close this knowledge gap by starting with one question: How does the hippocampal formation respond to monthly fluctuations in ovarian hormones during the menstrual cycle? Recent research has shown that certain substructures of the hippocampal formation change their volume over the course of the menstrual cycle in response to progesterone, but no significant volumetric change was found on the whole-formation level [18]. Our 3D visualizations (Fig. 1) show that the hippocampal formation does change its _shape_ on a whole-formation level, but no team has quantified how this effect depends on progesterone levels. This is not surprising because quantifying 3D shape changes is technically challenging, and is in fact an active research area in itself in mathematics and computer vision.
Figure 1: During the menstrual cycle, the ovaries cyclically release female sex hormones such as progesterone into the blood.
We propose practical shape analysis tools that quantify 3D shape changes in the brain that occur during progesterone fluctuations, with a focus on the hippocampus: a structure involved in memory and navigation. Visualization created from data by [18, 15].
For example, quantifying surface shape changes as a function of a continuous variable like progesterone can theoretically be performed through geodesic regression on Riemannian manifolds [6, 19]: an extension of linear regression dedicated to shape spaces. However, in its current form, geodesic regression on surface spaces is too slow for practical use. Here, we bridge the gap between computer vision and clinical neuroimaging by presenting a new practical method, a hybrid between geodesic and linear regression, that allows us to quantify how the _shape_ of the hippocampal formation changes in response to progesterone.
**Contributions.** We offer several contributions that span the fields of machine learning, differential geometry and clinical neuroimaging. First, in machine learning, we introduce our hybrid geodesic-linear regression method: a faster geodesic regression model which uses linear residuals instead of geodesic residuals in its loss function and also uses a linear regression result to initialize its geodesic regression optimization. Then, we perform extensive synthetic experiments to offer rules of thumb for deciding between linear, geodesic, and geodesic-linear regression. In differential geometry, these results give novel intuition about the curvature of the nonlinear data space of surface shapes. In clinical neuroimaging, these rules of thumb provide practitioners with guidelines to decide on the speed-accuracy trade-off between the regression types, revealing whether a surface mesh sequence can be adequately characterized by the considerably faster linear or geodesic-linear regressions without sacrificing accuracy. Finally, we apply our paradigm to real brain magnetic resonance images (MRIs) of a female brain through the menstrual cycle. We characterize, for the first time, shape changes in the female hippocampal formation as a function of progesterone.
## 2 Related Works
Consider a series of hippocampal shapes, with the surface of each shape described as a mesh extracted via segmentation from a full-brain MRI (Fig. 1). The _shapes_ of these discrete surfaces (meshes) can be described either _extrinsically_ or _intrinsically_. The extrinsic approach with the Large Deformation Diffeomorphic Metric Mapping framework [3] represents a surface shape in 3D space by the amount of deformation that one needs to apply on the whole 3D ambient grid containing the surface in order to deform a reference shape into the surface shape of interest. By contrast, the intrinsic approach only deforms the surface itself [1], and hence provides us with two advantages: (i) intuitive deformations that can be discussed with neuroscientists, (ii) higher computational efficiency with up to 10x acceleration [1]. We focus here on the intrinsic approach.
**Analysis of Parameterized Surfaces.** In the intrinsic approach, surfaces can be either _parameterized_ or _unparameterized_. Each mesh in a dataset of parameterized surfaces is constrained to have a consistent structure: the vertices of meshes in the dataset have one-to-one correspondences. By contrast, datasets of unparameterized surfaces relax this constraint such that statistical analyses can be performed independently of the number and indexation of the mesh vertices.
While theoretically grounded, the computational complexity of this approach makes it impractical for geodesic regression. As an example, statistical analyses within this framework are often limited to population average computations and machine learning algorithms that rely only on distances, such as multidimensional scaling or k-means clustering [1, 9, 12]. Therefore, we consider the scenario where each mesh in the dataset is first parameterized to match a reference parameterization, and then statistical analysis such as regression is performed. In this scenario, the first natural choice is to consider linear regression on the set of 3D coordinates of the vertices. Linear regression has the advantage of being conceptually simple while enjoying analytical solutions: it will be our first baseline. However, it has the drawback that it does not enjoy parameterization-invariance. In other words, the distance (or dissimilarity) between two surfaces may change if we choose another reference parameterization, which may change the regression results. In the context of clinical applications where the ultimate goal is human health, we cannot afford such inconsistency. The alternative is to consider parameterization-invariant distances between parameterized surfaces. In differential geometry, this can be achieved by equipping the space of parameterized surfaces with a Riemannian metric that is invariant under reparameterizations [10, 9]. This process, however, turns the data space into a nonlinear manifold, where linear regression needs to be generalized to geodesic regression [6, 19]. Geodesic regression does not enjoy a closed-form estimator and is typically computationally expensive. It will be our second baseline. Computational Challenges of Geodesic Regression: Geodesic regression [6, 19] for parameterization-invariant Riemannian metrics presents unique computational challenges. In the Riemannian framework, calculating a single geodesic requires numerically solving an ordinary differential equation, which is computationally expensive. Geodesic regression solves an optimization problem by minimizing a mean square error (MSE) loss function. The MSE requires the computation of \(n+1\) geodesics at each iteration: 1 geodesic representing the generative model, and \(n\) geodesics required to compute the residuals in the MSE, where \(n\) is the number of surfaces (meshes) in the dataset. We observe that works developing the Riemannian framework limit the number of geodesic computations required for their analysis, and do not perform any form of regression. Kilian et al. [11] focus on geodesic interpolation or extrapolation of parameterized surfaces and do not study regression. Kurtek et al. [12] limit their experimental analysis to computing geodesics between pairs of unparameterized surfaces and performing clustering. Jermyn et al. [10] compute geodesics between pairs of parameterized surfaces and provide a classification experiment. Hartman et al. [9] estimate population averages and perform dimension reduction with multidimensional scaling (MDS) and tangent PCA for parameterized and unparameterized surfaces. Bauer et al. [1] compute geodesics and the population averages of unparameterized surfaces, together with multi-dimensional scaling and k-means clustering. We suspect that the authors did not perform geodesic regression in these works because of its computational cost, which we investigate here.
## 3 Background This section presents the mathematical background necessary to formulate our approximation schemes for geodesic regression on the shape space of (hippocampal) surfaces. We refer to [9, 8] for additional details. A. Riemannian Metrics and Geodesics: We first introduce concepts in differential geometry necessary for geodesic regression. A _Riemannian metric_ on a smooth manifold \(\mathcal{N}\) is a family \(\left(G_{p}\right)_{p\in\mathcal{N}}\) of inner products on each tangent space \(T_{p}\mathcal{N}\), such that \(G_{p}\) depends smoothly on the point \(p\in\mathcal{N}\). Any Riemannian metric \(G\) yields a notion of distance between points \(q_{0},q_{1}\) on \(\mathcal{N}\). Specifically, if \(\gamma:[0,1]\to\mathcal{N}\) is a smooth trajectory on \(\mathcal{N}\) with velocity vector at \(t\in[0,1]\) denoted as \(\dot{\gamma}_{t}\in T_{\gamma(t)}\mathcal{N}\), its length is defined as \(L_{\gamma}=\int_{0}^{1}\sqrt{G_{\gamma_{t}}(\dot{\gamma}_{t},\dot{\gamma}_{t})}\,dt\) and the distance between any two points \(q_{0},q_{1}\in\mathcal{N}\) is given by \(d(q_{0},q_{1})=\inf_{\gamma:\gamma(0)=q_{0},\gamma(1)=q_{1}}L_{\gamma}\). A _geodesic_ between two points \(q_{0},q_{1}\) that are "close" in \(\mathcal{N}\) is defined as a trajectory \(\gamma\) that locally realizes the shortest distance between \(q_{0}\) and \(q_{1}\). Intuitively, a geodesic is the generalization to manifolds of the concept of a straight line in vector spaces. While some manifolds enjoy analytical expressions for their geodesics, this is not the case for the manifold of (hippocampal) surface shapes that we consider here. Thus, geodesics need to be computed numerically. To this aim, geodesics are expressed as the solutions of the _geodesic equation_, an ordinary differential equation (ODE) which can be written in local coordinates as: \[\ddot{\gamma}^{k}(t)+\Gamma^{k}_{ij}\dot{\gamma}^{i}(t)\dot{\gamma}^{j}(t)=0, \tag{1}\] for all times \(t\in[0,1]\), where \(\Gamma^{k}_{ij}\) are the Christoffel symbols associated with the Riemannian metric. Solving this ODE provides numerical solutions for geodesics. To perform geodesic regression, we will also need two additional operations, called Exp and Log, which we define here. The map \((q,v)\mapsto\gamma_{q,v}(1)\) defined for \((q,v)\in\mathcal{N}\times T_{q}\mathcal{N}\) is called the _exponential map_ (Exp) and computes the point \(\gamma_{q,v}(1)\) reached after following, for one unit of time, the geodesic with initial point \(q\in\mathcal{N}\) and initial velocity \(v\in T_{q}\mathcal{N}\). The inverse of the Exp map on its injectivity domain is called the _logarithm map_ (Log). B. Surfaces and Their Parameterizations: A _continuous surface_ can be described by a function \(q:M\to\mathbb{R}^{3}\), where \(M\) is a two-dimensional space of parameters \((u,v)\in M\) that parameterize the 3D points \(q(u,v)\in\mathbb{R}^{3}\) on the surface. Intuitively, the function \(q\) deforms the space of parameters \(M\) to give the surface its distinct shape, e.g., the ellipsoid shown in the top row of Fig. 2.

Figure 2: A surface is represented by a function \(q:M\to\mathbb{R}^{3}\) that maps parameters \((u,v)\in M\) to points in 3D space \(q(u,v)\in\mathbb{R}^{3}\) (top row). Its parameterization can be changed by applying a diffeomorphism \(\phi\) to the domain \(M\) before mapping to \(\mathbb{R}^{3}\) (bottom row).
Mathematically, \(q\) is required to be an oriented smooth mapping in \(C^{\infty}(M,\mathbb{R}^{3})\) that is also regular, in the sense that its differential \(dq\) is injective everywhere on \(M\). The _parameterization of a surface_ refers to the placement of points on the surface. If we define one surface as \(q:M\rightarrow\mathbb{R}^{3}\), then we can describe the same surface with a different parameterization by \(q\circ\phi:M\rightarrow\mathbb{R}^{3}\), where \(\phi\) is an orientation-preserving diffeomorphism of \(M\). Intuitively, \(\phi\) smoothly deforms the placement of parameters on the domain \(M\), which in turn smoothly changes the placement of points in the co-domain, as shown with rainbow colors in the bottom row of Fig. 2. The change of parameterization \(\phi\) does not change the shape of the surface, which is an ellipsoid in both rows of Fig. 2. C. Space of Surfaces: The space of surfaces is denoted \(\mathcal{I}\subset C^{\infty}(M,\mathbb{R}^{3})\). The space \(\mathcal{I}\) is an infinite-dimensional manifold immersed in the infinite-dimensional vector space \(C^{\infty}(M,\mathbb{R}^{3})\). The Riemannian metric we choose to equip the manifold \(\mathcal{I}\) with defines the distance between its points \(q_{0},q_{1}\in\mathcal{I}\) and thus the notion of dissimilarity between the two surfaces \(q_{0},q_{1}\). We consider the second-order Sobolev metric [9]: \[\begin{split} G_{q}(h,k)&=\int_{M}\left(a_{0}\langle h,k\rangle+a_{1}g_{q}^{-1}\left(dh_{m},dk_{m}\right)\right.\\ &+b_{1}g_{q}^{-1}\left(dh_{+},dk_{+}\right)+c_{1}g_{q}^{-1}\left(dh_{\perp},dk_{\perp}\right)\\ &\left.+d_{1}g_{q}^{-1}\left(dh_{0},dk_{0}\right)+a_{2}\left\langle\Delta_{q}h,\Delta_{q}k\right\rangle\right)\mathrm{vol}_{q},\end{split} \tag{2}\] where \(h,k\) are tangent vectors at a point \(q\in\mathcal{I}\); \(g_{q}^{-1}\) is the pullback metric from \(\mathbb{R}^{3}\) that defines distances on the surface \(q\) itself; \(\Delta_{q}\) is the Laplacian induced by \(q\); \(dh_{m},dh_{+},dh_{\perp},dh_{0}\) are orthogonal vector-valued one-forms and \(\mathrm{vol}_{q}\) is the surface area measure of \(q\). The scalars \(a_{0},a_{1},a_{2},b_{1},c_{1},d_{1}\) are weighting parameters that define the distance between two surfaces based on how they are sheared, scaled, bent, or parameterized with respect to each other. The choice of the second-order Sobolev metric is motivated by the following facts. First, zero-order and first-order Sobolev metrics yield less stable results in geodesic interpolation between complex 3D shapes [9]. Second, the weighting parameters \(a_{0},a_{1},a_{2},b_{1},c_{1},d_{1}\) defining the second-order Sobolev metric in Eq. (2) can be linked to observable physical deformations (shearing, bending, etc.), which helps with intuitively comparing physical objects. Last, the metric in Eq. (2) yields a distance that is rotation and reparameterization invariant [9]. In other words, if all the surfaces in the dataset are rotated and reparameterized in the same way, i.e., using the same rotation matrix and reparameterization diffeomorphism \(\phi\), then their pairwise distances are unchanged. We note that this property is practical only if we first assume that all the surfaces (are oriented and) have valid point-to-point correspondences. D. Space of Surface Shapes: In the space of surfaces \(\mathcal{I}\), if two surfaces have the same shape but different orientations or parameterizations, they correspond to different points.
By contrast, we introduce the space of surface shapes, where two surfaces with the same shape correspond to the same point, regardless of differences in their orientation or parameterization. Mathematically, the space of surface shapes is defined as the quotient space \(\mathcal{I}/(\text{Rot}(\mathbb{R}^{3})\times\text{Diff}(M))\); see [9] for details. For simplicity, we consider the case of parameterizations with the shape space \(\mathcal{S}=\mathcal{I}/\text{Diff}(M)\), while the case of orientations can be treated similarly. In the shape space \(\mathcal{S}\), the distance between two surface shapes \(q_{1}\) and \(q_{2}\) is given by: \[d^{\mathcal{S}}(q_{1},q_{2})=\inf_{\phi}d(q_{1},q_{2}\circ\phi)=d(q_{1},q_{2}^{\prime}), \tag{3}\] where \(\phi\) represents a choice of parameterization. In Eq. (3), the parameterization of \(q_{2}\) is varied until the second-order Sobolev distance \(d\) of Eq. (2) between \(q_{1}\) and \(q_{2}\) reaches an infimum, as shown in Fig. 3. This operation matches the parameterization of \(q_{2}\) to the parameterization of \(q_{1}\) so that any remaining discrepancy between them is due to difference in shape, rather than difference in parameterization. Ideally, we would perform our geodesic regression methods directly in the shape space \(\mathcal{S}\). However, the high computational cost of this approach leads us to instead compute in the surface space \(\mathcal{I}\) after choosing a reference parameterization that corresponds to the first hippocampal surface of our dataset.

Figure 3: Distances in surface space vs. shape space. \(q_{1}\) and \(q_{2}\) are two surfaces with different parameterization and different shape. The distance given by the second-order Sobolev metric of Eq. (2) measures both the parameterization and shape differences. The shape space distance of Eq. (3) only measures difference in shape.

## 4 Methods We seek to quantify the _anatomical changes_ in the hippocampal formation that emerge from progesterone variations during the menstrual cycle. To achieve this, we propose approximations to geodesic regression on the space of 3D brain shapes that make it computationally fast enough for practical use. We further propose rules of thumb for determining when each approximation can be used, as summarized in Fig. 4. ### Linear Regression Model: _Linear regression_ (LR) models the relationship between an independent variable \(X\in\mathbb{R}\) and the dependent variable \(Y\) taking values in \(\mathbb{R}^{D}\) as: \[Y=\alpha+X\beta+\epsilon, \tag{4}\] where \(\alpha\in\mathbb{R}^{D}\) is the intercept, \(\beta\in\mathbb{R}^{D}\) is the slope, and \(\epsilon\) represents the noise. Loss: Given data \((x_{i},y_{i})\in\mathbb{R}\times\mathbb{R}^{D}\), for \(i=1,\ldots,n\), we fit the linear regression model through least squares, i.e., we compute the estimates for the intercept and slope \(\hat{\alpha},\hat{\beta}\) as: \[(\hat{\alpha},\hat{\beta})=\arg\min_{(\alpha,\beta)}\frac{1}{2}\sum_{i=1}^{n}\left\|y_{i}-\hat{y}_{i}\right\|^{2}\text{ for }\hat{y}_{i}=\alpha+x_{i}\beta, \tag{5}\] which minimizes the summed squared magnitude of the (linear) _residuals_: \(y_{i}-\alpha-x_{i}\beta\), for \(i=1,...,n\).
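To make this baseline concrete, the following is a minimal sketch (ours, not the paper's implementation) of the least-squares fit applied to flattened mesh vertices; the function and variable names are illustrative assumptions, and the closed-form solution it uses is the one stated in the next paragraph.

```python
import numpy as np

def fit_linear_regression(x, meshes):
    """Least-squares fit of the model in Eqs. (4)-(5).

    x      : (n,) independent variable, e.g. progesterone levels.
    meshes : (n, N, 3) vertex coordinates of n corresponding meshes.
    Returns intercept alpha and slope beta, each of shape (N, 3).
    """
    n = len(x)
    y = meshes.reshape(n, -1)                 # flatten each mesh into R^{3N}
    x_bar, y_bar = x.mean(), y.mean(axis=0)
    # Closed-form normal equations: beta = cov(x, y) / var(x).
    beta = (x @ y / n - x_bar * y_bar) / x.var()
    alpha = y_bar - x_bar * beta
    return alpha.reshape(-1, 3), beta.reshape(-1, 3)

# The predicted mesh at a new hormone level x0 is alpha + x0 * beta.
```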
Learning: Importantly for computational purposes, this minimization has an analytical solution given by the normal equations \(\hat{\beta}=\frac{\frac{1}{n}\sum_{i}x_{i}y_{i}-\bar{x}\bar{y}}{\frac{1}{n}\sum_{i}x_{i}^{2}-\bar{x}^{2}}\) and \(\hat{\alpha}=\bar{y}-\bar{x}\hat{\beta}\), where \(\bar{x}\) and \(\bar{y}\) are the sample means of the \(x_{i}\) and \(y_{i}\), respectively. We will use linear regression as our first baseline, where \(X\) is the level of progesterone, and \(Y\) is the hippocampal surface discretized as a mesh, which takes values in \(\mathbb{R}^{N\times 3}\) where \(N\) is the number of mesh vertices. ### Geodesic Regression Model: _Geodesic regression_ (GR) [6, 19] models the relationship between an independent variable \(X\in\mathbb{R}\) and the dependent variable \(Y\), whose values lie on a manifold \(\mathcal{N}\), as: \[Y=\operatorname{Exp}(\operatorname{Exp}(p,Xv),\epsilon), \tag{6}\] where \(\epsilon\) is noise in the tangent space at \(\operatorname{Exp}(p,Xv)\), and \(\operatorname{Exp}\) is the operation defined in the previous section. Note that when the manifold of interest is \(\mathcal{N}=\mathbb{R}^{D}\), the exponential operator simplifies to addition: \(\operatorname{Exp}(p,v)=p+v\). Consequently, the geodesic regression generative model simplifies to the linear regression generative model of Eq. (4) with \(p=\alpha\) and \(v=\beta\). We also note that the exponential operation appears twice: to model the geodesic itself, and to model the noise \(\epsilon\). In what follows, we consider geodesic regression on the manifold \(\mathcal{N}=\mathcal{I}\) equipped with the second-order Sobolev metric of Eq. (2). Loss: Given data \((x_{i},y_{i})\in\mathbb{R}\times\mathcal{I}\), for \(i=1,\ldots,n\), we seek to learn estimates of the intercept and slope \((p,v)\in\mathcal{I}\times T_{p}\mathcal{I}\). In the manifold setting, the loss function associated with the geodesic given by \((p,v)\) is minimized as: \[(\hat{p},\hat{v}) =\arg\min_{(p,v)}\frac{1}{2}\sum_{i=1}^{n}d\left(y_{i},\hat{y}_{i}\right)^{2}, \tag{7}\] \[=\arg\min_{(p,v)}\frac{1}{2}\sum_{i=1}^{n}\left\|\text{Log}(\hat{y}_{i},y_{i})\right\|_{\hat{y}_{i}}^{2},\] (8) \[\text{for }\hat{y}_{i}=\operatorname{Exp}\left(p,x_{i}v\right). \tag{9}\] We compute estimates for the intercept and slope \((\hat{p},\hat{v})\) which minimize the summed squared magnitude of the (geodesic) _residuals_ \(\text{Log}(\operatorname{Exp}\left(p,x_{i}v\right),y_{i})\) for \(i=1,...,n\). The geodesic residuals differ from the linear residuals as they are calculated with exponentials and logarithms instead of additions and subtractions.

Figure 4: Overview: Approximation schemes for geodesic regression. \(\delta\)-test: if the residual magnitudes are small compared to the curvature of the manifold, we can use geodesic regression with linear residuals (GRLR). \(\Delta\)-test: if the distance covered by the data set is small compared to the curvature, we use linear regression (LR). Reducing geodesic regression (GR) to either GRLR or LR provides up to four orders of magnitude speed-up, while sacrificing little accuracy.

Learning: In contrast to linear regression, the least squares problem of Eq. (7) above does not have an analytical solution for general manifolds \(\mathcal{I}\). Instead, we need to compute the estimates of the intercept and slope with gradient descent, which is typically computationally expensive.
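To connect the loss of Eqs. (7)-(9) to code, here is a minimal sketch with geodesic residuals. It assumes a `metric` object exposing `exp`, `log`, and `squared_norm` in the style of the Geomstats library [14]; this interface is our illustrative assumption, not the paper's exact implementation.

```python
def geodesic_regression_loss(p, v, x, y, metric):
    """MSE of Eqs. (7)-(9) with geodesic residuals.

    p, v   : candidate intercept (a point) and slope (a tangent vector).
    x, y   : data; each y[i] is a point on the manifold.
    metric : Riemannian metric exposing exp/log/squared_norm
             (Geomstats-style conventions; an assumption of this sketch).
    """
    loss = 0.0
    for xi, yi in zip(x, y):
        y_hat = metric.exp(xi * v, base_point=p)      # 1 Exp per data point
        residual = metric.log(yi, base_point=y_hat)   # 1 Log per data point
        loss += 0.5 * metric.squared_norm(residual, base_point=y_hat)
    return loss
```

Note how each evaluation of the loss requires \(n\) Exp and \(n\) Log computations, which is precisely why the optimization is slow.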
Gradient descent comes in two flavors depending on the strategy used to compute the gradient, which can be either a Riemannian gradient, as originally proposed in [6], or an extrinsic gradient. The Riemannian gradient writes [6]: \[\nabla_{p}l =-\sum_{i=1}^{N}d_{p}\operatorname{Exp}\left(p,x_{i}v\right)^{\dagger}\epsilon_{i},\] \[\nabla_{v}l =-\sum_{i=1}^{N}x_{i}d_{v}\operatorname{Exp}\left(p,x_{i}v\right)^{\dagger}\epsilon_{i},\] where \(l\) is the loss function, \(\epsilon_{i}=\text{Log}\left(\operatorname{Exp}\left(p,x_{i}v\right),y_{i}\right)\) are the residuals, \(d_{v}\) and \(d_{p}\) are derivatives, and \(\dagger\) denotes the adjoint. In the general case, the expressions of these derivatives and their respective adjoint operators are not known, although they can be derived analytically for some manifolds as in [19]. However, to the best of our knowledge, no such formula exists for shape spaces of parameterized surfaces, so we use a numerical approach. ### Why is Geodesic Regression Slow? The geodesic regression optimization is slow due to the Exp and Log maps in Eq. (7). The computations of the exponential and logarithm maps do not enjoy an analytical expression for the manifold that we are interested in, and neither do their differentials. Consequently, we compute them only numerically, as implemented in Geomstats [14], as follows. For the _exponential map_, we consider the geodesic equation as a coupled system of first-order ODEs: \[\left\{\begin{array}{l}v(t)=\dot{\gamma}(t)\\ \dot{v}(t)=f(v(t),\gamma(t),t)\end{array}\right.\] where \(f\) is a smooth function given by Eq. (1) and the state variable is \((\gamma(t),\dot{\gamma}(t))\). Given initial conditions, we use a first-order forward Euler scheme to integrate this system. For a given step \(dt\) we compute: \[v(t+dt)=v(t)+\dot{v}(t)dt=v(t)+f(v(t),\gamma(t),t)dt. \tag{10}\] Introducing the parameter \(n_{\text{steps}}\): if we integrate this geodesic equation between \(t=0\) and \(t=1\) in \(n_{\text{steps}}\) steps, then we use \(dt=(n_{\text{steps}})^{-1}\). Consequently, the parameter \(n_{\text{steps}}\) controls the numerical precision of the computation of the exponential map. The computation of Exp is slow due to this numerical integration. For the _logarithm map_, we solve the optimization problem in \(v\): \[\min d^{2}\left(\operatorname{Exp}(p,v),q\right),\] which expresses the fact that Log is the inverse map of Exp. This minimization is solved by gradient descent (GD) until a convergence tolerance is reached. It uses scipy's minimization routine and computes the gradient of the exponential map with automatic differentiation. The computation of Log is slow due to this optimization process. ### Approximation Schemes with Rules of Thumb Curved spaces are locally linear. Thus, if a data set falls on a "small" portion of the shape space, addition and subtraction offer excellent approximations of exponentials and logarithms and avoid costly computations. To speed up geodesic regression, we propose two approximation schemes, shown in Fig. 4: (i) linear regression, and (ii) geodesic regression with linear residuals, where (ii) represents a novel approach for geometric machine learning, which we describe in the next section. We also propose rules of thumb to determine when each of these methods will yield sufficiently accurate approximations of geodesic regression.
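As an illustration of the Euler scheme of Eq. (10) above, the sketch below integrates the coupled first-order system; `acceleration` stands for the right-hand side \(f\) supplied by the metric's Christoffel symbols, which we leave abstract here, so this is a sketch under that assumption rather than a full implementation.

```python
def integrate_geodesic(gamma0, v0, acceleration, n_steps=20):
    """Forward Euler integration of the geodesic ODE (Eqs. (1) and (10)).

    gamma0, v0   : initial point and velocity as numpy arrays.
    acceleration : callable f(v, gamma, t) returning dv/dt; assumed to be
                   provided by the metric (not implemented in this sketch).
    Returns an approximation of Exp(gamma0, v0).
    """
    dt = 1.0 / n_steps
    gamma, v = gamma0.copy(), v0.copy()
    for step in range(n_steps):
        t = step * dt
        # Euler update of the state (gamma, v), cf. Eq. (10).
        gamma, v = gamma + v * dt, v + acceleration(v, gamma, t) * dt
    return gamma
```

Larger `n_steps` values trade speed for numerical precision, which is the trade-off discussed in the experiments.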
For (i), we propose the \(\Delta\)-Test, which explores when the magnitude of the geodesic length of the data set (\(\Delta\)) is small at the scale of the curvature of the manifold (Fig. 4, middle). For (ii), we propose the \(\delta\)-Test, which explores when the magnitude of the noise (\(\delta\)) is small at the scale of the curvature of the manifold (Fig. 4, right). In the experiments section, we explore the curvature of the space of 3D discrete surfaces to give numerical values to these guidelines. ### Geodesic Regression with Linear Residuals Model: We propose _geodesic regression with linear residuals_ (GRLR) to model the relationship between an independent variable \(X\in\mathbb{R}\), the noise-free dependent variable taking values in a manifold, and the (noisy) dependent variable \(Y\) taking values in \(\mathbb{R}^{D}\). In other words, we propose the following generative model: \[Y=\operatorname{Exp}(p,Xv)+\epsilon, \tag{11}\] where \(\operatorname{Exp}(p,Xv)\) is the noise-free dependent variable and \(\epsilon\) is the noise. The noise-free dependent variable is constrained to be a surface in \(\mathcal{I}\), and thus \(Y\)'s dependency on \(X\) is modelled using the Exp operation. However, in practical applications, the data's noise may push the data off of \(\mathcal{I}\). Thus, in addition to its computational gain, this generative model acknowledges the fact that there is no reason for the noise to be constrained to the manifold. Loss: Given data \((x_{i},y_{i})\in\mathbb{R}\times\mathbb{R}^{D}\), for \(i=1,\ldots,n\), we fit this regression model through least squares, i.e., we compute the estimates for the intercept and slope as: \[(\hat{p},\hat{v})=\arg\min_{(p,v)}\frac{1}{2}\sum_{i=1}^{n}\|\hat{y}_{i}-y_{i}\|^{2}\text{ for }\hat{y}_{i}=\operatorname{Exp}\left(p,x_{i}v\right), \tag{12}\] where the squared _geodesic distance_ of Eq. (7) has been replaced by the _squared Euclidean distance_, but we keep the exponential map defining the geodesic. Learning: Like geodesic regression, this least squares problem still requires gradient descent. The gradient can be computed as a Riemannian gradient or as an extrinsic gradient. The Riemannian gradient is given by: \[\nabla_{p}l =-\sum_{i=1}^{N}d_{p}\operatorname{Exp}\left(p,x_{i}v\right)^{\dagger}\epsilon_{i},\] \[\nabla_{v}l =-\sum_{i=1}^{N}x_{i}d_{v}\operatorname{Exp}\left(p,x_{i}v\right)^{\dagger}\epsilon_{i},\] where \(l\) is the loss function, \(\epsilon_{i}=y_{i}-\operatorname{Exp}\left(p,x_{i}v\right)\) are the residuals, \(d_{v}\) and \(d_{p}\) are derivatives, and \(\dagger\) is the adjoint. By avoiding the computation of \(n\) logarithms, the learning process enjoys a significant speed-up. Even with the use of the extrinsic gradient, the loss function avoids the computation of these logarithms and is thus accelerated. We note that we still need to compute the \(\operatorname{Exp}\) by numerical integration, and its derivatives, which we obtain by automatic differentiation. Our implementation is publicly available on GitHub. ## 5 Experiments We investigate the curvature of the space of surfaces to quantify which approximation scheme should be used on which dataset. Guided by this analysis, we approximate geodesic regression on 3D hippocampal surfaces, giving the first characterization of the hippocampal formation's shape change as a function of progesterone.
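To make the computational saving visible before turning to the experiments, here is a sketch (ours) of the GRLR objective of Eq. (12); compared with the geodesic loss sketched earlier, the \(n\) Log computations are replaced by plain subtractions, with the same assumed `metric` object as before.

```python
import numpy as np

def grlr_loss(p, v, x, y, metric):
    """Loss of Eq. (12): geodesic generative model, linear residuals."""
    loss = 0.0
    for xi, yi in zip(x, y):
        y_hat = metric.exp(xi * v, base_point=p)  # still one Exp per point
        residual = yi - y_hat                     # Euclidean residual, no Log
        loss += 0.5 * np.sum(residual ** 2)
    return loss
```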
### Curvature Estimation with \(\delta\)- and \(\Delta\)-Tests Simulations: We perform experiments on synthetic meshes to provide rules of thumb that help the practitioner decide when linear regression or geodesic regression with linear residuals can be used with little loss in accuracy on the space of discrete surfaces. Specifically, our experiments explore cases when lines can be used to approximate geodesics. First, we compute both a line and a geodesic between two meshes \(q^{\text{start}}\) and \(q^{\text{end}}\). Then, we compare the meshes along the line in \(\mathbb{R}^{N\times 3}\) to the meshes along the geodesic in \(\mathcal{I}\), where \(N\) is the number of vertices in the 3D meshes. We have either \(n=5\) or \(n=10\) meshes along each sequence. The start mesh \(q^{\text{start}}\) is an ellipsoid whose principal axes have length 2, 2, and 3. The end mesh \(q^{\text{end}}\) is a deformed version of the reference \(q^{\text{start}}\), where the amount of deformation is controlled by a factor that we vary in \(\{1\%,10\%,50\%,100\%\}\). This deformation factor indicates by how much each vertex in \(q^{\text{end}}\) has been moved compared to the vertex's position in \(q^{\text{start}}\) and is given as a percentage of the diameter of \(q^{\text{start}}\). \(q^{\text{end}}\) is generated by adding isotropic Gaussian noise to each vertex in \(q^{\text{start}}\). In other words, the standard deviation of the Gaussian noise is \(\sigma=\text{deformation}\times D\), where \(D\) is the diameter of the mesh. We determine how much the geodesic and the line between \(q^{\text{start}}\) and \(q^{\text{end}}\) differ by computing the root mean square deviation (RMSD) between the geodesic mesh sequence and the line mesh sequence, which we then normalize by the diameter \(D\) of the mesh: \[\text{RMSD}=\frac{1}{D}\sqrt{\frac{1}{TN}\sum_{t=1}^{T}\sum_{j=1}^{N}\|v_{tj}^{\text{line}}-v_{tj}^{\text{geodesic}}\|^{2}}, \tag{13}\] where \(N\) is the number of vertices, \(T\) the number of meshes in the sequence (5 or 10), and \(D\) the diameter. We also time the computation of the geodesic and the line and report their ratio.

Figure 5: Accuracy-speed trade-off between a geodesic and its linear approximation in joining two meshes \(q^{\text{start}}\) and \(q^{\text{end}}\). The x-axis (error) shows how their meshes differ by computing distances between their vertices. The y-axis (speed) shows the ratio of their computational times. The color represents the deformation factor, i.e., how deformed the mesh \(q^{\text{end}}\) is from the reference mesh \(q^{\text{start}}\). The symbols represent the number of vertices in the meshes. The number of steps is a parameter controlling the numerical integration computing the geodesic. The results are similar when \(n_{\text{steps}}=5\).

\(\delta\)-Test: Fig. 5 shows that deformation factors of 1% yield errors below 0.05% of the diameter of the shape (below 0.0005 on the figure). This is true across two values of the number of steps used for the numerical computation of the exponential map: \(n_{\text{steps}}=20\) and \(n_{\text{steps}}=5\) (see the section on why geodesic regression is slow). Consequently, for our \(\delta\)-Test: when the measurement noise on the vertices of the meshed shapes is expected to be less than 1% of the total mesh diameter, we recommend using linear residuals instead of geodesic residuals. In this case, we assess that the shape manifold can be approximated as linear at the scale of the residual length: the curvature is low compared to the magnitude of the noise.
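For completeness, a short sketch (ours, with illustrative names) of the normalized RMSD of Eq. (13) between the two mesh sequences:

```python
import numpy as np

def normalized_rmsd(line_seq, geodesic_seq, diameter):
    """Normalized RMSD of Eq. (13).

    line_seq, geodesic_seq : (T, N, 3) arrays of the T meshes sampled
                             along the line and along the geodesic.
    diameter               : diameter D of the reference mesh q_start.
    """
    T, N = line_seq.shape[:2]
    sq_dev = np.sum((line_seq - geodesic_seq) ** 2)  # sum over t, j and xyz
    return np.sqrt(sq_dev / (T * N)) / diameter
```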
By using linear residuals, we enjoy a considerable speed-up, up to 14M\(\times\) (for a number of steps of 20), as shown in Fig. 5, and up to 1.5M\(\times\) (for a number of steps of 5). As a real-world example, consider \(q^{\text{start}}\) as the mesh corresponding to a true hippocampal shape at a given level of progesterone, and \(q^{\text{end}}\) as the mesh that we observe in practice after segmenting and extracting the mesh from the MRI data. In this case, the "measurement noise" is the MRI noise and the segmentation error. We expect measurement noise to displace each vertex by around 1% of the total mesh diameter, since MRI images have a good resolution and both segmentation and meshing algorithms are reasonably accurate. Thus, brain MRIs are in the regime where geodesic regression can enjoy considerable speed-ups by utilizing linear residuals. \(\Delta\)-Test: Additionally, Fig. 5 shows that deformation factors of 10-50% yield an error that is less than 10% of the diameter (below 0.1 on the figure). Consequently, for our \(\Delta\)-Test: if the data set's largest deformation between two meshes is 10-50% of the diameter of the mesh, and if practitioners can tolerate a maximum loss of accuracy of 10% in their results, then using linear regression instead of geodesic regression allows them to significantly speed up their pipeline. In this case, we assess that the shape manifold can be approximated as linear on the scale of the data set: the curvature is low compared to the spread of the data. Even more strikingly, the error decreases from 10% to only 3% if practitioners consider meshes with hundreds of vertices, as shown in Fig. 5. ### Hippocampal Shape Change Characterization Data set: We use a time-series of 3D brain images recorded with magnetic resonance imaging (MRI): 11 images from 11 consecutive days, capturing the progesterone peak of a single female subject's natural menstrual cycle [15], as analyzed by volumetric analyses in [18]. We choose to focus on the progesterone peak (11 days), as opposed to the full menstrual cycle (30 days), for simplicity. The subject's blood hormone levels were also measured in conjunction with each MRI session. Pre-processing: We align the 3D images to correct for the position and orientation of the subject's head in the MRI scanner, and extract the surface of each substructure of the hippocampal formation. The results of this preprocessing are shown in Figure 1, where each substructure surface is color-coded and shown at two different levels of progesterone (low and high). Here, we can visually observe the hippocampal formation's shape evolution, which our analyses will seek to characterize with the slope and intercept learned from the proposed approximation of geodesic regression. We then use Eq. (3) to give every hippocampal mesh the same parameterization. Characterization: Consider \(q^{\text{start}}\) and \(q^{\text{end}}\), the meshes corresponding to hippocampal shapes at the lowest and highest progesterone levels, respectively. Fig. 1 shows that there is a deformation factor of around 10%: each vertex is displaced by around 10% of the total mesh diameter between the two meshes shown. Thus, following the \(\Delta\)-Test, we use linear regression to provide a characterization of 3D shape changes in the hippocampal formation during the menstrual cycle.

Figure 6: Linear regression reveals hippocampal deformation associated with an increase in progesterone during the menstrual cycle.
The two rows show two views of each 3D mesh along the sequence. For visualization purposes, a coloring of the mesh is used to represent depth. While volumetric analyses did not capture a volumetric change, our shape analysis reveals that an increase in progesterone corresponds to a shear deformation of the hippocampal formation.

Fig. 6 reveals for the first time that the hippocampal formation shears in response to an increase in progesterone. Additionally, this computation provides an important educational tool: clinical neuroscientists can use the result of our regression model to query, for a given progesterone level, the associated hippocampal shape. ## 6 Conclusion We have proposed a shape analysis technique to reveal what volumetric analyses could not: that the overall shape of the hippocampal formation changes during progesterone level fluctuations. The implications for women's health are profound. Because each structure of the brain is dedicated to a specific function, and the hippocampal formation is directly related to functions that deteriorate in women after menopause, characterizing how the hippocampal formation changes in response to sex hormone changes is critical. Not only does it provide a diagnostic for disease prediction, but it also offers a method to probe relationships between hormone level, hippocampal shape, and brain health. Here, we provide a practical method to characterize such changes with slopes and intercepts learned through approximation schemes for geodesic regression on the space of 3D discrete surfaces. This work aims to open research avenues for automated, fast, and statistically sound diagnostics of female brain health.
2303.00073
Cross-correlated quantum thermometry using diamond containing dual-defect centers
The contactless temperature measurement at micro/nanoscale is vital to a broad range of fields in modern science and technology. The nitrogen vacancy (NV) center, a kind of diamond defect with unique spin-dependent photoluminescence, has been recognized as one of the most promising nanothermometers. However, this quantum thermometry technique has been prone to a number of possible perturbations, which will unavoidably degrade its actual temperature sensitivity. Here, for the first time, we have developed a cross-validated optical thermometry method using a bulk diamond sample containing both NV centers and silicon vacancy (SiV) centers. Particularly, the latter allowing all-optical method has been intrinsically immune to those influencing perturbations for the NV-based quantum thermometry, hence serving as a real-time cross validation system. As a proof-of-concept demonstration, we have shown a trustworthy temperature measurement under the influence of varying magnetic fields. This multi-modality approach allows a synchronized cross-validation of the measured temperature, which is required for micro/nanoscale quantum thermometry in complicated environments such as a living cell.
Madhav Gupta, Tongtong Zhang, Lambert Yeung, Jiahua Zhang, Yayin Tan, Yau Chuen Yiu, Shuxiang Zhang, Qi Wang, Zhongqiang Wang, Zhiqin Chu
2023-02-28T20:43:25Z
http://arxiv.org/abs/2303.00073v3
# Cross-correlated quantum thermometry using diamond containing dual-defect centers ###### Abstract The contactless temperature measurement at micro/nanoscale is vital to a broad range of fields in modern science and technology. The nitrogen vacancy (NV) center, a kind of diamond defect with unique spin-dependent photoluminescence, has been recognized as one of the most promising nanothermometers. However, this quantum thermometry technique has been prone to a number of possible perturbations, which will unavoidably degrade its actual temperature sensitivity. Here, for the first time, we have developed a cross-validated optical thermometry method using a bulk diamond sample containing both NV centers and silicon vacancy (SiV) centers. Particularly, the latter allowing all-optical method has been intrinsically immune to those influencing perturbations for the NV-based quantum thermometry, hence serving as a real-time cross validation system. As a proof-of-concept demonstration, we have shown a trustworthy temperature measurement under the influence of varying magnetic fields. This multi-modality approach allows a synchronized cross-validation of the measured temperature, which is required for micro/nanoscale quantum thermometry in complicated environments such as a living cell. **Keywords**: Nitrogen-Vacancy Center, diamond, optical thermometry, cross-validation ## Introduction Performing high-resolution thermometry with nanoscale spatial resolution is crucial in studying multiple processes in diverse fields such as electronics, material science, and cell biology[1, 2, 3, 4, 5]. For example, multiple physiological properties in thermal biology are directly revealed by measuring the temperature in biological systems[6]. There is growing interest in developing non-contact luminescent thermometry techniques to explore the measurement of temperature at the single-cell level[7, 8]. Among the different approaches, the optically addressable nitrogen vacancy (NV) center in diamond nanoparticles remains one of the most-studied and well-adopted highly sensitive nanothermometers[9]. The NV center uses a magnetic spin transition whose resonance frequency is sensitive to thermally induced lattice expansion. This spin-based method has exceptionally high sensitivity, and has been shown to detect temperature variations down to 1.8 mK/√Hz in pure diamond and 200 mK/√Hz in nanodiamonds with a spatial resolution of 200 nm[8]. Several exciting applications of this method have been demonstrated, such as nanoscale thermometry in electronic devices[9], living cells[7], and C. elegans worms[10]. However, this spin-based method has been found susceptible to multiple artefacts such as fluctuating magnetic fields[11], microwave heating[12, 13, 14], and uncontrolled movements[15, 16]. In addition, the required microwave irradiation leads to considerable residual heating effects and might not be suitable for biological samples such as neuron cells[15]. Due to these practical implementation issues, several microwave-free all-optical approaches have been demonstrated using a series of diamond defects such as NV[17], silicon vacancy (SiV)[18], germanium vacancy (GeV)[19], and tin vacancy (SnV)[20, 21, 22] centers, which are relatively immune to those artefacts. Notably, most of the existing studies have relied on a single type of defect center, and only a few recent studies[23, 24, 25] have started to demonstrate the deployment of dual defects simultaneously.
For reliable temperature measurements in practical applications, it is highly desirable to develop a multi-modality thermometer allowing cross-validation of each individual readout method[5]. Recently, multi-mode temperature measurements have been demonstrated; e.g., Alkhtani et al. reported the simultaneous use of fluorescent nanodiamonds (FNDs) and lanthanide ion-doped upconverting nanoparticles (UCNPs) in fertilized bovine embryos[22]. They injected both FNDs and UCNPs into the embryos, performing Optically Detected Magnetic Resonance (ODMR) measurements for the FNDs and fluorescence spectral measurements for the UCNPs, aiming at increased measurement confidence. Similarly, temperature sensing with NV and nickel (Ni) diamond colour centers excited in the biological transparency window in FND crystals was also recently demonstrated[23]. In this work, we demonstrate a cross-validated/dual-mode optical thermometry method using NV and SiV defect centers in the same bulk diamond sample, without any loss in spatial or temporal resolution. Temperature measurements are obtained simultaneously using two different modalities, i.e., the thermally induced ODMR spectrum shift and the PL spectrum shift, corresponding to NV and SiV centers, respectively. We show that these two measurements have a perfect linear dependency, indicating that the confidence of the performed temperature measurement can be significantly improved. Furthermore, we have applied this method to demonstrate reliable thermometry even in the presence of fluctuating electromagnetic fields, which mimics a commonly encountered artefact. To the best of our knowledge, this is the first time that a spin-based and an all-optical approach have been combined within the same thermometer to perform reliable temperature measurements. Specifically, by simultaneously measuring temperature using two different modalities having different physics, our approach avoids sensor confusion and improves the measurement confidence. This could certainly help in addressing the recent controversy surrounding the interpretation of the heterogeneous temperature distribution in living cells[24, 25]. ### Co-existence of NV and SiV centers in diamond The NV center is a point defect in the diamond lattice, consisting of an adjacent pair of a nitrogen impurity and a lattice vacancy (inset, Fig. 1(a)). The electronic energy level structure of the NV center is shown in Fig. 1(a), consisting of a spin-1 system (S = 1) with a triplet ground state (\({}^{3}A_{2}\)) having electron sublevels of \(\mathrm{m_{S}}\) = 0 and \(\mathrm{m_{S}}\) = \(\pm\) 1. By applying microwaves resonant with the transition between \(\mathrm{m_{S}}\) = 0 and \(\mathrm{m_{S}}\) = \(\pm\) 1 in the ground state, the fluorescence decreases substantially; this process is called ODMR[8]. ODMR measurements can be performed to measure the Zero Field Splitting (D), which is equal to D = 2.87 GHz at room temperature. NV-ODMR based thermometry relies on the temperature dependence of D, originating from thermal expansion of the diamond lattice and the temperature dependence of the electron-phonon interaction[16, 26]. The SiV center is formed by replacing two neighbouring carbon atoms in the diamond lattice with one silicon atom, which places itself between the two vacant lattice sites (inset, Fig. 1(b)). The electronic energy level structure of the SiV center is shown in Fig. 1(b), consisting of doubly degenerate ground and excited orbital states split by spin-orbit coupling[27].
All four transitions between the two ground and two excited orbital states are dipole allowed, with a sharp zero phonon line (ZPL) at 738 nm (1.68 eV)[28, 29] and a minimal phononic sideband in a roughly 20 nm window around 766 nm. SiV-based thermometry relies on the temperature dependence of the ZPL parameters such as peak position and linewidth. The SiV center emits a much larger fraction of its emission into its ZPL, approximately 70% (Debye-Waller factor (DWF) ≈ 0.7), as compared to the NV center (DWF ≈ 0.04). Moreover, the ZPL is in the near-infrared region (NIR) and lies in the biological transparency window. These factors make the SiV center an attractive candidate for a variety of ultrasensitive all-optical thermometry-based applications[30]. A detailed optical characterization of our bulk diamond sample using our custom-built wide-field quantum diamond microscope is performed (see Fig. S1, Supporting Information), by measuring the photoluminescence (PL) spectrum and continuous-wave (CW) ODMR spectrum to experimentally confirm the coexistence of dual defect centers. Fig. 1(c) shows the PL spectrum measured at 25 degrees Celsius under 532 nm excitation. The ZPL peaks of NV and SiV at 637 nm and 737 nm, respectively, can be clearly observed, indicating their coexistence. Additionally, a standard CW-ODMR spectrum measurement at 25 degrees Celsius is also performed (Fig. 1(d)), which further confirms the presence of an ensemble of NV centers in the bulk diamond sample.

Figure 1: (a) Energy level diagram and atomic structure of the NV center; the inset shows the NV center crystal structure. (b) Energy level diagram and atomic structure of the SiV defect center; the inset shows the SiV crystal structure. (c) PL spectrum of the sample at 25 \({}^{\circ}\)C; the inset is a typical photo of our customized diamond sample. (d) Typical CW-ODMR spectrum at 25 \({}^{\circ}\)C with the relevant parameters extracted from Lorentzian data fitting.

A Lorentzian data fitting is performed to extract the ODMR contrast (~12%) and linewidth (12 MHz). Interestingly, the ODMR measurement is performed successfully under the influence of the SiV fluorescence as a background signal (with a 650 nm longpass filter). This demonstrates the mutual independence of the two methods and shows no significant crosstalk between the two defect centers in the diamond sample. A material characterisation of the bulk diamond sample used is also provided in the Supporting Information (XRD/Raman measurements are shown in Figure S2), indicating the high crystalline quality of our sample. Additional measurements need to be performed to quantitatively estimate the exact NV/SiV concentrations in the bulk diamond sample. Apart from the PL and ODMR measurements, the performance of the two thermometry methods also needs to be characterised, to confirm whether the co-existence of the two defect centers in the sample has any adverse effects on the sensitivity. The photostability of the sample was measured by recording photon counts under 532 nm excitation for over 30 minutes at 25 degrees Celsius, and a deviation of less than 0.41% was measured (Fig. 2(a)). To estimate the precision of SiV thermometry, we measure the uncertainty (standard deviation) in the ZPL peak position as a function of integration time at a fixed temperature of 25 degrees Celsius (Fig. 2(b)). By performing the appropriate shot-noise-limited data fitting to the obtained curve, a thermal noise floor of 155 mK/√Hz can be extracted.
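As an illustration of the Lorentzian fitting step, the sketch below fits a single-dip CW-ODMR trace with scipy; the dip model and the initial guesses (chosen near the values reported above) are our assumptions, not the exact analysis code used in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(f, f0, fwhm, contrast, baseline):
    """CW-ODMR model: flat fluorescence baseline with a Lorentzian dip."""
    half = fwhm / 2
    return baseline * (1 - contrast * half**2 / ((f - f0) ** 2 + half**2))

def fit_odmr(freqs_hz, counts):
    """Extract resonance frequency, linewidth and contrast from a trace."""
    p0 = [2.87e9, 12e6, 0.12, counts.max()]  # guesses near reported values
    popt, _ = curve_fit(lorentzian_dip, freqs_hz, counts, p0=p0)
    f0, fwhm, contrast, _ = popt
    return f0, fwhm, contrast
```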
Similarly, to estimate the precision of NV-ODMR based thermometry, a Lorentzian data fitting on the CW-ODMR spectrum is performed to extract the contrast and linewidth, and then the well-known theoretical relation is used to calculate the sensitivity of 22 mK/√Hz (see Supporting Information). The measured noise floors for both methods are comparable to the sensitivities reported in previous works. ### Cross-validated Temperature Measurement Temperature measurements are demonstrated by measuring the NV-ODMR resonance frequency and the SiV ZPL parameters (peak position, peak FWHM) as a function of diamond sample temperature. At each temperature, the ODMR and PL spectrum measurements are performed, and the parameters mentioned above are extracted by performing the relevant data fitting. The temperature of the diamond sample was stabilized using an electronic temperature-controlled system (global heating source). Figure 3(a) demonstrates NV-ODMR based thermometry, where the resonance frequency is measured as a function of temperature and a strong linear dependence is observed. A susceptibility of 73.79 kHz/°C is derived from linear fitting, which closely matches previously reported values. Similarly, Figure 3(b) demonstrates the all-optical SiV-based temperature measurement, where the ZPL peak position and FWHM are shown to have a strong linear dependence on temperature, and the susceptibilities are derived from the corresponding linear fitting. The key result of our project is shown in Figure 3(c), where the temperature measurement is performed simultaneously using the NV-ODMR and SiV-ZPL shift-based methods. The accurate linear relationship (high \(R^{2}\) value) between the ODMR resonance frequency and SiV ZPL position provides strong evidence that the two independent mechanisms can be used to cross-validate each other simultaneously and synchronously. Additionally, we further demonstrate cross-validation temperature measurements by using the excitation laser (532 nm) as a local heating source. The NV-ODMR resonance frequency and SiV ZPL peak position are measured simultaneously as a function of laser power, in both static and dynamic settings (Figure S3). A close agreement between the two temperature measurement methods is found, which practically demonstrates how the all-optical SiV ZPL measurements can be used to verify the spin-based NV-ODMR measurements in a stable and repeatable manner (see Supporting Information for experimental details). After demonstrating the viability and feasibility of our cross-validation approach, we explore a practical application of this method in the next section.

Figure 3: (a) The ODMR resonance frequency is measured as a function of temperature, and a thermal susceptibility of 73.79 kHz/°C is extracted. Error bars represent the standard deviation calculated from 10 individual measurements. (b) The SiV ZPL position and FWHM are measured as a function of temperature, and thermal susceptibilities of 0.0084 nm/°C and 0.0398 nm/°C are extracted, respectively. (c) The SiV ZPL position/FWHM measurement is shown as a function of the NV-ODMR resonance frequency. Error bars for (b) and (c) are smaller than the size of the data points.

Figure 2: (a) The monitored time trace of fluorescence in the diamond sample for a period of 30 minutes. The photon counts are measured for an exposure time of 10 ms and averaged over 400 pixels. (b) The measured temperature precision of the all-optical SiV-based temperature measurement as a function of integration time.
A noise floor of 155 mK/√Hz is extracted from the shot-noise fitting. ### Practical application of the developed cross-validated method The main driving force of this project is that the NV-ODMR measurement is susceptible to multiple artefacts such as external fluctuating electromagnetic fields, microwave-induced heating, etc. This hampers the quality of the measurements and inevitably reduces the accuracy and reliability of the measurement performed. As a comparison, the SiV-based temperature method is all-optical and is immune to the above-mentioned artefacts. As a proof-of-concept demonstration, we study the effect of a randomly oriented dynamic magnetic field as a measurement artefact for the NV-ODMR based measurement. The B field is applied manually by the experimenter and fluctuates randomly both in magnitude and direction, with a typical frequency on the order of Hz. This mimics a common measurement artefact (fluctuating electromagnetic fields) found in many systems, e.g., in complex biological environments such as neuron cells and cardiac tissues. While it is possible to accurately measure temperature in the presence of magnetic noise (and vice versa), either by using multi-point methods [56] or by performing a mathematical analysis on the fitted resonance frequencies [8, 9], these methods only work if the B field can be assumed to be static during the measurement time of the ODMR spectrum. Figure 4(a) and (b) demonstrate how an external magnetic field leads to the splitting of the ODMR spectrum, resulting in a lower contrast and a higher linewidth. These factors lead to a degraded temperature sensitivity for the NV-based ODMR measurement. As a proof-of-concept demonstration, we measure the ODMR resonance frequency as a function of time (Figure 4(c)), both in the presence and absence of an external fluctuating magnetic field. As expected, the measured temperature change has a large variance and standard deviation when a dynamic external magnetic field is applied. On the contrary, when the same temperature measurement is performed with the all-optical SiV-based spectrum parameters (Figure 4(d)), there is a significantly lower variance when an external magnetic field is applied. This demonstrates how the SiV-based method can be used to validate the NV-based ODMR method in the presence of measurement artefacts, as shown for fluctuating magnetic fields. ### Discussion In this letter, we have proposed that it is feasible to effectively combine the high sensitivity, accuracy, and stability of the NV-ODMR measurement with the advantages of the artefact-free all-optical SiV approach, without any loss in spatial or temporal resolution, to improve the reliability and measurement confidence of temperature measurements. We envision a variety of novel applications in the future by combining multiple defect centers in the same diamond sample, simultaneously utilizing the advantages and use-cases of different colour centers together. For example, dual-defect centers can allow us to decouple the temperature (T) and B field measurements, significantly simplifying the measurement and making it more reliable. Simultaneous T and B measurements have multiple applications and have been performed multiple times before [8, 9]. However, this involves mathematical calculations on the multiple extracted resonance frequencies and can only be accurately and reliably done when T and B are static during the time taken to perform the measurement.
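As a sketch of how the two read-outs can cross-validate each other in practice, the snippet below converts both measured shifts into temperature changes using the susceptibilities reported above; the sign convention for the NV shift (D decreasing as T increases) and the acceptance tolerance are our assumptions, not values prescribed by this work.

```python
NV_SLOPE_HZ_PER_C = 73.79e3   # measured NV susceptibility (73.79 kHz/degC)
SIV_SLOPE_NM_PER_C = 0.0084   # measured SiV ZPL susceptibility (nm/degC)

def cross_validated_delta_t(delta_f_hz, delta_lambda_nm, tolerance_c=0.5):
    """Convert both modalities to a temperature change and compare them."""
    dt_nv = -delta_f_hz / NV_SLOPE_HZ_PER_C   # assumed sign: D drops with T
    dt_siv = delta_lambda_nm / SIV_SLOPE_NM_PER_C
    consistent = abs(dt_nv - dt_siv) < tolerance_c
    return dt_nv, dt_siv, consistent
```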
Using dual-mode quantum sensing, i.e., SiV for T measurements and NV for B-field measurements, allows us to make precise magnetic field measurements in the presence of temperature fluctuations and vice versa. Furthermore, while the current measurements were performed on a bulk diamond sample, which might limit the scope of its practical applications, we propose a sample fabrication procedure to obtain nanodiamonds containing dual defect centers. That would allow cross-validation of temperature measurements with a nanoscale spatial resolution. The nanodiamond sample fabrication procedure is described in the Supporting Information, and preliminary experimental results indicating the coexistence of NV and SiV are shown in Figure S3. A clear ODMR spectrum and SiV ZPL peak indicate the co-existence of NV and SiV ensembles in the nanodiamond sample.

Figure 4: Influence of dynamic magnetic fields on the ODMR-based diamond thermometry. (a) CW-ODMR spectra in the absence of a B field. (b) CW-ODMR spectra in the presence of a randomly oriented dynamic B field. (c) Time trace of the measured temperature change for the NV-ODMR based method. (d) Time trace of the measured temperature change for the all-optical SiV ZPL based method.

Further optimisation of the sample fabrication procedure needs to be performed in order to perform measurements at the single-particle level. Additionally, a detailed characterisation of the sensitivity and performance for the nanodiamond case still needs to be performed. Moreover, performing a ball-milling process on the currently used bulk diamond sample should also yield nanodiamonds with a coexistence of NV and SiV centers. In conclusion, in this work we have succeeded in measuring temperature simultaneously using two independent mechanisms (NV-ODMR and SiV-ZPL) without any substantial loss in sensitivity or temporal resolution. The use of two modalities enables cross-validation and makes these results far more reliable for nanothermometry in complex systems such as living cells. ## 1 Associated content ### Supporting Information The Supporting Information is available free of charge on the ACS Publications website. ## 2 Author information ### Corresponding Author Zhiqin Chu: Department of Electrical and Electronic Engineering, Joint Appointment with School of Biomedical Sciences, The University of Hong Kong, Pokfulam Road, Hong Kong, China. Email: zqchu@ece.hku.hk. ### Conflicts of interest There are no conflicts to declare. ### Notes The authors declare no competing financial interests. ## 3 Acknowledgment Z.Q.C. acknowledges the financial support from Hong Kong SAR Research Grants Council (RGC) Early Career Scheme (ECS; No. 27202919); Hong Kong SAR RGC Research Matching Grant Scheme (RMGS; No. 207300313); Hong Kong SAR Innovation and Technology Fund (ITF) through the Platform Projects of the Innovation and Technology Support Program (ITSP; No. ITS/293/19FP); and HKU Seed Fund (No. 202011159019 and No. 202010160007).
2309.05150
Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation
To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy.
Mohammad Hosseini, Mahmudul Hasan
2023-09-10T21:54:03Z
http://arxiv.org/abs/2309.05150v1
# Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation ###### Abstract To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy. Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing ## I Introduction Automated content moderation has become an essential aspect of online platforms in recent years, with explosive growth in user-generated content. As video-sharing websites and online marketplaces have become popular, they have also become a hub for the dissemination of videos containing explosions, which can be disturbing and harmful to younger audiences. Content moderation, therefore, is crucial to ensure user safety and compliance with laws and regulations, particularly for video broadcasting where scenes of explosions are often depicted. One important aspect of content moderation in this context is the detection of scenes unsafe for kids, which includes identifying explosive content and other forms of violent or disturbing imagery. Automated explosion detection techniques are crucial for enabling quick and effective responses to ensure public safety and security, and to prevent the spread of harmful content on online platforms. In this context, there has been significant interest in applying deep neural networks to address many important real-world computer vision problems, including automated content moderation and explosion detection. However, to achieve higher accuracy, the complexity of neural networks has increased dramatically over time, leading to a challenge in terms of memory and computational resources. Moreover, larger datasets are required to provide higher prediction accuracy and address false positives. While ensemble learning techniques are designed to improve classification accuracy, they often create a subset dataset from the original dataset, leading to the same results since each model is getting the same input. Therefore, ensemble models may not perform well without feature engineering. In this paper, we propose a novel ensemble structure consisting of two lightweight deep models that are designed to perform high-accuracy and fast classification for both image and video classification use cases, specifically targeting explosion detection.
Our design uses a verification-based combination of two lightweight deep models, each individual model making predictions on a different color feature: the main color-based model, which operates on 3 RGB color channels, and a secondary structure-oriented model, which operates on a single grayscale channel, focusing more on the shape of the object through intensity than on its dominant colors. We implemented and evaluated our approach for the explosion detection use case, where video scenes containing explosions are identified. Our evaluation results, based on experiments on a large test set, show considerable improvements in classification accuracy over using a ResNet-50 model, while benefiting from structural simplicity and a significant reduction in inference time as well as computation cost by a factor of 7.64. While our approach is applied to explosion detection scenarios, it can be generalized to other similar image and video classification use cases. (Fig. 1: An abstract structure of our verification-based ensemble model.) Based on the insights gained from our evaluations, we further make an argument to "think small, think many", which aims to beat the complexity of large models with the simplicity of smaller ones. Our approach replaces a single, large, monolithic deep model with a verification-based hierarchy of multiple simple, small, and lightweight models with step-wise contracting color spaces, possibly resulting in more accurate predictions. In this paper, we provide a detailed explanation of our approach and its evaluation results, which we believe will be beneficial for researchers and practitioners working in the field of computer vision. As automated content moderation and explosion detection are crucial for user safety and platform compliance with regulations, our proposed approach can play a vital role in making online spaces more secure for everyone. Furthermore, the increased efficiency in automated compliance and moderation can reduce the burden on manual efforts, allowing human moderators to focus on more nuanced content issues. The remainder of the paper is structured as follows. Section II provides an overview of relevant background information and related work. In Section III, we describe the design and architecture of our proposed ensemble approach. Section IV outlines our experimental setup and presents our evaluation results. In Section V, we discuss our findings and explain how our approach can be extended to other content moderation use cases. Finally, in Section VI, we summarize our contributions and offer concluding remarks. ## II Related Work Ensemble models generally aim to improve classification accuracy. Multiple models are used to make predictions for each data point, with each sub-model trained individually through variations in the input data. The predictions by each model are considered as a vote, where all votes can later be fused into a single, unified prediction and classification decision. To achieve this, various techniques exist to combine predictions from multiple models. Voting, averaging, bagging and boosting are among the widely used ensemble techniques [1, 2, 3]. In max-voting, for instance, each individual base model makes a prediction and votes for each sample; the class with the most votes becomes the final prediction. In the averaging approaches, the average of the individual models' predictions is calculated for each sample (a minimal sketch of these two fusion rules follows).
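As a rough illustration of the two fusion rules just described, the following is a minimal NumPy sketch; the function names are ours and the snippet is illustrative, not an implementation from the paper:

```python
import numpy as np

def max_voting(votes):
    """Hard fusion: `votes` has shape (n_models, n_samples) with integer
    class labels; returns the majority class per sample."""
    return np.array([np.bincount(votes[:, j]).argmax()
                     for j in range(votes.shape[1])])

def averaging(probs, threshold=0.5):
    """Soft fusion: `probs` has shape (n_models, n_samples) with predicted
    positive-class probabilities; returns binary labels and mean scores."""
    mean_probs = probs.mean(axis=0)
    return (mean_probs >= threshold).astype(int), mean_probs

# Three models voting on four samples
votes = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 1, 1, 0]])
print(max_voting(votes))  # -> [1 1 1 0]
```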
In bagging techniques, the variance of the prediction model is decreased by random sampling and generating additional data in the training phase. In boosting, subsets of the original dataset are used to train multiple models which are then combined in a specific way to boost the prediction. Unlike bagging, here the subset is not generated randomly. While effective, many of these broad approaches require creating subsets of the original dataset on which the individual models are trained, so there is a high chance that the models will produce very similar results since they effectively learn from the same input. A body of work has shown the potential of ensemble methods in improving the performance and accuracy of deep learning models for a variety of classification tasks. In [4], the authors propose an ensemble algorithm to address overfitting by examining the paths between clusters in the hidden spaces of neural networks. The authors in [5] present MotherNets, which enable the training of large and diverse neural network ensembles while reducing the number of epochs needed to train an ensemble. In [6], the authors propose a geometric framework for a priori determining the ensemble size through studying the effect of ensemble size with majority voting and optimal weighted voting aggregation rules. Another approach that has been explored in the literature is the use of ensembles of weak classifiers, such as decision trees, to improve classification performance. For example, in [7, 8], the authors investigated decision tree-based ensemble learning for the classification of various cancers. In this paper, we argue that transforming a single, large, monolithic deep model into an ensemble of multiple smaller models can potentially enable higher accuracy, while benefiting from reduced training costs and faster inference time. Our proposed ensemble approach differs from these existing methods in several ways. First, we focus on a specific use case of content moderation, specifically the detection of violent or explosive content. Second, we use a set of lightweight models with narrowed-down color features, which reduces the computation cost and enables faster inference compared to larger, more complex models. Third, our ensemble architecture is designed to handle both images and videos, which is an important consideration for real-world content moderation applications. Our approach is independent and can further be combined with the other ensemble techniques discussed above. ## III Methodology Our proposed methodology is designed to address the limitations of using a single-model approach for image classification tasks, specifically for explosion detection. Our experimental results showed that using only a color-oriented model, which focuses on RGB color features, can result in false positives due to the misclassification of scenes with light-emitting sources. On the other hand, while a grayscale model can eliminate color-induced false positives, it introduces other false positives that are similar in structural shape to explosions or fires (Fig. 2: an example false detection when making a prediction using a grayscale model; in grayscale, the image structure resembles an explosion). To overcome these limitations, we propose a verification-based ensemble structure that combines the strengths of both models, Model C and Model L. Model C is used as the primary classifier.
Its predictions are verified or validated by Model L, which is more structure-oriented and uses grayscale (L) features. This verification step helps to filter out false positives and improve overall prediction accuracy. Figure 1 illustrates the abstract design of our proposed verification-based ensemble structure, which involves step-wise contracting color spaces. In this structure, if Model C predicts an input sample as negative, the overall prediction is negative; only if Model C predicts an input sample as positive is the prediction verified or validated with Model L. Our proposed architecture is therefore designed around two insights. Firstly, for many use cases such as our explosion image classification task, we observed that while the use of a single model operating on RGB-based color features provides higher accuracy, it can wrongly classify scenes such as sunlight, lamps, or light-emitting sources as explosions. Similarly, while employing an individual model based on grayscale features can eliminate such color-induced false positives, it further introduces other false positives which are similar to explosions or fires in structural shape. Through our experimentation, we realized that a grayscale model can potentially identify bushes or trees, clay grounds, clouds, or steam as fire or explosions simply because it lacks color information. This problem is exacerbated in video frames due to motion blur. Therefore, we experimented with combining both color and grayscale intensity (which focuses more on structure) to provide a higher classification accuracy. Figure 2 depicts an example false positive identified as an explosion when using only the grayscale model. Secondly, limiting the features, in this case removing chrominance and keeping only luminance as the color feature, generally lowers learnability because there are fewer features to train on; such a model therefore tends toward higher recall and lower precision. Our evaluation showed that passing predictions from a supposedly higher-precision model to a higher-recall model can filter out false positives and therefore increase overall prediction accuracy. Figure 3 illustrates the structural details of an extended version of our ensemble design, which is applied to video frames. To process video frames, each frame is captured and resized to 300x300, and the pixels along with their 3 color channels are forwarded to a frame pre-processing phase. We chose the input dimension of 300x300 based on a trade-off between computation cost and prediction accuracy. To reduce noise, an anti-aliasing technique is applied to every frame as part of the pre-processing phase. To extract features, color channels are separated from each frame, producing RGB color features (signified as C). A Color Channel Transformation module transforms the 3 RGB channels to grayscale-only features (signified as L). The original RGB features are passed to Model C directly, while the grayscale features are passed to Model L. Each model produces a binary prediction output, signifying positive (i.e., explosion in our use case) or negative (i.e., non-explosion). After conducting a thorough evaluation of precision and recall trade-offs, we selected a prediction threshold of 90% for both models. This threshold indicates that any detection with a score above 90% will be considered a positive detection. A minimal sketch of this per-frame pipeline is given below.
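The sketch assumes two trained Keras-style binary classifiers (`model_c` on 3-channel RGB, `model_l` on a single grayscale channel) and uses OpenCV for resizing and color conversion; the helper names and the choice of `INTER_AREA` for anti-aliased resizing are our assumptions, not details from the paper:

```python
import cv2          # assumed available for resizing and color conversion
import numpy as np

THRESHOLD = 0.90    # positive-detection threshold used for both models

def preprocess(frame):
    """Resize to 300x300 with anti-aliasing and build the two feature views."""
    rgb = cv2.resize(frame, (300, 300), interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)[..., None]  # single L channel
    return rgb.astype(np.float32) / 255.0, gray.astype(np.float32) / 255.0

def predict_frame(frame, model_c, model_l):
    """Verification-based ensemble: Model L validates Model C positives."""
    rgb, gray = preprocess(frame)
    score_c = model_c.predict(rgb[None], verbose=0)[0, 0]
    if score_c < THRESHOLD:        # Model C negative -> overall negative
        return False
    score_l = model_l.predict(gray[None], verbose=0)[0, 0]
    return score_l >= THRESHOLD    # positive only if Model L agrees
```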
After the predictions from the individual models are made, our proposed post-processing technique applies a validation-based mechanism to the results from Model C (predictions C) and Model L (predictions L). This mechanism involves _verifying_ the positive predictions made by Model C by comparing them with the predictions made by Model L. (Fig. 3: Details of our proposed ensemble extended towards video analytics; C and L represent colored and gray-scale features. Fig. 4: The internal architecture of our base models; Model C and Model L both use the same architecture.) The overall prediction is considered positive only if Model L also made a positive prediction. This step helps to filter out false positives and improve the overall accuracy of the ensemble. For video frames, we additionally employ a temporal coherence check to further improve the prediction accuracy. The temporal coherence check ensures that the predicted labels for subsequent frames in a video sequence are consistent. Specifically, for Model C predictions, we compute the majority vote of the labels for a set of three consecutive frames. If two or more frames in the set are predicted as positive, the majority label is considered positive. This majority value is then fused with the prediction from Model L through the same validation-based approach to derive the final outcome using a 1-N validation approach. For every positive frame \(i\) (labeled as "explosion") in Model C predictions, we check the three neighbor frames \(i-1\), \(i\), and \(i+1\) in Model L predictions. If at least one of these frames is predicted as positive (labeled as "explosion"), the final prediction is positive. While our video-specific post-processing approach can be generalized to other numbers of neighbors, such as 1 or 5, we found that our choice of three consecutive frames provides an efficient trade-off between the final precision and recall values. Figure 4 depicts the internal architecture of each of the two models. Both Model C and Model L are feed-forward convolutional neural networks that consist of multiple groups of a 2D convolution layer (Conv2D), followed by a Max-Pooling layer and a Batch Normalization layer. The models have five such layers, with the number of convolution filters and kernel sizes as illustrated in Figure 4. The first Conv2D layer has 32 filters with a 5x5 kernel, the second has 64 filters with a 3x3 kernel, and the remaining Conv2D layers have 128, 256, and 64 filters, respectively, each with a 3x3 kernel. To standardize the inputs passed to the next layer and accelerate the training process while reducing the generalization error, we apply batch normalization. We also use a dropout mechanism with a rate of 0.2 to prevent possible overfitting. A Flatten layer then flattens the output features into a one-dimensional array before they enter the next layer. The produced data is then passed through three dense layers and another dropout layer to achieve the final binary prediction. Rectified Linear Unit (ReLU) is used as the activation function for all the Conv2D and Dense layers. While we designed this specific sequential neural network as our base model, we acknowledge that other lightweight models such as MobileNetV2 [9], SqueezeNet [10], or ShuffleNet [11] can also serve as a base model for our ensemble design.
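A hedged TensorFlow/Keras sketch of the base architecture and the video post-processing follows: the filter counts, kernel sizes, 0.2 dropout rate, ReLU activations, and three-dense-layer head follow the description above, while the pooling sizes, dense-layer widths, and the handling of the 3-frame window at clip boundaries are our assumptions, since they are not specified:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_base_model(channels):
    """Base CNN shared by Model C (channels=3) and Model L (channels=1)."""
    inputs = keras.Input(shape=(300, 300, channels))
    x = inputs
    # Five Conv2D blocks: 32 filters @5x5, then 64/128/256/64 filters @3x3
    for filters, kernel in [(32, 5), (64, 3), (128, 3), (256, 3), (64, 3)]:
        x = layers.Conv2D(filters, kernel, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D(2)(x)          # pool size: our assumption
        x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)  # dense widths: assumptions
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    return keras.Model(inputs, layers.Dense(1, activation="sigmoid")(x))

def fuse_video_predictions(pred_c, pred_l):
    """3-frame majority vote on Model C, then 1-N validation with Model L."""
    final = [False] * len(pred_c)
    for i in range(len(pred_c)):
        window = pred_c[max(0, i - 1):i + 2]       # frames i-1, i, i+1
        if sum(window) >= 2:                       # majority of the set
            # positive if any neighbouring Model L prediction is positive
            final[i] = any(pred_l[max(0, i - 1):i + 2])
    return final
```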
## IV Evaluation To evaluate our approach, we created a dataset of approximately 14,000 images, consisting of around 8,000 negative and 6,000 positive images obtained from frames of real explosion footage. We split the dataset into training and validation sets, with the validation set comprising 20% of the whole dataset. Our models were implemented on an Intel x86 64-bit machine running Ubuntu 14.04.5 LTS, using Keras 2.3.1 with the Tensorflow 1.13.1 back-end. We trained each of our models for 400 epochs and saved the best model with the lowest validation loss. Our evaluations of the proposed ensemble methods were conducted against the popular ResNet-50 architecture using a set of 15 test videos of varying contexts, including popular TV series such as MacGyver, Britannia, and NCIS: Los Angeles, with an average duration of around 52 minutes and an average of 78,750 frames per video, encoded in 720p and 1080p resolutions. Human operators inspected the videos in multiple rounds to provide ground truth data with the time intervals where explosions happened, with an average of 10.75 distinct explosion scenes recorded as ground truth per test video. We compared the accuracy results of our proposed ensemble model and the ResNet-50 model used as the back-end of a state-of-the-art Faster R-CNN detection network [12] as part of our evaluation. We measured the median precision, recall, and F1 score of the two models on the classification task. Since the ground truth data provided by the human operators were recorded as time intervals, we converted the detections' frames to timestamps and considered a match if a detection was within a second of the recorded ground truth time (a sketch of this matching step is given at the end of this section). Figure 5 compares our proposed approach against the ResNet-50 model used as the back-end of a Faster R-CNN detection network, while Figure 6 shows how the number of parameters and inference time of our proposed ensemble method compare with the popular ResNet-50 architecture. (Fig. 5: Comparison of our proposed approach against the ResNet-50 model used as the back-end of a Faster R-CNN detection network. Fig. 6: The number of parameters (left) and the inference runtime (right) of our proposed ensemble relative to the popular ResNet-50 architecture.) On an average video, our proposed approach was able to achieve 100% precision, significantly higher than the 67% precision of the popular ResNet-50 model used as the back-end of a Faster R-CNN detection network. Our approach eliminated many false positives, potentially saving hundreds of hours of manual content moderation and explosion detection checks by removing the need to verify false detections in the reference videos through manual inspections. In addition, our proposed structure is significantly lighter, with almost 19x fewer parameters, and decreases inference run-time by a large factor, running almost 7.64x faster than the complex ResNet-50 model. These efficiency gains are critical in automated content moderation and explosion detection scenarios, where platforms need to moderate vast amounts of user-generated content quickly and accurately to ensure compliance with regulations and user safety. Figure 7 shows examples of correctly detected explosion scenes on test videos. (Fig. 7: Examples of correctly detected explosion scenes on test videos.)
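The matching step referenced above can be sketched as follows; the function name, frame rate, and interval representation are our assumptions for illustration:

```python
def matches_ground_truth(frame_idx, fps, gt_intervals, tol=1.0):
    """Convert a detection's frame index to a timestamp and match it
    against annotated (start, end) intervals with a 1-second tolerance."""
    t = frame_idx / fps
    return any(start - tol <= t <= end + tol for start, end in gt_intervals)

# Explosion annotated between 12.0 s and 14.5 s; a detection at frame 290 of
# a 25 fps video (11.6 s) counts as a match under the 1 s tolerance.
print(matches_ground_truth(290, 25, [(12.0, 14.5)]))  # True
```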
## V Discussion We believe that the design and experiments we conducted on the explosion detection use case are generalizable and can be applied to other similar image classification tasks and content moderation use cases as well. We therefore invite researchers to apply our design to other image or video classification tasks, especially those involving the detection of non-rigid objects where color might be a dominant characteristic of the object, such as blood gore detection, smoke detection, fire detection, steam detection, and so on. The proposed ensemble approach can be useful for identifying and removing inappropriate or harmful content from online platforms or during broadcasting content moderation. Based on the insights gained from our experiments, we propose a _"think small, think many"_ strategy for classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step-model ensemble of multiple small, simple, and lightweight models with narrowed-down features, like our shrinking color spaces, can lead to predictions with higher accuracy. In this paper, we demonstrated that our ensemble design was founded upon two base models, a primary and a secondary model, and used color spaces as the notion of features. The secondary model is fed with a limited set of the features delivered to the primary model, validating predictions made by the primary model. We believe that our design can be extended and generalized to a validation-based ensemble of three or more base models. Figure 8 depicts a sample illustration of our validation-based ensemble structure extended towards higher numbers of models. (Fig. 8: An abstract extension of our proposed ensemble design.) In this example extension, Model 2 validates predictions made by Model 1, and Model 3 further validates predictions made by Model 2, while the features passed become more limited as we iterate from Model 1 to Model 3. In this example, Model 1 operates on all three RGB channels, Model 2 operates on two color channels (RG, GB, or BR), and Model 3 performs validations only on a single color space, whether grayscale or any of the R, G, or B channels. We believe this abstract concept can be generalized and extended to any feature set beyond only color spaces, which would be an avenue of exploration for the research community to consider (see the sketch below). In conclusion, our work demonstrates the effectiveness of ensembling multiple small, simple, and lightweight models with narrowed-down features for image classification tasks. We hope that our proposed design and the open call to apply it to other image or video classification tasks will inspire and benefit the research community.
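A minimal sketch of this generalized validation cascade, with hypothetical `models` and `transforms` lists (identity for RGB, channel selection for RG, and a grayscale conversion); the structure is ours, not code from the paper:

```python
import cv2

def cascade_predict(image, models, transforms, threshold=0.9):
    """Validation cascade: model k+1 re-checks the positives of model k on a
    progressively narrower feature view; any stage can veto a detection."""
    for model, transform in zip(models, transforms):
        features = transform(image)
        if model.predict(features[None], verbose=0)[0, 0] < threshold:
            return False
    return True  # positive only if every stage agrees

# Hypothetical three-stage views: RGB -> RG channels -> grayscale
transforms = [
    lambda img: img,                                                # Model 1: RGB
    lambda img: img[..., :2],                                       # Model 2: RG
    lambda img: cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)[..., None],   # Model 3: L
]
```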
## VI Conclusion In this paper, we proposed an efficient and lightweight deep classification ensemble structure, designed for high-accuracy content moderation and violence detection in videos with low false positives. Our approach is based on a set of simple visual features and utilizes a combination of lightweight models with narrowed-down color channels. We evaluated our approach on a large dataset of explosion and blast contents in TV movies and videos and demonstrated significant improvements in prediction accuracy compared to popular deep learning models such as ResNet-50, while benefiting from faster inference and lower computation cost. Our proposed approach is not limited to explosion detection in videos but can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we suggest a _"think small, think many"_ philosophy in deep object classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based hierarchy of multiple small, simple, and lightweight models with narrowed-down visual features can potentially lead to predictions with higher accuracy, while maintaining efficiency and reducing computational requirements.
2301.07165
Inferring the rate of technosignatures from 60 yr of nondetection
For about the last 60 years the search for extraterrestrial intelligence has been monitoring the sky for evidence of remotely detectable technological life beyond Earth, with no positive results to date. While the lack of detection can be attributed to the highly incomplete sampling of the search space, technological emissions may actually be rare enough that we are living in a time when none cross the Earth. Here we explore the latter possibility and derive the likelihood of the Earth not being crossed by signals for at least the last 60 years to infer upper bounds on their rate of emission. Under the assumption that technological emitters are distributed uniformly in the Milky Way and that they generate technoemissions at a constant rate, we find less than about one to five emissions generated per century with a 95% credible level. This implies optimistic waiting times until the next crossing event of no less than 60-1800 years with a 50% probability. A significant fraction of highly directional signals raises the upper bounds on the emission rate, but without systematically changing the waiting time. Although these probabilistic bounds are derived from a specific model and their validity depends on the model's assumptions, they are nevertheless quite robust against weak time dependences of the emission rate or nonuniform spatial distributions of the emitters. Our results therefore provide a benchmark for assessing the lack of detection and may serve as a basis to form optimal strategies for the search for extraterrestrial intelligence.
Claudio Grimaldi
2022-11-19T12:59:09Z
http://arxiv.org/abs/2301.07165v2
# Inferring the rate of technosignatures from sixty years of nondetection ###### Abstract For about the last 60 years the search for extraterrestrial intelligence has been monitoring the sky for evidence of remotely detectable technological life beyond Earth, with no positive results to date. While the lack of detection can be attributed to the highly incomplete sampling of the search space, technological emissions may actually be rare enough that we are living in a time when none cross the Earth. This possibility has been considered in the past, but not to quantitatively assess its consequences on the galactic population of technoemissions. Here we derive the likelihood of the Earth not being crossed by signals for at least 60 years to infer upper bounds on their rate of emission. We found less than about one to five emissions per century generated from the Milky Way (95% credible level), implying optimistic waiting times until the next crossing event of no less than 60 to \(1,800\) years with a 50% probability. A significant fraction of highly directional signals raises the upper bounds on the emission rate, but without systematically changing the waiting time. Our results provide a benchmark for assessing the lack of detection and may serve as a basis to form optimal strategies for the search for extraterrestrial intelligence. ## 1 Introduction Searching for a needle in a "cosmic haystack" is a catchy metaphor that vividly illustrates the difficulties encountered by the search for extraterrestrial intelligence (SETI) due to the vastness of the parameter space to be searched (Tarter et al., 2010; Wright et al., 2018). Hypothetical technological species might indeed manifest themselves, either intentionally or not, through electromagnetic emissions reaching our planet from unknown locations in space and with wavelength, radiated power, duration and other transmission characteristics of which we have no prior knowledge (Forgan, 2019; Lingam & Loeb, 2021). To get an idea of the vastness of the search space, Tarter et al. (2010) likened the fraction of parameter space explored during the first 50 years of SETI to 1.6 cups of water from Earth's oceans. After a decade and many other surveys, Wright et al. (2018) updated this estimate by replacing the 1.6 cups of water with a small swimming pool; still a tiny fraction of Earth's oceans. Despite over 60 years of activity, it is thus not surprising that the search for extraterrestrial intelligence, or more properly the search for remotely detectable manifestations of technology (also known as technosignatures), has so far ended up empty-handed. The strategy behind SETI's ongoing efforts, then, is to continually improve the sampled search space through increasingly comprehensive surveys, such as the Breakthrough Listen initiative (Worden et al., 2017), with the hope of eventually finding the long-sought needle in the cosmic haystack, or at least placing tighter upper limits on its existence (Enriquez et al., 2017; Price et al., 2020; Wlodarczyk-Sroka et al., 2020; Gajjar et al., 2021, 2022; Suazo et al., 2022). Although the elusiveness of extraterrestrial technosignatures might be justified by the aforementioned immense search space to be explored, it is however also consistent with the possibility that there are actually no technosignatures to be detected.
This does not necessarily mean that technological exo-civilizations or their emitting artifacts are extremely rare or nonexistent (Tipler, 1980; Ward & Brownlee, 2000), but rather that we are living in a time during which our planet is in a region of the galaxy devoid of technoemissions, even if other regions are illuminated by them. This could be, for example, the case of extraterrestrial emitters that have generated electromagnetic radiations propagating in all directions at the speed of light, but that have not yet reached our planet, or that have ceased radiating in a past sufficiently distant that their signals have already passed the Earth and continue to move away from it. If there is a non-zero emission rate, then, this scenario implies that while the signals that are moving away from our planet will remain forever invisible to us, others are heading our way and will be potentially detectable in the future when they eventually cross the Earth. Here, we investigate the consequences of assuming that the Earth has not been crossed by any technosignal at least since humanity began to actively search for them. Although sporadic searches for (radio) signals predated the first modern SETI experiment, conducted in 1960 (Drake, 1961), we take a fiducial period of 60 years of non-detection as our working hypothesis. As shown in the following, this strategy allows us to place upper limits on the rate of technoemissions and to infer probabilistic waiting times until the next crossing event, without resorting to additional hypotheses about emission longevities or other emission characteristics. ## 2 The model In what follows, we refer to an "emitter" as any extraterrestrial source of artificial electromagnetic emissions, regardless of whether the source is an actively transmitting technological civilization, a robotic transmitter, or the byproduct of some technological activity. We assume that such emitters are independently and identically distributed in the Milky Way Galaxy with probability distribution function (PDF) \(\rho_{E}(\mathbf{r})\), where \(\mathbf{r}\) is the emitter position relative to the galactic center, and that they generate technosignals independently of each other and at a constant rate per unit volume \(\Gamma\rho_{E}(\mathbf{r})\), where \(\Gamma\) is the emission birth rate in the entire Galaxy. Concerning the emissions, we shall not specify their wavelength and intensity because here we are interested in a scenario where the Earth is in a region of space that is not covered by the emissions, hereafter referred to as the void space, regardless of their electromagnetic characteristics. Let us first treat the case of isotropic technoemissions, like the infrared glow generated by hypothetical megastructures such as the Dyson spheres (Dyson, 1960), the radio or optical emissions from beacons sweeping the entire galaxy, or leaked electromagnetic radiations produced by technological activities. In principle, this list could also include remotely detectable industrial pollution in the atmosphere of exoplanets (Lin et al., 2014; Kopparapu et al., 2021), although searches of this kind have not yet been carried out. We model the region of space filled by an isotropic emission process lasting a time interval \(L\) with a spherical shell centered at \(\mathbf{r}\) and having outer radius \(ct\) and thickness \(cL\), where \(c\) is the speed of light and \(t\) is the time elapsed since the beginning of the emission process (Smith, 2009; Grimaldi and Marcy, 2018).
As mentioned above, the emissions are generated at a constant rate, so at any given time the galaxy is filled with a certain number of spherical shells with uniformly distributed outer radii. We make the further reasonable assumption that the durations of the emission processes (or, equivalently, the thicknesses of the spherical shells) are independently and identically distributed random variables with PDFs given by \(\rho_{L}(L)\). We now focus on the aforementioned scenario where none of the emissions present in the Galaxy crosses the Earth. As shown in Figure 1, we can identify two types of shells for this to happen: the incoming and the outgoing shells. The shells of the first type have an outer radius that is larger than the distance of the Earth from their point of origin, as the shell generated by emitter A in Fig. 1. Since the outer shell radii are expanding at the speed of light, the incoming shells will reach the Earth at some time in the future. The second type of shells, the outgoing shells, are such that the Earth is located within their "hole", as in the case of the shell generated by emitter B in Fig. 1. In this case, the outgoing shells are steadily moving away from our planet and have overlapped the Earth at some time in the past. To estimate the typical time interval between two crossing events, and therefore the typical time during which the Earth is located in a void space, we resort to a method similar to that used in soft matter to characterize the void or pore space in porous media or, more generally, in two-component materials (Torquato, 2002). Namely, we treat the Earth as if it were the center of a test sphere of diameter \(\delta\geq 0\) and consider the probability that none of the spherical shells overlaps the test sphere: \[P(\delta)=e^{-\eta(\delta)}, \tag{1}\] where \(\eta(\delta)\) denotes the average number of shells overlapping the test sphere. Since \(P(0)\) is the probability of the Earth being in the void space, \(P(\delta)/P(0)\) gives the expected fraction of the void space available to the test sphere, also known as the cumulative pore-size distribution function (Torquato, 2002). Now, \(P(\delta)/P(0)\) is also equivalent to the probability that the outer radius of the nearest incoming shell and the inner radius of the nearest outgoing shell are each at a distance not smaller than \(\delta/2\) from the Earth and, consequently, the time interval between successive overlaps has a probability \[P(c\tau)/P(0)=e^{\eta(0)-\eta(c\tau)} \tag{2}\] of being greater than \(\tau=\delta/c\). (Figure 1: Spherical shell model of isotropic technoemissions. The two annular regions are a two-dimensional representation of the space covered by the isotropic radiations originating from emitters A and B. The thicknesses of the annuli are proportional to the emission longevities, whereas the outer radii are proportional to the time elapsed since the beginning of the emission processes. The arrows indicate the direction of propagation at the speed of light \(c\) of the outer and inner edges of the annuli. The dashed circle represents a test sphere of radius \(\delta/2\) centered at Earth's position. The time interval between successive overlap events between the shells and the test sphere is greater than \(\tau=\delta/c\).)
We calculate \(\eta(c\tau)\) as described in Appendix A to find: \[\eta(c\tau)=\Gamma\bigg{[}\tau+\bar{L}-\frac{1}{2}\int\!d\mathbf{r}\,\rho_{E}(\mathbf{r})\theta(\tau-2|\mathbf{r}-\mathbf{r}_{o}|/c)\times(\tau-2|\mathbf{r}-\mathbf{r}_{o}|/c)\bigg{]}, \tag{3}\] where \(\theta\) is the unit step function, \(\mathbf{r}_{o}\) is the vector position of the Earth, and \(\bar{L}=\int\!dL\,\rho_{L}(L)L\) is the average longevity of the emission processes, a key factor in determining the probability of contact (Lares, Funes & Gramajo, 2020; Kipping, Frank & Scharf, 2020; Balbi & Cirkovic, 2021). A first critical result is that, since \(\eta(0)=\Gamma\bar{L}\), the average longevity cancels out in \(P(c\tau)/P(0)\). This is beneficial for the analysis that follows because \(\bar{L}\) is an utterly unknown parameter whose value has been the subject of much speculation since the early days of SETI (Shklovskii & Sagan, 1965; Gott, 1993; Wright et al., 2022). After eliminating \(\bar{L}\), two unknowns are left in \(P(c\tau)/P(0)\): the emission birth rate, \(\Gamma\), and the spatial distribution of the emitters, encoded by \(\rho_{E}(\mathbf{r})\) in Equation (3). In modeling the latter, we assume that the emitters do not occupy a special region of the galaxy and adopt for \(\rho_{E}(\mathbf{r})\) an axisymmetric PDF that reproduces the distribution of stars in the thin disk of the Milky Way. Within this model, the integral in Equation (3) turns out to be negligibly small as long as \(\tau\lesssim\tau^{*}\), where \(\tau^{*}\simeq 5.7\times 10^{4}\) yr (see Appendix A). We obtain a similar value of \(\tau^{*}\) (\(\tau^{*}\simeq 4\times 10^{4}\) yr) with \(\rho_{E}(\mathbf{r})\) reproducing the annular galactic habitable zone of Lineweaver et al. (2004). Therefore, as long as \(\tau\) is less than a few tens of thousands of years, Equation (2) reduces to: \[P(c\tau)/P(0)=e^{-\Gamma\tau}, \tag{4}\] which is nothing but the probability of the waiting time between events of a Poisson point process with rate parameter \(\Gamma\). Figure 3 of Appendix A shows that for \(\tau\lesssim\tau^{*}\) Equation (4) matches \(P(c\tau)/P(0)\) calculated numerically from Equations (2) and (3). ## 3 Results We now turn to the implications of assuming that the fruitless efforts during the \(\sim 60\)-year history of SETI are actually due to the absence of Earth-shell overlaps for at least \(\tau_{o}=60\) yr, rather than to a highly incomplete sampling of the search space. ### Emission rate We start by inferring the posterior probability distribution of \(\Gamma\) using Bayes' theorem: \[p(\Gamma|\tau_{o})=\frac{P(\tau_{o}|\Gamma)p(\Gamma)}{\int\!d\Gamma P(\tau_{o}|\Gamma)p(\Gamma)}, \tag{5}\] where \(p(\Gamma)\) is the prior PDF of \(\Gamma\) representing some initial hypothesis about the emission birth rate and \(P(\tau_{o}|\Gamma)=P(c\tau_{o})/P(0)\) is the likelihood that the time interval between overlaps is greater than \(\tau_{o}\), given \(\Gamma\). We use Equation (4) for \(P(\tau_{o}|\Gamma)\), which is justified by the small value of \(\tau_{o}\): \[p(\Gamma|\tau_{o})=\frac{e^{-\Gamma\tau_{o}}p(\Gamma)}{\int\!d\Gamma e^{-\Gamma\tau_{o}}p(\Gamma)}, \tag{6}\] which shows that values of \(\Gamma\) much greater than \(1/\tau_{o}\sim 0.02\) yr\({}^{-1}\) are strongly disfavored.
(Figure 2: Posterior probabilities from the 60-year-long absence of technosignals at Earth. a, Posterior probability of the emission rate being greater than \(\Gamma\) (solid lines) inferred from three different priors (dashed lines): optimistic (PDF uniform in \(\Gamma\)), moderately optimistic (PDF uniform in \(\sqrt{\Gamma}\)), and marginally optimistic (PDF uniform in \(\log\Gamma\)). b, Posterior probability of the next crossing event occurring not sooner than \(\Delta\tau\) for the three optimistic cases. c, Posterior probability of the Earth being in a void zone for at least 60 years as a function of the average emission longevity.) This implies that it is unlikely that far more than two shells per century are emitted from the Milky Way and that, consequently, an _a priori_ optimistic view asserting a high rate of emissions must be significantly reconsidered. To substantiate this claim more quantitatively, we adopt three different functional forms of the prior that reflect distinct shades of optimism towards the possible emission rate: a prior PDF uniform in \(\Gamma\), a prior uniform in \(\sqrt{\Gamma}\), and a prior uniform in the logarithm of \(\Gamma\). All three priors are defined in the interval \(\Gamma_{\rm min}=10^{-5}\) yr\({}^{-1}\) to \(\Gamma_{\rm max}=10^{2}\) yr\({}^{-1}\) (and 0 otherwise). The uniform in \(\Gamma\) and uniform in \(\sqrt{\Gamma}\) priors represent, respectively, an optimistic and a moderately optimistic belief about the emission birth rate, as they assert, for example, that \(\Gamma<10^{-2}\) yr\({}^{-1}\) is respectively 100 times and 10 times less likely than \(\Gamma<1\) yr\({}^{-1}\). Conversely, the log-uniform prior is in principle uninformative, as it implies almost complete ignorance of even the scale of \(\Gamma\) (Spiegel and Turner, 2012). However, the lower limit of \(\Gamma\) set at \(10^{-5}\) yr\({}^{-1}\) assumes the presence at any time of at least \(\sim 1\) spherical shell within the galaxy (Grimaldi, 2020), making even the log-uniform prior at least marginally optimistic. Figure 2(a) shows the posterior probability of the emission rate being larger than \(\Gamma\), \(P(\Gamma|\tau_{o})\), calculated by integrating Equation (6) from \(\Gamma\) to \(\Gamma_{\rm max}\). Depending on the degree of optimism transpiring from the priors, the assumption that no technoemissions have crossed the Earth during (at least) the entire history of SETI implies that \(\Gamma\) is less than about 0.05 yr\({}^{-1}\) (optimistic), 0.03 yr\({}^{-1}\) (moderately optimistic) and 0.01 yr\({}^{-1}\) (marginally optimistic) with a credible level of 95%. Overall, this translates into an upper bound of about one to five emissions per century generated throughout the galaxy, roughly corresponding to the inferred rate of supernovae in the Milky Way (Rozwadowska, Vissani and Cappellaro, 2021). These probabilistic upper bounds can be considered as least worst-case scenarios for SETI science. Indeed, while our working hypothesis assumes the total absence of crossing events for no less than 60 years, it is still much less pessimistic than claiming extreme rarity or even absence of extraterrestrial signals to explain why they have not been detected so far.
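Equation (6) and the quoted 95% upper bounds are simple enough to check numerically. The following Python sketch (ours, not code from the paper) integrates the unnormalized posterior on a grid over the stated prior support; the grid resolution is an arbitrary choice:

```python
import numpy as np

tau0 = 60.0                               # years of non-detection
gammas = np.logspace(-5, 2, 200_000)      # grid over [1e-5, 1e2] yr^-1
dg = np.gradient(gammas)

priors = {
    "uniform in Gamma":       np.ones_like(gammas),
    "uniform in sqrt(Gamma)": 1.0 / np.sqrt(gammas),
    "log-uniform":            1.0 / gammas,
}

for name, prior in priors.items():
    post = np.exp(-gammas * tau0) * prior   # unnormalized posterior, Eq. (6)
    cdf = np.cumsum(post * dg)
    cdf /= cdf[-1]
    upper95 = gammas[np.searchsorted(cdf, 0.95)]
    print(f"{name}: 95% upper bound on Gamma ~ {upper95:.3f} /yr")
# prints roughly 0.05, 0.03, and 0.01 /yr, matching the quoted bounds
```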
### Waiting time Having established that we can infer information on \(\Gamma\) directly from the 60-year-long absence of Earth-shell overlaps, we now show that this can be used to inform us about the waiting time \(\Delta\tau\) until the next overlap event. To this end, we take the conditional probability of no overlap during a time interval of at least \(\tau_{o}+\Delta\tau\) years, given that no overlap has persisted for at least \(\tau_{o}\) years: \(P(\tau_{o}+\Delta\tau|\Gamma)/P(\tau_{o}|\Gamma)=\exp(-\Gamma\Delta\tau)\). Marginalization over \(p(\Gamma|\tau_{o})\) yields: \[P(\Delta\tau|\tau_{o})=\int\!d\Gamma e^{-\Gamma\Delta\tau}p(\Gamma|\tau_{o})=\frac{\int\!d\Gamma e^{-\Gamma(\Delta\tau+\tau_{o})}p(\Gamma)}{\int\!d\Gamma e^{-\Gamma\tau_{o}}p(\Gamma)}, \tag{7}\] from which we derive that the median of \(P(\Delta\tau|\tau_{o})\) is about 60 yr (optimistic), 170 yr (moderately optimistic) and \(1,800\) yr (marginally optimistic). Even in the most optimistic case there is a decent 20% probability that the next crossing event will occur not sooner than 240 yr (Figure 2b), whereas we can be confident that in the least optimistic scenario the waiting time does not exceed about \(10^{5}\) yr (95% credible level). This is due to our choice of setting \(\Gamma_{\rm min}=10^{-5}\) yr\({}^{-1}\) for the minimum emission rate, which prevents the log-uniform prior from diverging as \(\Gamma\to 0\). Smaller values of \(\Gamma_{\rm min}\) (hence more pessimistic log-uniform priors) would result in longer waiting times than those inferred from the marginally optimistic case of Figure 2b. A word of caution is in order regarding the fallacy of interpreting \(\Delta\tau\) as the expected waiting time until a possible future detection. In fact, \(P(\Delta\tau|\tau_{o})\) gives the temporal scale associated with the non-overlap with technoemissions, regardless of whether detectors on Earth actively search for them. Because of the aforementioned vastness of the search space, perspectives on the actual detection of technosignatures, therefore, pertain to time scales that are necessarily larger than those predicted by \(P(\Delta\tau|\tau_{o})\). ### Inferred longevities So far we have assumed that no spherical shell has intersected the Earth for at least 60 years. But how likely is this scenario in light of the emission rates inferred in Sec. 3.1? To find out, we consider the probability of the test sphere not crossing any shell signal, given in Equation (1). Neglecting again the integral term in Equation (3) we obtain \(P(c\tau_{o})=\exp[-\Gamma(\tau_{o}+\bar{L})]\), where the exponential drop with \(\bar{L}\) reflects the narrowing of the voids as the signal longevity increases. Marginalization over the posterior PDF of \(\Gamma\) gives \[P(\bar{L}|\tau_{o})=\int\!d\Gamma e^{-\Gamma(\tau_{o}+\bar{L})}p(\Gamma|\tau_{o})=\frac{\int\!d\Gamma e^{-\Gamma(2\tau_{o}+\bar{L})}p(\Gamma)}{\int\!d\Gamma e^{-\Gamma\tau_{o}}p(\Gamma)}, \tag{8}\] which is plotted in Figure 2c for the three different priors considered. We found that for \(\bar{L}\lesssim 2\tau_{o}=120\) yr the non-overlap probability \(P(\bar{L}|\tau_{o})\) is over 25% (optimistic), 48% (moderately optimistic) and 80% (marginally optimistic). Interestingly, technoemissions need not be short-lived to allow for a non-overlap period of \(>60\) years, as their longevity can reach \(1,100\) and \(16,600\) yr with an appreciable 20% probability for the moderately and marginally optimistic cases, respectively (Figure 2c). However, as a consequence of assuming \(\Gamma_{\rm min}=10^{-5}\) yr\({}^{-1}\), even the least optimistic scenario rules out average longevities greater than about \(10^{5}\) yr.
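The marginalizations in Equations (7) and (8) can be verified the same way; the sketch below (again ours, not the paper's code) recovers the quoted medians by bisection on the survival probability, with the search bracket an arbitrary choice:

```python
import numpy as np

tau0 = 60.0
gammas = np.logspace(-5, 2, 200_000)
dg = np.gradient(gammas)

def survival(extra, prior):
    """P(no overlap for tau0 + extra | no overlap for tau0), Eqs. (7)-(8)."""
    num = np.sum(np.exp(-gammas * (tau0 + extra)) * prior * dg)
    den = np.sum(np.exp(-gammas * tau0) * prior * dg)
    return num / den

def median_waiting_time(prior, lo=0.0, hi=1e6):
    """Bisect for the Delta-tau at which the survival probability is 1/2."""
    while hi - lo > 1.0:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if survival(mid, prior) > 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

print(median_waiting_time(np.ones_like(gammas)))   # ~60 yr  (optimistic)
print(median_waiting_time(1.0 / gammas))           # ~1,800 yr (marginal)
# Eq. (8) uses the same survival with extra = tau0 + Lbar:
print(survival(tau0 + 16_600, 1.0 / gammas))       # ~0.2 (cf. Fig. 2c)
```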
### Anisotropic emissions Now, we elaborate on the possibility that a fraction \(q\) of technoemissions is given by more or less long-lived directional signals, such as collimated radio beams or optical and infrared laser signals (Townes, 1983; Tellis & Marcy, 2015). In this case, the total emission rate can be written as \(\Gamma=\Gamma_{\rm iso}+\Gamma_{\rm ani}\), where \(\Gamma_{\rm iso}\) and \(\Gamma_{\rm ani}\) are respectively the rates of isotropic and anisotropic technoemissions with corresponding average longevities given by \(\bar{L}_{\rm iso}\) and \(\bar{L}_{\rm ani}\), and \(q=\Gamma_{\rm ani}/\Gamma\). Since the space filled by the radiation of a directional signal is smaller than that occupied by an isotropic emission of similar longevity, we expect an increased average size of the void regions when \(q\neq 0\). To see this, we model the anisotropic emissions by narrow conical beams of angular aperture \(\alpha\ll 2\pi\) and beam axis orientations distributed uniformly over the unit sphere. As shown in Appendix B, Equation (4) still gives the probability of the time interval between overlaps being greater than \(\tau\), provided that we adopt for \(\Gamma\) the effective rate \(\Gamma^{*}=\Gamma\chi\), where \(\chi=[(1-q)+q\alpha^{2}/16]\leq 1\) accounts for the enlarged space available to the test sphere (Grimaldi, 2020). The use of the likelihood function \(P(\tau_{o}|\Gamma^{*})=\exp(-\Gamma^{*}\tau_{o})\) allows us to compute the posterior probabilities along the same lines described above for the isotropic case. As summarized in Figure 4a of Appendix B, the posterior probability of \(\Gamma\) increases for \(q>0\) (with \(\alpha\) held fixed at 2 arcmin \(\simeq 6\times 10^{-4}\) rad) for the three optimistic scenarios considered. For example, assuming that half of the emissions are generated by randomly oriented narrow beams (\(q=50\) %), the inferred total emission rate turns out to be less than 0.02-0.1 yr\({}^{-1}\) (from the least to the most optimistic scenarios with a credible level of 95%, Figure 4a of Appendix B), thus doubling the probabilistic upper bounds found for totally isotropic technoemissions. The increase in the posterior probability of \(\Gamma\), however, has virtually no effect on the posterior probabilities of \(\Delta\tau\) and \(\bar{L}\) in the optimistic and moderately optimistic scenarios (Figures 4b and 4c of Appendix B), because such an increase is almost completely compensated by the decrease of the anisotropy factor \(\chi\). The compensation becomes complete if we take \(\Gamma_{\rm min}=0\) yr\({}^{-1}\) and \(\Gamma_{\rm max}=\infty\), for which we find \(P(\Delta\tau|\tau_{o})=\tau_{o}/(\Delta\tau+\tau_{o})\) and \(P(\bar{L}|\tau_{o})=\tau_{o}/(\bar{L}+2\tau_{o})\) in the optimistic case and \(P(\Delta\tau|\tau_{o})=\sqrt{\tau_{o}/(\Delta\tau+\tau_{o})}\) and \(P(\bar{L}|\tau_{o})=\sqrt{\tau_{o}/(\bar{L}+2\tau_{o})}\) in the moderately optimistic case. On the contrary, the divergence of the log-uniform prior PDF for \(\Gamma\to 0\) makes the posteriors of \(\Delta\tau\) and \(\bar{L}\) still dependent on the anisotropy factor \(\chi\) (Figures 4b and 4c of Appendix B). ## 4 Conclusions We have presented the consequences of the hypothesis that our planet has not been crossed by extraterrestrial technological emissions for at least 60 years, corresponding to the period when SETI has been actively (albeit intermittently) searching for technosignatures.
Although the lack of detection to date can be justified by the highly incomplete sampling of the SETI search space, our working hypothesis is consistent with the available data and represents a possible worst-case scenario for SETI, which is still much less pessimistic than the hypothesis of a complete absence of technological species other than our own. Borrowing a formalism pertaining to soft matter physics and using standard Bayesian methods, we inferred upper bounds on the technoemission rate \(\Gamma\) and corresponding lower bounds on the waiting time until the next crossing event that are remarkably independent of the signal longevity. We have shown that if the lack of detection for the past 60 years happens to be due to our planet being in a region devoid of technosignals, then it follows that SETI will likely find none for the coming several decades (if not centuries or even millennia for the least optimistic case), even if it were to search "all-sky, all-the-time". We do not know whether the premise laid out in this paper is true or not, but if we were to seriously consider this possibility, we would have to rethink current search strategies and perhaps prefer commensal to dedicated searches in order to optimize the cost-benefit ratio of SETI. The author wishes to thank A. Balbi, P. De Los Rios, J. Kuennen, M. Lingam and G. W. Marcy for advice and comments on early drafts. ## Appendix A Derivation of the likelihood function Our model considers a collection of statistically independent spherical shells, each representing a region of space filled by isotropic electromagnetic radiations emitted from a random position in the galaxy, and a test sphere of diameter \(\delta\) and center at Earth's position \(\mathbf{r}_{o}\). The spherical shells can overlap with each other and with the test sphere, so that the probability that \(k\) shells overlap the test sphere follows a Poisson distribution: \(\eta(\delta)^{k}e^{-\eta(\delta)}/k!\), where \(\eta(\delta)\) is the average number of overlaps. Setting \(k=0\) yields the probability that none of the spherical shells overlap the test sphere: \(P(\delta)=e^{-\eta(\delta)}\). To calculate \(\eta(\delta)\) we consider the probability of a single shell overlapping the test sphere: \[p(\delta;t,L)=\int\!\!d{\bf r}\,\rho_{E}({\bf r})\theta(ct+\delta/2-|{\bf r}-{\bf r}_{o}|)\times\theta(|{\bf r}-{\bf r}_{o}|-ct+cL+\delta/2), \tag{A1}\] where \(\theta(x)=1\) if \(x\geq 0\) and \(\theta(x)=0\) if \(x<0\) is the unit step function, \(\rho_{E}({\bf r})\) is the probability density of an emitter being located in \({\bf r}\), \(ct\) is the outer radius of the spherical shell and \(cL\) its thickness, where \(t\geq 0\) is the elapsed time since the emission started and \(c\) is the speed of light. In the case of multiple shells that are generated with rate \(\Gamma\), the average number of overlaps is obtained by marginalizing (A1) over \(t\) and \(L\): \[\eta(\delta)=\Gamma\!\int\!\!dL\,\rho_{L}(L)\int\!\!dt\,p(\delta;t,L). \tag{A2}\] Performing the integration over \(t\) and \(L\) gives: \[\eta(\delta)=\Gamma[\delta/c+\bar{L}+K(\delta)], \tag{A3}\] where \(\bar{L}=\int\!\!dL\,\rho_{L}(L)L\) is the average longevity of the emissions and \[K(\delta)=-\frac{1}{c}\int\!d{\bf r}\,\rho_{E}({\bf r})\theta(\delta/2-|{\bf r}-{\bf r}_{o}|)(\delta/2-|{\bf r}-{\bf r}_{o}|). \tag{A4}\]
Setting \(\delta=c\tau\) in Equations (A3) and (A4) yields Equation (3) of the main text. Finally, the probability that the time between overlaps is greater than \(\tau=\delta/c\) reads: \[P(c\tau)/P(0)=e^{-\Gamma[\tau+K(c\tau)]}. \tag{A5}\] It is worth noting that this result remains valid even for emission birth rates that depend on the longevity, provided that \(\Gamma\) in Equation (A5) is replaced by \(\bar{\Gamma}=\int\!dL\,\rho_{L}(L)\Gamma(L)\). A Taylor expansion of the exponent in (A5) reveals that the second and third order terms in \(\tau\) vanish, so that as long as \(\tau\lesssim(2/c)[6/\pi\rho_{E}(\mathbf{r}_{o})]^{1/3}\equiv\tau^{*}\) Equation (A5) is well approximated by \(P(c\tau)/P(0)=\exp(-\Gamma\tau)\), whereas for \(\tau\gtrsim 2d/c\), where \(d=\int\!d\mathbf{r}\,\rho_{E}(\mathbf{r})|\mathbf{r}-\mathbf{r}_{o}|\) is the average distance of an emitter from the Earth, \(P(c\tau)/P(0)\) tends asymptotically to \(\exp(-\Gamma\tau/2)\). Figure 3 shows \(P(c\tau)/P(0)\) as a function of \(\tau\) for several values of the emission rate \(\Gamma\) calculated numerically from Equation (A5) using \(r_{o}=27\) kly and an axisymmetric distribution of the emitters that reproduces the distribution of stars in the thin disk of the Milky Way: \[\rho_{E}(\mathbf{r})=\lambda(r/r_{s})^{\beta}\exp(-r/r_{s})\exp(-|z|/z_{s}), \tag{A6}\] where \(r\) is the radial distance from the galactic center, \(z\) is the height from the galactic plane, \(\lambda\) is a normalization factor, \(\beta=0\), \(r_{s}=8.15\) kly, and \(z_{s}=0.52\) kly. For this choice of parameters we find \(\tau^{*}\simeq 5.7\times 10^{4}\) yr. The use of an emitter distribution that reproduces the main features of the annular galactic habitable zone of Lineweaver et al. (2004), obtained by setting \(\beta=7\) and \(r_{s}=3.26\) kly in Equation (A6), does not yield appreciable changes to the profile of \(P(c\tau)/P(0)\) in the scale of Figure 3 (not shown). (Figure 3: Likelihood of Earth-technosignal noncrossing time being greater than \(\tau\). Black solid lines represent the conditional probability \(P(c\tau)/P(0)\), Equation (A5), for different values of the emission birth rate \(\Gamma\) and assuming that the emitters are distributed uniformly over the thin disk of the Milky Way. For \(\tau\) up to about \(5.7\times 10^{4}\) yr, \(P(c\tau)/P(0)\) is well approximated by \(\exp(-\Gamma\tau)\) (blue dashed lines), whereas \(P(c\tau)/P(0)\simeq\exp(-\Gamma\tau/2)\) for \(\tau\gtrsim 10^{6}\) yr (red dot-dashed lines).) (Figure 4: Effects of technoemission anisotropy on the posterior probabilities. a, Posterior probability of the emission rate being greater than \(\Gamma\) for different fractions \(q\) of technoemissions given by randomly oriented narrow beams with aperture of 2 arcmin (\(\alpha\simeq 6\times 10^{-4}\) rad). For each prior considered \(q=0\), 0.25, 0.5, 0.75, and 0.95 (from left to right). b, Corresponding posterior probability of the next crossing event occurring not sooner than \(\Delta\tau\). c, Corresponding posterior probability of the Earth being in a void zone for at least 60 years as a function of the average emission longevity \(\bar{L}\).) ## Appendix B Anisotropic emissions We consider an emitter at \(\mathbf{r}\) that transmits, starting from a time \(t\) and during a time interval \(L\), a conical beam of aperture \(\alpha\) and axis oriented along the direction of the unit vector \(\mathbf{n}\). As done for the isotropic case, we take a test sphere of radius \(\delta/2\) centered at Earth and consider the probability that the beamed emission overlaps the test sphere.
For \(\mathbf{n}\) averaged uniformly over the unit sphere, this is given by: \[p(\delta;t,L,\alpha)=\Omega(\alpha)p(\delta;t,L), \tag{B7}\] where \(\Omega(\alpha)=[1-\cos(\alpha/2)]/2\) is the fractional solid angle subtended by the beam and \(p(\delta;t,L)\) is the overlap probability given in Equation (A1). Next, we denote with \(\Gamma_{\text{iso}}\) and \(\Gamma_{\text{ani}}\) the rate of isotropic and anisotropic technoemissions, respectively, so that using Equation (A2) the average total number of emissions overlapping the test sphere reduces to: \[\eta(\delta)=\Gamma_{\text{iso}}\!\int\!\!dL\,\rho_{L}^{\text{iso}}(L)\int\!dt\,p(\delta;t,L)+\Gamma_{\text{ani}}\Omega(\alpha)\!\int\!\!dL\,\rho_{L}^{\text{ani}}(L)\int\!dt\,p(\delta;t,L), \tag{B8}\] where \(\rho_{L}^{\text{iso}}(L)\) and \(\rho_{L}^{\text{ani}}(L)\) are the longevity PDFs assigned to the isotropic and anisotropic emissions, respectively. The integration over \(t\) and \(L\) yields: \[\eta(\delta)=\Gamma_{\text{iso}}\bar{L}_{\text{iso}}+\Gamma_{\text{ani}}\Omega(\alpha)\bar{L}_{\text{ani}}+[\Gamma_{\text{iso}}+\Omega(\alpha)\Gamma_{\text{ani}}][\delta/c+K(\delta)], \tag{B9}\] where \(\bar{L}_{i}=\int\!dL\,\rho_{L}^{i}(L)L\) (\(i=\text{iso, ani}\)) and \(K(\delta)\) is defined in Equation (A4). Finally, setting \(\Gamma=\Gamma_{\text{iso}}+\Gamma_{\text{ani}}\), \(q=\Gamma_{\text{ani}}/\Gamma\), and \(\tau=\delta/c\), we obtain: \[P(c\tau)/P(0)=e^{-\Gamma^{*}[\tau+K(c\tau)]}, \tag{B10}\] where \(\Gamma^{*}=\Gamma[(1-q)+q\,\Omega(\alpha)]\). It is worth noting that \(\bar{L}_{\text{iso}}\) and \(\bar{L}_{\text{ani}}\) do not enter Eq. (B10), making the irrelevance of the emission longevity for \(P(c\tau)/P(0)\) a general result. Equation (B10) shows that the contribution of the emission anisotropy is absorbed in the effective rate \(\Gamma^{*}\), so that the Bayesian analysis can be performed as in the fully isotropic case by setting \(\Gamma\to\Gamma^{*}\) in the likelihood function. Figure 4 shows the posterior probabilities considered in Figure 2 calculated for different ratios \(q\) of the anisotropic emission rate to the total rate and for a beam aperture of 2 arcmin (for simplicity, we have set \(\bar{L}_{\text{iso}}=\bar{L}_{\text{ani}}=\bar{L}\) in the calculation of \(P(\bar{L}|\tau_{o})\) shown in Figure 4c).
2309.00028
Vision-Based Cranberry Crop Ripening Assessment
Agricultural domains are being transformed by recent advances in AI and computer vision that support quantitative visual evaluation. Using drone imaging, we develop a framework for characterizing the ripening process of cranberry crops. Our method consists of drone-based time-series collection over a cranberry growing season, photometric calibration for albedo recovery from pixels, and berry segmentation with semi-supervised deep learning networks using point-click annotations. By extracting time-series berry albedo measurements, we evaluate four different varieties of cranberries and provide a quantification of their ripening rates. Such quantification has practical implications for 1) assessing real-time overheating risks for cranberry bogs; 2) large scale comparisons of progeny in crop breeding; 3) detecting disease by looking for ripening pattern outliers. This work is the first of its kind in quantitative evaluation of ripening using computer vision methods and has impact beyond cranberry crops including wine grapes, olives, blueberries, and maize.
Faith Johnson, Jack Lowry, Kristin Dana, Peter Oudemans
2023-08-31T14:58:11Z
http://arxiv.org/abs/2309.00028v1
# Vision-Based Cranberry Crop Ripening Assessment

###### Abstract

Agricultural domains are being transformed by recent advances in AI and computer vision that support quantitative visual evaluation. Using drone imaging, we develop a framework for characterizing the ripening process of cranberry crops. Our method consists of drone-based time-series collection over a cranberry growing season, photometric calibration for albedo recovery from pixels, and berry segmentation with semi-supervised deep learning networks using point-click annotations. By extracting time-series berry albedo measurements, we evaluate four different varieties of cranberries and provide a quantification of their ripening rates. Such quantification has practical implications for 1) assessing real-time overheating risks for cranberry bogs; 2) large scale comparisons of progeny in crop breeding; 3) detecting disease by looking for ripening pattern outliers. This work is the first of its kind in quantitative evaluation of ripening using computer vision methods and has impact beyond cranberry crops including wine grapes, olives, blueberries, and maize.

Keywords: high throughput phenotyping, albedo analysis, semantic segmentation, crop yield estimation, counting methods

## 1 Introduction

Machine learning and computer vision methods play an increasingly vital role in facilitating agricultural advancement by giving real-time, actionable crop feedback Luo et al. (2023); Yin et al. (2022); Meshram et al. (2021). These methods are enabling farming practices to adapt and evolve to keep up with changing conditions. Cranberry farmers are particularly poised to benefit from vision-based crop monitoring as they face numerous challenges related to fruit quality such as fruit rot and overheating Oudemans et al. (1998); Polashock et al. (2009); Vorsa and Johnson-Cicalese (2012); Vorsa and Zalapa (2019). As cranberries ripen and turn red, they become much more susceptible to overheating, partially because they lose their capacity for evaporative cooling Kerry et al. (2017); Rackso and Schrader (2012); Smart and Sinclair (1976). When this growth stage is reached, the cranberries exposed to direct sunlight can overheat and become unusable. We develop a vision-based method for measuring in-field cranberry albedo to quantify ripening in order to predict when cranberries are nearing this vulnerable stage. Currently, cranberry growers quantify this ripening process through out-of-field albedo evaluation, imaging harvested cranberries over time Serras (2022). This approach is cumbersome and time-consuming, limiting its utility in larger-scale evaluations. For practical application, only small numbers of berries can be harvested for out-of-field images. Furthermore, since berries can overheat in a short period of time Pelletier et al. (2016); Kerry et al. (2017), irrigation decisions should be coordinated with in-field albedo characterization, informing the grower of the number of vulnerable berries and enabling expedient decision making. The conventional solution to overheating is increased crop irrigation during the growing season. Inadequate or poorly timed irrigation can lead to overheating, whereas excessive irrigation encourages fungal fruit rot to develop Oudemans et al. (1998). Irrigation decisions must also consider cost and efficient use of environmental resources. Berries on the top of the canopy with direct sun exposure have a high risk of overheating, while berries underneath the leafy canopy are generally well protected.
Therefore, assessing the current albedo of the visible berries is directly relevant to irrigation decisions. Ripening rates vary among cranberry varieties, and ones that ripen early are at the greatest risk. In-field measurement of the ripening rate for a particular cranberry bed significantly informs crop management decisions. Our ripening assessment framework uses cranberry image segmentation to evaluate albedo variation over time to compare cranberry varieties. In recent prior work Akiva et al. (2022), neural networks for segmentation have been used for yield estimation through counting. In our work, we use these segmentation networks to isolate individual berries over time to find temporal patterns in cranberry albedo across varieties. By combining counting and albedo analysis, it becomes possible to evaluate the economic risk to a particular crop on a high-temperature day, while also making a long-term assessment of which varieties provide the most yield. The cranberries are segmented using a semi-supervised method Akiva et al. (2022) based on the Triple-S network Akiva et al. (2020). Using point-wise annotations instead of pixel-wise labels significantly reduces the labeling cost for the densely populated cranberry images. We provide new point-click labeled cranberry imagery that supports ripening assessments in CRAID-2022, a new dataset covering a larger time period of the growing season with more temporal frequency than prior work. We train a segmentation network to isolate cranberry pixels from drone imagery in both this new dataset and the preexisting CRAID-2019 dataset Akiva et al. (2020). We present a ripening comparison of four cranberry varieties over a two-month span and show clear timelines of albedo change indicating when each variety becomes at high risk of overheating.

### Impact for Crop Breeding

Screening for the heritability of novel genotypes requires high-throughput phenotyping (HTP) methods to discover desirable genetic traits (Araus and Cairns, 2014; Diaz-Garcia et al., 2018). In crop breeding, there may be hundreds to thousands of progeny/offspring to evaluate, and high-throughput methods make this evaluation practical. Computer vision algorithms for segmentation and calibrated albedo measurements enable quantitative comparisons. The methodology we present in this paper fits those requirements well, and HTP is an application domain for this work. The rate of color development is a crop trait that can affect the quality of cranberries at harvest. For consumer appeal, the timing and uniformity of ripening are critical, i.e., asynchronous ripening is a problem. For breeding, uniformity is desirable, so HTP is used to look at multiple genotypes. For example, our related current work (unpublished) evaluates 300-400 genotypes planted in small plots (e.g., 3.3 sq. m) where a ripening evaluation is done out-of-field and only a few times (1-2) per season depending on time and labor. To illustrate the scale of these studies, consider that they include 0.5 acre blocks with 350 individual small plots and 2 hectare (ha) blocks with 7000 individual plots (see drone image shown in Figure 2).

Figure 1: Cranberry bog at the measurement site. (Left) Cranberry harvesting. (Right) Drone at bog for in-field cranberry measurements during the growing season.

## 2 Related Work

### Precision Agriculture

Precision agriculture is revolutionizing farming and challenging traditional methods.
Future farms will integrate multiple technical advances such as soil sensors Abdollahi et al. (2021), plant wearables Yin et al. (2021), drone aerodynamics Radoglou-Grammatikis et al. (2020), and remote sensing Sishodia et al. (2020). Advances in machine learning and AI have been particularly impactful, enabling significant breakthroughs in agricultural applications in recent years (Wang et al., 2022; Benos et al., 2021; Sharma et al., 2020; Navridou et al., 2019). Computer vision is capable of giving real-time, high-fidelity feedback to farmers about yield estimation (Van Klompenburg et al., 2020; Darwin et al., 2021; He et al., 2022; Palacios et al., 2023), phenotype identification (Li et al., 2020; Kolhar and Jagtap, 2023; Liu et al., 2022), and crop health assessment (Dhaka et al., 2021; Ahmad et al., 2022; Kattenborn et al., 2021) while also being useful for larger-scale applications like farm automation Friha et al. (2021).

Figure 2: An example of breeding plots (drone view) that are typically evaluated manually. Planting design permits approx 3500 plots/ha, and this entire block is approximately 2 ha. Convenient quantitative evaluation can be supported by our vision-based ripening assessment framework. _Location removed for blind review._

### Weakly Supervised Semantic Segmentation

Semantic segmentation is an extremely useful tool for applications that require object localization or counting (Jia et al., 2022; Ilyas et al., 2021; Afonso et al., 2020), like crop yield estimation Maheswari et al. (2021). However, obtaining enough pixel-wise labels to effectively train a network on images with high label densities, as is the case in the cranberry domain, is prohibitively expensive. To combat this, some work uses image-level labels (Araslanov and Roth, 2020; Cermelli et al., 2022) to guide the segmentation, but performance significantly improves with instance-level information Zhou et al. (2018). A middle ground between these two approaches is point-wise supervision (Cheng et al., 2022; Liu et al., 2021; Song et al., 2021), which provides point-level localization for each class instance. For the specific domain of cranberry cultivation, Akiva et al. (2022, 2021, 2020) create a weakly supervised pipeline for semantic segmentation and counting of cranberries from aerial drone images. This method utilizes point-wise labels to supervise the network, obviating the need for costly pixel-wise annotations in densely populated scenes. While their work focuses on yield estimation and crop risk assessment, our work utilizes a modification of their pipeline for albedo characterization in order to analyze and compare cranberry albedo over time and across different varieties.

### Albedo Characterization over Time

As cranberries ripen, their risk of spoilage increases due to overheating Pelletier et al. (2016) caused by a decrease in evapotranspiration. This ripening corresponds with visual changes in the berry albedo, which allows us to use albedo characterization over time to predict when a cranberry bog is most at risk. This same phenomenon is also found in apples Rackso and Schrader (2012) and grapes Smart and Sinclair (1976). Ripening patterns can also indicate the presence of viruses, as occurs in wine grapes (Alabi et al., 2016; Blanco-Ulate et al., 2017). Despite the importance of quantifying color development, automated methods for albedo characterization have received limited attention in the literature.
Most existing studies of ripening do out-of-field measurements that rely on harvested berries for evaluating ripening (Vorsa et al., 2017; Keller, 2010). These methods are time-consuming and do not scale to large evaluations or real-time assessments. The framework of this paper is an important step for using advanced computer vision algorithms (semi-supervised segmentation with low-cost annotation, deep learning networks) as a tool in agriculture.

\begin{table} \begin{tabular}{|c|c|c|} \hline Dataset & CRAID-2022 & CRAID-2019 \\ \hline Total Number of Images & 7,198 & 21,436 \\ Number of Labelled Images & 220 & 2368 \\ Date Range of Collection & 07/27/22 - 09/09/22 & 08/20/19 - 10/20/19 \\ Collection Frequency (times per week) & 1 & 3 \\ Sensors & RGB & RGB \\ Image Size \((\mathrm{px}^{2})\) & 456 x 608 & 456 x 608 \\ \hline \end{tabular} \end{table}

Table 1: Details of the CRAID-2022 dataset. Images were collected via drone with an RGB sensor of multiple cranberry bogs over the late July - September growing season. In total, there were four varieties (Mullica Queen, Stevens, Crimson Queen, Haines) imaged over six dates roughly a week apart. We compare collection details between CRAID-2022 and CRAID-2019 for convenience.

## 3 Datasets

For data collection, we introduce CRAID-2022, obtained from our bog monitoring system at PE Marucci Center for Blueberry and Cranberry Research, a substation of the Rutgers New Jersey Agricultural Experiment Station (Chatsworth, NJ). To the best of our knowledge, this is the first dataset and methodology that support end-to-end berry health assessment from sky and ground data. This framework is a pioneering step in the area of vision/AI for precision agriculture and especially a precision-based method for rapid, short-term decision making that can assist growers in implementing irrigation practices in response to complex risk factors. We combine the CRAID-2019 dataset Akiva et al. (2020) with drone-based cranberry bog images from the 2022 growing season. In this paper, we will use the term CRAID\(+\) dataset to refer to the combination of CRAID-2019 and CRAID-2022. In total, this dataset contains four different cranberry varieties over seven bogs. The images include three beds of Mullica Queen, one bed of Stevens, two beds of Haines, and one bed of Crimson Queen cranberries. The new images were taken by drone in weekly increments between the months of July and September. We calibrate each drone image and crop each into 72 non-overlapping \(456\times 608\) sub-images used as training images (a sketch of this tiling is given at the end of this section).

Figure 3: Drone images from multi-temporal drone-scout imaging. Weekly inspection of multiple cranberry bogs over the late July/September growing season for four varieties (Mullica Queen, Stevens, Crimson Queen, Haines). (Left to Right) Imaging Dates for 2022: 7/27, 8/2, 8/16, 8/25, 8/31, 9/9.

A selection of 220 crops representative of the diverse berry appearances in the entire growing season was manually labelled with point-wise annotations for all berries in the image. This data was combined with the labeled dataset of 2368 images from Akiva et al. (2020) to create a new training dataset comprising 2588 total images. We train on this combined dataset of 2588 images (CRAID\(+\)) as well as the smaller dataset of 220 images (CRAID-2022 only) and compare the pipeline performance between the two.
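The tiling mentioned above can be sketched as follows. The \(3648\times 5472\) px frame size is our assumption (a common 20 MP drone resolution), chosen because it tiles exactly into \(8\times 9=72\) non-overlapping \(456\times 608\) crops; the paper does not state the original frame size.

```python
import numpy as np

def tile_image(frame, tile_h=456, tile_w=608):
    """Split a drone frame into non-overlapping tile_h x tile_w crops.
    An assumed 3648 x 5472 frame yields exactly 8 x 9 = 72 crops."""
    h, w = frame.shape[:2]
    crops = []
    for top in range(0, h - tile_h + 1, tile_h):
        for left in range(0, w - tile_w + 1, tile_w):
            crops.append(frame[top:top + tile_h, left:left + tile_w])
    return crops

frame = np.zeros((3648, 5472, 3), dtype=np.uint8)  # placeholder frame
assert len(tile_image(frame)) == 72
```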
## 4 Methods

### Photometric Calibration

The images in CRAID-2022 from the 2022 growing season were first photometrically calibrated using the Macbeth Color Checker card and a well-established approach of estimating the optimal radiometric correction using measurements of the card under uniform illumination (Kim and Pollefeys, 2008; Debevec and Malik, 2008; mit, 1999). Radiometric or photometric calibration is needed to correct for the effects of the changing camera parameters between imaging sessions. Additionally, the change in sun angle will affect the appearance. Since our goal is an invariant albedo measurement, raw pixels are insufficient because they depend on camera parameters and environment conditions. Reference images of the card were taken from every bog for each day of data collection using the drone camera. For each reference image, we extracted intensity values for the six grayscale squares on the Macbeth Color Checker. The measured values were used to find a linear transformation to recover the radiometric correction parameters, and the images were corrected accordingly.

### Segmentation Methods

We use weakly-supervised semantic segmentation Akiva et al. (2022) to isolate/segment cranberries in aerial images. An image \(X\) is fed into an autoencoder, and the output of the decoder is fed into three different branches. The first branch computes segmentation loss on the decoder output using point-wise annotations and a pseudo-mask generated from the features to push the network to localize cranberry instances. The second computes convexity loss on the predicted cranberry instances to make the predicted blobs round in shape. The final branch computes split loss to push the network toward predicting separable cranberry instances. Training semantic segmentation for this task using the traditional pixel-wise labels would require extensive labeling of images with high label densities. Using point-wise annotations (point-clicks instead of full segmentation ground truth) significantly lessens the labeling cost while still allowing for the accurate localization of the cranberries.

Figure 4: Images for photometric calibration. The drone imaging protocol includes images of the Macbeth card over multiple days. Although the same camera is used for imaging, camera parameters change. Notice slight variations between the cards from different days, which we remove through photometric calibration.

Our investigation of this segmentation architecture for the task at hand led to several questions: Would the method work without training on new data since the cranberry crops were similar in previous and new datasets? Or, could the same architecture and losses be used with a new training set? Our empirical investigation led to the result that the same architecture could be re-used but new training was necessary. However, for the task of identifying cranberry positions, significantly less training data (10x less) was needed (see Section 5).

### Albedo

The industry standard for ripeness defines five classes of cranberries based on albedo Serras (2022). These distinct stages of ripeness are hand-defined by field experts from the periodic collections of the cranberries throughout the season Serras (2022). We use a similar approach for classifying albedo change in the CRAID data, but opt for using images of the cranberries in the bog instead.
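As a rough illustration of the per-berry labeling detailed in the next paragraph (nearest color cluster per pixel, then a majority vote over the berry's pixels), consider the following sketch. The five cluster centers here are hypothetical stand-ins for the ones actually derived from the season's data, and the function name is ours.

```python
import numpy as np

# Hypothetical RGB centers for the five ripeness classes (green -> red);
# in the described pipeline these come from k-means over sampled pixels.
CLASS_CENTERS = np.array([
    [ 60, 140,  60],   # class 1: green
    [150, 150,  70],   # class 2: yellow-green
    [180, 110,  80],   # class 3: blush
    [170,  60,  60],   # class 4: red
    [120,  20,  30],   # class 5: deep red
], dtype=float)

def berry_class(berry_pixels):
    """Label a berry by the majority nearest-cluster vote of its pixels.
    berry_pixels: (N, 3) array of RGB values inside one berry mask."""
    d = np.linalg.norm(berry_pixels[:, None, :] - CLASS_CENTERS[None], axis=2)
    votes = np.argmin(d, axis=1)           # nearest class for each pixel
    return int(np.bincount(votes, minlength=5).argmax()) + 1  # 1-based
```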
Each class is defined by t-SNE clustering of the RGB pixel values of a collection of randomly sampled cranberry detections drawn from all images of the 2022 growing season. We cluster points in this embedding space with k-means, then map those clusters to the 5 "common classes", spanning from green to red, that best align with the industry standard. Once the classes have been determined, we match each berry to its corresponding color class by assigning each pixel belonging to a single berry to its closest color cluster. The cluster belonging to the majority of the pixel values is chosen as the label for the berry. We repeat this process for each cranberry and count the number of detections in each image. The change in class density is plotted, as in Figure 7, and clearly shows patterns in the cranberry albedo. From the progression of these plots, we also pinpoint when each cranberry variety becomes most at risk of overheating.

Figure 5: 3D RGB plots of the berry colors in a patch around a berry. The color plot illustrates the color variety of a single berry including lighter colors near the specularity. This plot depicts a patch around a cranberry and includes background leaf pixels. Segmentation of berries is used to isolate berry pixels, ideally excluding the contribution of leaf pixels. The observed deep red color of the ripe cranberry indicates a high overheating risk.

## 5 Results

### Cranberry Segmentation

The cranberry segmentation network has a mean intersection over union (mIOU) of 62.54% and a mean absolute error (MAE) of 13.46 as reported in Akiva et al. (2021). We show the results of the cranberry segmentation from the model trained on CRAID\(+\) in Figure 6 (top row) alongside the results from the model trained on only the 2022 growing season data (CRAID-2022) in Figure 6 (bottom row). Training on the larger CRAID\(+\) dataset produces smaller predicted cranberry blobs than training on only the 2022 season data in CRAID-2022. This may be due to a scale mismatch between the datasets. The CRAID-2019 dataset contained images of cranberries taken by drone from a higher elevation than in the CRAID-2022 data. Another deficiency of the model trained on the CRAID\(+\) data is that it misses detections of greener berries.

Figure 6: Example predicted cranberry segmentation maps overlaid on the original input images. (Best viewed zoomed.) The top row was predicted from the model trained on CRAID\(+\). The bottom row was predicted by the model trained solely on the 2022 growing season data, CRAID-2022. The first model (first row) predicts significantly smaller cranberry blobs than the second model (second row). Additionally, the model trained on CRAID\(+\) struggles to segment greener berries due to a lack of early images with green berries in the CRAID-2019 dataset that forms the majority of the CRAID\(+\) dataset.

Because the original CRAID-2019 dataset contains mostly red berries, it is unable to make the color-invariant predictions
Figure 5 shows the isolation of a cranberry and the distribution of the RGB values belonging to it. This berry is primarily red, so it would be mapped to class 5, which contains the reddest, most ripe berries. Once we compute the classes of all the berries, we plot the percentage of berries in each class for a particular collection date and bog in Figure 7. The top three rows are the Mullica Queen variety. The next row is the Stevens variety followed by the Crimson Queen variety. The final two rows are the Haines variety. Each column of graphs is made from berries imaged on a specific day. From left to right, the columns were imaged on 8/2, 8/16, 8/25, 8/31, 9/9, and 9/14 in 2022. As the berries redden, the cranberry bog enters the high risk category and the the ripeness ratio, as shown in Table 2, can be used to determine a ripeness threshold (e.g. approximately 0.6) as an indicator. This ripeness ratio is measured as the percentage of red berries (class 4 and 5) on a collection date divided by the percentage of red berries on the final collection date. The Mullica Queen variety has a relatively low risk of overheating based on its albedo class distribution for the first four collection dates. On the fifth collection date of 9/9, the number of red berries significantly increases, indicating that the berries' overheating risk is now high. This pattern is observed with slight variations over all three Mullica Queen cranberry beds. The Stevens variety has a majority of green berries for a significant portion of the collection period. However, by 9/9 it begins to cross over into the category for a high risk of overheating. The Crimson Queen variety crosses into the high risk category by 8/25. (Green albedo values in the late-season graphs are an artifact due to mis-classification of some leaf pixels as berries.) The Haines variety crosses into the high risk category on 8/25 (in the sixth row) or on 8/31 (in the seventh row). From Figure 7, we see that the Haines variety ripens the fastest. The next fastest ripening cranberry variety is Crimson Queen, followed by Mullica Queen. The Stevens variety is the slowest to ripen. These dates indicate a rough timeline indicating when cranberry farmers will need to monitor their crop more \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{Ripeness Ratio} \\ \hline Bog & 8/2 & 8/16 & 8/25 & 8/31 & 9/9 & 9/14 \\ \hline \hline A5 & 0.007 & 0.082 & 0.331 & 0.497 & 0.902 & 1 \\ I15 & 0.001 & 0.108 & 0.167 & 0.409 & 0.874 & 1 \\ J12 & 0.002 & 0.088 & 0.419 & 0.609 & 0.968 & 1 \\ K4 & 0.012 & 0.151 & 0.339 & 0.433 & 0.872 & 1 \\ A4 & 0.127 & 0.453 & 0.926 & 1.118 & 0.808 & 1 \\ B7 & 0.035 & 0.217 & 0.622 & 0.798 & 1.119 & 1 \\ I3 & 0.010 & 0.079 & 0.347 & 0.678 & 1.121 & 1 \\ \hline \end{tabular} \end{table} Table 2: Ripeness ratio for each bog of cranberries over time. We define the ripeness ratio for a bog of cranberries to be the percentage of red berries at the current time over the percentage of red berries on the final collection date. Bog key indicates the following cranberry types: A5 Mullica Queen, I5 Mullica Queen, J12 Mullica Queen, K4 Stevens, A4 Crimson Queen, B7 Haines, I3 Haines. closely. These dates also serve as markers for when to focus more heavily on crop irrigation to mitigate overheating concerns. ## 6 Conclusion Computer vision segmentation of photometrically calibrates images provides a real-time tool to assess crop health, allow for expedient intervention, and compare crop varieties. 
We show the effectiveness of semantic segmentation for albedo characterization over time in cranberry crops. Weakly supervised semantic segmentation enables a convenient, effective localization of cranberries without the need for expensive pixel-wise labeling. We collect a time-series of weekly images from seven cranberry bogs over the course of two months using drones to create a labeled cranberry dataset with images throughout the entire cranberry growing cycle. This framework characterizes color development over time for cranberries and provides key insight into berry overheating risk and crop health. We create a timeline of albedo change that gives farmers the tools to make more informed irrigation choices to prevent crop rot and conserve resources. The resulting temporal signatures give important predictive power to the growers, enabling informed choices among cranberry crop varieties and guiding best agricultural practices. The methodology can be automated for large-scale crop evaluation to support new methods of high-throughput phenotyping.

Figure 7: Plots comparing albedo over time for four cranberry varieties. Histograms of pixels in the five main color classes are shown. The four varieties are: Mullica Queen (top three rows), Stevens, Crimson Queen, and Haines (bottom two rows). Residual green pixels at the later dates are artifacts due to some misclassifications of background leaf pixels.

## 7 Acknowledgments

This project was sponsored by the USDA NIFA AFRI, United States Award Number: 2019-67022-29922 and the SOCRATES NSF NRT #2021628. We thank Michael King who supervised drone imagery collection for CRAID 2022.
2309.16892
Kernels for the Disjoint Paths Problem on Subclasses of Chordal Graphs
Given an undirected graph $G$ and a multiset of $k$ terminal pairs $\mathcal{X}$, the Vertex-Disjoint Paths (VDP) and Edge-Disjoint Paths (EDP) problems ask whether $G$ has $k$ pairwise internally vertex-disjoint paths and $k$ pairwise edge-disjoint paths, respectively, connecting every terminal pair in $\mathcal{X}$. In this paper, we study the kernelization complexity of VDP and EDP on subclasses of chordal graphs. For VDP, we design a $4k$ vertex kernel on split graphs and an $\mathcal{O}(k^2)$ vertex kernel on well-partitioned chordal graphs. We also show that the problem becomes polynomial-time solvable on threshold graphs. For EDP, we first prove that the problem is $\mathsf{NP}$-complete on complete graphs. Then, we design an $\mathcal{O}(k^{2.75})$ vertex kernel for EDP on split graphs, and improve it to a $7k+1$ vertex kernel on threshold graphs. Lastly, we provide an $\mathcal{O}(k^2)$ vertex kernel for EDP on block graphs and a $2k+1$ vertex kernel for clique paths. Our contributions improve upon several results in the literature, as well as resolve an open question by Heggernes et al. [Theory Comput. Syst., 2015].
Juhi Chaudhary, Harmender Gahlawat, Michal Włodarczyk, Meirav Zehavi
2023-09-28T23:26:48Z
http://arxiv.org/abs/2309.16892v1
# Kernels for the Disjoint Paths Problem on Subclasses of Chordal Graphs

###### Abstract

Given an undirected graph \(G\) and a multiset of \(k\) terminal pairs \(\mathcal{X}\), the Vertex-Disjoint Paths (VDP) and Edge-Disjoint Paths (EDP) problems ask whether \(G\) contains \(k\) pairwise internally vertex-disjoint paths and \(k\) pairwise edge-disjoint paths, respectively, connecting every terminal pair in \(\mathcal{X}\). In this paper, we study the kernelization complexity of VDP and EDP on subclasses of chordal graphs. For VDP, we design a \(4k\) vertex kernel on split graphs and an \(\mathcal{O}(k^{2})\) vertex kernel on well-partitioned chordal graphs. We also show that the problem becomes polynomial-time solvable on threshold graphs. For EDP, we first prove that the problem is NP-complete on complete graphs. Then, we design an \(\mathcal{O}(k^{2.75})\) vertex kernel for EDP on split graphs, and improve it to a \(7k+1\) vertex kernel on threshold graphs. Lastly, we provide an \(\mathcal{O}(k^{2})\) vertex kernel for EDP on block graphs and a \(2k+1\) vertex kernel for clique paths. Our contributions improve upon several results in the literature, as well as resolve an open question by Heggernes et al. [Theory Comput. Syst., 2015].

## 1 Introduction

The Vertex-Disjoint Paths (VDP) and Edge-Disjoint Paths (EDP) problems are fundamental routing problems, having applications in VLSI design and virtual circuit routing [19, 39, 44, 45]. Notably, they have been a cornerstone of the groundbreaking Graph Minors project of Robertson and Seymour [42], and several important techniques, including the _irrelevant vertex technique_, originated in the process of solving disjoint paths [42]. In VDP (respectively, EDP), the input is an undirected graph \(G\) and a multiset of terminal pairs \(\mathcal{X}=\{(s_{1},t_{1}),\ldots,(s_{k},t_{k})\}\), and the goal is to find \(k\) pairwise internally vertex-disjoint (respectively, edge-disjoint) paths \(P_{1},\ldots,P_{k}\) such that \(P_{i}\) is a path with endpoints \(s_{i}\) and \(t_{i}\). Both VDP and EDP are extensively studied in the literature, and have been at the center of numerous results in algorithmic graph theory [5, 14, 17, 21, 31, 34, 36, 48]. Karp [30] proved that VDP is NP-complete (attributing the result to Knuth), and a year later, Even, Itai, and Shamir [15] proved the same for EDP. When \(k\) is _fixed_ (i.e., treated as a constant), Robertson and Seymour [37, 42] gave an \(\mathcal{O}(|V(G)|^{3})\) time algorithm as a part of their famous Graph Minors project. This algorithm is a _fixed-parameter tractable_ (\(\mathsf{FPT}\)) algorithm parameterized by \(k\). Later, the exponent \(3\) was reduced to \(2\) by Kawarabayashi, Kobayashi, and Reed [32]. In Parameterized Complexity, each problem instance is associated with an integer parameter \(k\). We study both VDP and EDP through the lens of _kernelization_ under the parameterization by \(k\). A _kernelization algorithm_ is a polynomial-time algorithm that takes as input an instance \((I,k)\) of a problem and outputs an _equivalent instance_ \((I^{\prime},k^{\prime})\) of the same problem such that the size of \((I^{\prime},k^{\prime})\) is bounded by some computable function \(f(k)\). The problem is said to admit an \(f(k)\) sized kernel, and if \(f(k)\) is polynomial, then the problem is said to admit a polynomial kernel. It is well known that a problem is \(\mathsf{FPT}\) if and only if it admits a kernel.
Due to the profound impact of preprocessing, kernelization has been termed "_the lost continent of polynomial time_" [16]. For more details on kernelization, we refer to books [11, 18]. Bodlaender et al. [4] proved that, unless \(\mathsf{NP}\subseteq\mathsf{coNP/poly}\), VDP does not admit a polynomial kernel (on general graphs). On the positive side, Heggernes et al. [26] extended this study to show that VDP and EDP admit polynomial kernels on split graphs with \(\mathcal{O}(k^{2})\) and \(\mathcal{O}(k^{3})\) vertices, respectively. Yang et al. [47] further showed that a restricted version of VDP, where each vertex can appear in at most one terminal pair, admits a \(4k\) vertex kernel. Recently, Ahn et al. [2] introduced so-called _well-partitioned chordal graphs_ (being a generalization of split graphs), and showed that VDP on these graphs admits an \(\mathcal{O}(k^{3})\) vertex kernel. In this paper, we extend the study of kernelization of EDP and VDP on these (and other) subclasses of chordal graphs. ### Our Contribution An overview of our results is given in Table 1. We begin by discussing the results about EDP. First, we observe that the problem remains \(\mathsf{NP}\)-hard even on inputs with a trivial graph structure given by a clique, unlike VDP. This extends the known hardness results for split graphs [26] and graphs of cliquewidth at most \(6\)[24]. Every graph class treated in this paper includes cliques, so EDP is \(\mathsf{NP}\)-hard on each of them. This motivates the study of kernelization algorithms. From now on, we always use \(k\) to denote the number of occurrences of terminal pairs in an instance. We present an \(\mathcal{O}(k^{2.75})\) vertex kernel for EDP on split graphs, improving upon the \(\mathcal{O}(k^{3})\) vertex kernel given by Heggernes et al. [26]. Our main technical contribution is a lemma stating that the length of each path in a minimum-size solution is bounded by \(\mathcal{O}(\sqrt{k})\). This allows us to obtain the following. **Theorem 2**.: EDP _on split graphs admits a kernel with at most \(\mathcal{O}(k^{2.75})\) vertices._ \begin{table} \begin{tabular}{|l|l|l|} \hline **Graph Class** & \(\mathsf{VDP}\) & \(\mathsf{EDP}\) \\ **Well-partitioned Chordal** & \(\mathcal{O}(k^{2})\) vertex kernel [Theorem 7] & \(\mathsf{OPEN}\) \\ **Split** & \(4k\) vertex kernel [Theorem 6] & \(\mathcal{O}(k^{2.75})\) vertex kernel [Theorem 2] \\ **Threshold** & \(\mathbb{P}\) [Theorem 9] & \(7k+1\) vertex kernel [Theorem 3] \\ **Block** & \(\mathbb{P}\) [Observation 8] & \(4k^{2}-2k\) vertex kernel [Theorem 4] \\ **Clique Path** & \(\mathbb{P}\) [Observation 8] & \(2k+1\) vertex kernel [Theorem 5] \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the kernelization results of VDP and EDP parameterized by the number of occurrences of terminal pairs (\(k\)) on the subclasses of chordal graphs studied in this paper. In the quest to achieve better bounds, we consider a subclass of split graphs. Specifically, we prove that EDP on threshold graphs admits a kernel with at most \(7k+1\) vertices. Here, we exploit the known vertex ordering of threshold graphs that exhibits an inclusion relation concerning the neighborhoods of the vertices. EDP on threshold graphs admits a kernel with at most \(7k+1\) vertices. Another important subclass of chordal graphs is the class of block graphs. For this case, we present a kernel with at most \(4k^{2}-2k\) vertices. 
Our kernelization algorithm constructs an equivalent instance where the number of blocks can be at most \(4k-2\), and each block contains at most \(k\) vertices. Thus, we have the following theorem. EDP on block graphs admits a kernel with at most \(4k^{2}-2k\) vertices. Whenever a block has more than two cut vertices, decreasing the size of that block below \(\mathcal{O}(k)\) becomes trickier. However, if we restrict our block graph to have at most two cut vertices per block--i.e., if we deal with _clique paths_--then this can be done. The key point in designing our linear kernel for clique paths is that, in the reduced instance, for each block \(B\), the number of vertices in \(B\) is dictated by a linear function of the number of terminal pairs having at least one terminal vertex in \(B\). So, we obtain a \(2k+1\) vertex kernel for this class. EDP on clique paths admits a kernel with at most \(2k+1\) vertices. Now, we switch our attention to kernelization algorithms for VDP. First, we give a \(4k\) vertex kernel for VDP on split graphs. This resolves an open question by Heggernes et al. [26], who asked whether this problem admits a linear vertex kernel. For this purpose, we use the result by Yang et al. [47], who gave a \(4k\) vertex kernel for a restricted variant of VDP, which we call VDP-Unique, where each vertex can participate in at most one terminal pair. In order to obtain a linear vertex kernel for VDP, we give a parameter-preserving reduction to VDP-Unique. Our reduction relies on a non-trivial matching-based argument. In this way, we improve upon the \(4k^{2}\) vertex kernel given by Heggernes et al. [26] as well as generalize the result given by Yang et al. [47]. Specifically, we have the following theorem. VDP on split graphs admits a kernel with at most \(4k\) vertices. Next, we give an \(\mathcal{O}(k^{2})\) vertex kernel for VDP on well-partitioned chordal graphs (see Definition 20). Ahn et al. [2] showed that VDP admits an \(\mathcal{O}(k^{3})\) vertex kernel on this class. We improve their bound by giving a marking procedure that marks a set of \(\mathcal{O}(k^{2})\) vertices in \(G\), which "covers" some solution (if it exists). As a result, we arrive at the following theorem. VDP on well-partitioned chordal graphs admits a kernel with \(\mathcal{O}(k^{2})\) vertices. Unlike EDP, the VDP problem turns out to be easier on the remaining graph classes. In block graphs, for every terminal pair with terminals in different blocks, there is a unique induced path connecting these terminals (all internal vertices of this path are cut vertices). After adding these paths to the solution, we end up with a union of clique instances, where VDP is solvable in polynomial time. This leads to the following observation about block graphs and their subclass, clique paths. VDP on block graphs (and, in particular, on clique paths) is solvable in polynomial time. Finally, we identify a less restricted graph class on which VDP is polynomial-time solvable, namely the class of threshold graphs. This yields a sharp separation between split graphs and their subclass--threshold graphs--in terms of VDP. VDP on threshold graphs is solvable in polynomial time.

### Brief Survey of Related Works

Both VDP and EDP are known to be NP-complete for planar graphs [35], line graphs [26], and split graphs [26]. Moreover, VDP is known to be NP-complete on interval graphs [38] and grids [35] as well. Both VDP and EDP were studied from the viewpoint of structural parameterization.
While VDP is FPT parameterized by treewidth [41], EDP remains NP-complete even on graphs with treewidth at most \(2\) [39]. Gurski and Wanke [24] showed that VDP is solvable in linear time on co-graphs but becomes NP-complete for graphs with clique-width at most \(6\). As noted by Heggernes et al. [26], there is a reduction from VDP on general graphs to EDP on line graphs, and from EDP on general graphs to VDP on line graphs. Using this reduction and the fact that a graph with treewidth \(\ell\) has clique-width at most \(2\ell+2\) [24], EDP can also be shown to be NP-complete on graphs with clique-width at most \(6\) [26]. The Graph Minors theory of Robertson and Seymour provides some of the most important algorithmic results of modern graph theory. Unfortunately, these algorithms, along with the \(\mathcal{O}(n^{3})\) and \(\mathcal{O}(n^{2})\) algorithms for VDP and EDP (when \(k\) is fixed), respectively [42, 32], hide such big constants that they have earned a name for themselves: "_galactic algorithms_". Since then, finding efficient FPT algorithms for VDP and EDP has been a tantalizing question for researchers. Several improvements have been made for the classes of planar graphs [1, 36, 40, 43], chordal graphs [29], and bounded-genus graphs [40, 13, 33]. Concerning kernelization (in addition to the works surveyed earlier in the introduction), we note that Ganian and Ordyniak [20] proved that EDP admits a linear kernel parameterized by the feedback edge set number. Recently, Golovach et al. [22] proved that Set-Restricted Disjoint Paths, a variant of VDP where each terminal pair has to find its path from a predefined set of vertices, does not admit a polynomial compression on interval graphs unless NP \(\subseteq\) coNP/poly. The optimization variants of VDP and EDP--MaxVDP and MaxEDP--are well-studied in the realm of approximation algorithms [5, 14, 17, 21, 34]. Chekuri et al. [5] gave an \(\mathcal{O}(\sqrt{n})\)-approximation algorithm for MaxEDP on general graphs, matching the \(\Omega(\sqrt{n})\) lower bound provided by Garg et al. [21]. Recently, MaxVDP has gained much attention on planar graphs [6, 7, 9, 10, 8]. Highlights of this line of work (MaxVDP on planar graphs) include an approximation algorithm with approximation ratio \(\mathcal{O}(n^{\frac{9}{19}}\log^{\mathcal{O}(1)}n)\) [7], and hardness of approximating within a factor of \(n^{\Omega(1/(\log\log n)^{2})}\) [9].

### Organization of the paper

We begin with formal preliminaries, where we gather information about the studied graph classes and the basic algorithmic tools. In Section 3, we prove the kernelization theorems for EDP, which are followed by the NP-hardness proof for EDP on cliques in Section 4. Next, we cover the kernelization results for VDP in Section 5 and present a polynomial-time algorithm for threshold graphs in Section 6. We conclude in Section 7.

## 2 Preliminaries

For a positive integer \(\ell\), let \([\ell]\) denote the set \(\{1,\ldots,\ell\}\).

### Graph Notations

All graphs considered in this paper are simple, undirected, and connected unless stated otherwise. Standard graph-theoretic terms not explicitly defined here can be found in [12]. For a graph \(G\), let \(V(G)\) denote its vertex set, and \(E(G)\) denote its edge set. For a graph \(G\), the subgraph of \(G\) induced by \(S\subseteq V(G)\) is denoted by \(G[S]\), where \(G[S]=(S,E_{S})\) and \(E_{S}=\{xy\in E(G)\mid x,y\in S\}\).
For two sets \(X,Y\subseteq V(G)\), we denote by \(G[X,Y]\) the subgraph of \(G\) with vertex set \(X\cup Y\) and edge set \(\{xy\in E(G):x\in X,y\in Y\}\). The _open neighborhood_ of a vertex \(v\) in \(G\) is \(N_{G}(v)=\{u\in V(G):uv\in E(G)\}\). The _degree_ of a vertex \(v\) is \(|N_{G}(v)|\), and it is denoted by \(d_{G}(v)\). When there is no ambiguity, we do not use the subscript \(G\) in \(N_{G}(v)\) and \(d_{G}(v)\). A vertex \(v\) with \(d(v)=1\) is a _pendant vertex_. The _distance_ between two vertices in a graph \(G\) is the number of edges in a shortest path between them. We use the notation \(d(u,v)\) to represent the distance between two vertices \(u\) and \(v\) in a graph \(G\) (when \(G\) is clear from the context). For a graph \(G\) and a set \(X\subseteq V(G)\), we use \(G-X\) to denote \(G[V(G)\setminus X]\), that is, the graph obtained from \(G\) by deleting \(X\). In a graph \(G\), two vertices \(u\) and \(v\) are _twins_ if \(N_{G}[u]=N_{G}[v]\). An _independent set_ of a graph \(G\) is a subset of \(V(G)\) such that no two vertices in the subset have an edge between them in \(G\). A _clique_ is a subset of \(V(G)\) such that every two distinct vertices in the subset are adjacent in \(G\). Given a graph \(G\), a _matching_ \(M\) is a subset of edges of \(G\) that do not share an endpoint. The edges in \(M\) are called _matched edges_, and the remaining edges are called _unmatched edges_. Given a matching \(M\), a vertex \(v\in V(G)\) is _saturated_ by \(M\) if \(v\) is incident to an edge of \(M\), that is, \(v\) is an end vertex of some edge of \(M\). Given a graph \(G\), Max Matching asks to find a matching of maximum cardinality in \(G\). [[28]] For a bipartite graph \(G\), Max Matching can be solved in \(\mathcal{O}(\sqrt{|V(G)|}\cdot|E(G)|)\) time. A path \(P=(v_{1},\ldots,v_{n})\) is an _\(M\)-alternating path_ if the edges in \(P\) are alternately matched and unmatched with respect to \(M\). If both the end vertices of an alternating path are unsaturated, then it is an _\(M\)-augmenting path_. [[3]] A matching \(M\) is maximum if and only if there is no \(M\)-augmenting path in \(G\). A path \(P=(v_{1},v_{2},\ldots,v_{n})\) on \(n\) vertices is a _\((v_{1},v_{n})\)-path_, and \(\{v_{2},\ldots,v_{n-1}\}\) are the _internal vertices_ of \(P\). Moreover, for a path \(P=(v_{1},v_{2},\ldots,v_{n})\), we say that \(P\) _visits_ the vertices \(\{v_{1},v_{2},\ldots,v_{n}\}\). Throughout this paper, let \(P_{uv}\) denote the path containing only the edge \(uv\). Let \(P_{1}\) be an \((s_{1},t_{1})\)-path and \(P_{2}\) be an \((s_{2},t_{2})\)-path. Then, \(P_{1}\) and \(P_{2}\) are _vertex-disjoint_ if \(V(P_{1})\cap V(P_{2})=\emptyset\). Moreover, \(P_{1}\) and \(P_{2}\) are _internally vertex-disjoint_ if \((V(P_{1})\setminus\{s_{1},t_{1}\})\cap V(P_{2})=\emptyset\) and \((V(P_{2})\setminus\{s_{2},t_{2}\})\cap V(P_{1})=\emptyset\), that is, no internal vertex of one path is used as a vertex on the other path, and vice versa. Two paths are said to be _edge-disjoint_ if they do not have any edge in common. Note that two internally vertex-disjoint paths are edge-disjoint, but the converse may not be true. A path \(P\) is _induced_ if \(G[V(P)]\) is the same as \(P\). For a path \(P=(v_{1},\ldots,v_{n})\) on \(n\) vertices and vertices \(v_{i},v_{j}\in V(P)\), let the \((v_{i},v_{j})\)-_subpath_ of \(P\) denote the subpath of \(P\) with endpoints \(v_{i}\) and \(v_{j}\).
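To illustrate the two statements above (the \(\mathcal{O}(\sqrt{|V(G)|}\cdot|E(G)|)\) bound of [28] and the augmenting-path characterization of [3]), here is a minimal sketch of bipartite Max Matching via repeated augmenting-path searches. This is the simple \(\mathcal{O}(|V(G)|\cdot|E(G)|)\) variant, not the faster Hopcroft-Karp algorithm achieving the stated bound; all names are ours.

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Bipartite Max Matching by repeated augmenting-path searches.
    adj[u] lists the right-side neighbours of left vertex u.
    Returns (size, match_right), where match_right[v] is the left
    partner of right vertex v, or -1 if v is unsaturated."""
    match_right = [-1] * n_right

    def try_augment(u, seen):
        # DFS for an M-augmenting path starting at u; by the
        # characterization of [3], none exists iff M is maximum.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u   # flip matched/unmatched edges
                return True
        return False

    size = sum(1 for u in range(n_left) if try_augment(u, set()))
    return size, match_right

# Example: left vertex 0 adjacent to right {0, 1}, left vertex 1 to {0}.
size, _ = max_bipartite_matching({0: [0, 1], 1: [0]}, 2, 2)
assert size == 2
```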
### Problem Statements

Given a graph \(G\) and a set (or, more generally, an ordered multiset) \(\mathcal{X}\) of pairs of distinct vertices in \(G\), we refer to the pairs in \(\mathcal{X}\) as _terminal pairs_. A vertex in \(G\) is a _terminal vertex_ if it appears in at least one terminal pair in \(\mathcal{X}\) (when \(\mathcal{X}\) is clear from context); else, it is a _non-terminal vertex_. For example, if \(G\) is a graph with \(V(G)=\{v_{1},v_{2},\ldots,v_{6}\}\), and \(\mathcal{X}=\{(v_{1},v_{3}),(v_{2},v_{3}),(v_{3},v_{6}),(v_{1},v_{6})\}\) is a set of terminal pairs, then \(\{v_{1},v_{2},v_{3},v_{6}\}\) are terminal vertices in \(G\) and \(\{v_{4},v_{5}\}\) are non-terminal vertices in \(G\). The Vertex-Disjoint Paths problem takes as input a graph \(G\) and a set of \(k\) terminal pairs in \(G\), and the task is to decide whether there exists a collection of \(k\) pairwise internally vertex-disjoint paths in \(G\) such that the vertices in each terminal pair are connected to each other by one of the paths. More formally, Vertex-Disjoint Paths (VDP): **Input:** A graph \(G\) and an ordered multiset \(\mathcal{X}=\{(s_{1},t_{1}),\ldots,(s_{k},t_{k})\}\) of \(k\) pairs of terminals. **Question:** Does \(G\) contain \(k\) distinct and pairwise internally vertex-disjoint paths \(P_{1},\ldots,P_{k}\) such that for all \(i\in[k]\), \(P_{i}\) is an \((s_{i},t_{i})\)-path? The Edge-Disjoint Paths problem takes as input a graph \(G\) and a set of \(k\) terminal pairs in \(G\), and the task is to decide whether there exists a collection of \(k\) pairwise edge-disjoint paths in \(G\) such that the vertices in each terminal pair are connected to each other by one of the paths. More formally, Edge-Disjoint Paths (EDP): **Input:** A graph \(G\) and an ordered multiset \(\mathcal{X}=\{(s_{1},t_{1}),\ldots,(s_{k},t_{k})\}\) of \(k\) pairs of terminals. **Question:** Does \(G\) contain \(k\) pairwise edge-disjoint paths \(P_{1},\ldots,P_{k}\) such that for all \(i\in[k]\), \(P_{i}\) is an \((s_{i},t_{i})\)-path? Note that in both problems (VDP and EDP), we allow different terminal pairs to intersect, that is, it may happen that for \(i\neq j\), \(\{s_{i},t_{i}\}\cap\{s_{j},t_{j}\}\neq\emptyset\). If there are two identical pairs \(\{s_{i},t_{i}\}=\{s_{j},t_{j}\}=\{x,y\}\) in \(\mathcal{X}\) and the edge \(xy\) is present in \(G\), then only one of the paths \(P_{i},P_{j}\) can use the edge \(xy\) if we require them to be edge-disjoint. However, setting \(P_{i}=P_{j}=(x,y)\) does not violate the condition of being internally vertex-disjoint. It is natural, though, and also consistent with the existing literature, to impose the additional condition that all paths in a solution have to be pairwise distinct. Throughout this paper, we assume that the degree of every terminal vertex is at least the number of terminal pairs in which it appears; otherwise, it is trivially a No-instance. Following the notation introduced in [2] and [26], we have the following definition. An edge \(xy\in E(G)\) is _heavy_ if for some \(w\geq 2\), there exist pairwise distinct indices \(i_{1},\ldots,i_{w}\) such that for each \(j\in[w]\), \(\{x,y\}=\{s_{i_{j}},t_{i_{j}}\}\). We call a terminal pair \((s_{i},t_{i})\) heavy if \(s_{i}t_{i}\) is a heavy edge; else, we call it _light_. Note that calling a terminal pair heavy or light only makes sense when the terminals in the pair have an edge between them. Next, consider the following definition.
[Minimum Solution] Let \((G,\mathcal{X},k)\) be a Yes-instance of \(\mathrm{VDP}\) or \(\mathrm{EDP}\). A solution \(\mathcal{P}=\{P_{1},\ldots,P_{k}\}\) for the instance \((G,\mathcal{X},k)\) is _minimum_ if there is no solution \(\mathcal{Q}=\{Q_{1},\ldots,Q_{k}\}\) for \((G,\mathcal{X},k)\) such that \(\sum_{i=1}^{k}|E(Q_{i})|<\sum_{i=1}^{k}|E(P_{i})|\). Since we deal with subclasses of chordal graphs (see Section 2.4), the following two propositions are crucial for us. [[26]] Let \((G,\mathcal{X},k)\) be a Yes-instance of \(\mathsf{VDP}\) such that \(G\) is a chordal graph, and let \(\mathcal{P}=\{P_{1},\ldots,P_{k}\}\) be a minimum solution of \((G,\mathcal{X},k)\). Then, every path \(P_{i}\in\mathcal{P}\) satisfies exactly one of the following two statements: (i) \(P_{i}\) is an induced path; (ii) \(P_{i}\) is a path of length \(2\), and there exists a path \(P_{j}\in\mathcal{P}\) of length \(1\) whose endpoints are the same as the endpoints of \(P_{i}\). The next observation follows from Proposition 3. Let \((G,\mathcal{X},k)\) be a Yes-instance of \(\mathsf{VDP}\). If there is a terminal pair \((s,t)\in\mathcal{X}\) such that \(st\in E(G)\), then \(P_{st}\) belongs to every minimum solution of \((G,\mathcal{X},k)\).

### Parameterized Complexity

Standard notions in Parameterized Complexity not explicitly defined here can be found in [11]. Let \(\Pi\) be an \(\mathsf{NP}\)-hard problem. In the framework of Parameterized Complexity, each instance of \(\Pi\) is associated with a _parameter_ \(k\). We say that \(\Pi\) is _fixed-parameter tractable_ (\(\mathsf{FPT}\)) if any instance \((I,k)\) of \(\Pi\) is solvable in time \(f(k)\cdot|I|^{\mathcal{O}(1)}\), where \(f\) is some computable function of \(k\). [Equivalent Instances] Let \(\Pi\) and \(\Pi^{\prime}\) be two parameterized problems. Two instances \((I,k)\in\Pi\) and \((I^{\prime},k^{\prime})\in\Pi^{\prime}\) are equivalent if: \((I,k)\) is a Yes-instance of \(\Pi\) if and only if \((I^{\prime},k^{\prime})\) is a Yes-instance of \(\Pi^{\prime}\). A parameterized (decision) problem \(\Pi\) admits a _kernel_ of size \(f(k)\) for some computable function \(f\) that depends only on \(k\) if the following is true: There exists an algorithm (called a _kernelization algorithm_) that runs in \((|I|+k)^{\mathcal{O}(1)}\) time and translates any input instance \((I,k)\) of \(\Pi\) into an equivalent instance \((I^{\prime},k^{\prime})\) of \(\Pi\) such that the size of \((I^{\prime},k^{\prime})\) is bounded by \(f(k)\). If the function \(f\) is polynomial (resp., linear) in \(k\), then the problem is said to admit a _polynomial kernel_ (resp., _linear kernel_). It is well known that a decidable parameterized problem is \(\mathsf{FPT}\) if and only if it admits a kernel [11]. To design kernelization algorithms, we rely on the notion of a _reduction rule_, defined below. [Reduction Rule] A _reduction rule_ is a polynomial-time procedure that consists of a condition and an operation, and its input is an instance \((I,k)\) of a parameterized problem \(\Pi\). If the condition is true, then the rule outputs a new instance \((I^{\prime},k^{\prime})\) of \(\Pi\) such that \(k^{\prime}\leq k\). Usually, also \(|I^{\prime}|<|I|\). A reduction rule is _safe_ if, whenever the condition is true, \((I,k)\) and \((I^{\prime},k^{\prime})\) are equivalent. Throughout this paper, the reduction rules will be numbered, and the reduction rules will be applied exhaustively in the increasing order of their indices.
So, if reduction rules \(i\) and \(j\), where \(i<j\), are defined for a problem, then \(i\) will be applied exhaustively before \(j\). Notice that after the application of rule \(j\), the condition of rule \(i\) might become true. In this situation, we will apply rule \(i\) again (exhaustively). In other words, when we apply rule \(j\), we always assume that the condition of rule \(i\) is false. ### Graph Classes A graph \(G\) is a _chordal graph_ if every cycle in \(G\) of length at least four has a _chord_, that is, an edge joining two non-consecutive vertices of the cycle. In what follows, we define several subclasses of the class of chordal graphs, namely, _complete graphs_, _block graphs_, _split graphs_, _threshold graphs_, and _well-partitioned chordal graphs_. A graph whose vertex set is a clique is a _complete graph_. A vertex is a _cut vertex_ in a graph \(G\) if removing it increases the total number of connected components in \(G\). A _block_ of a graph \(G\) is a maximal connected subgraph of \(G\) that does not contain any cut vertex. A graph \(G\) is a _block graph_ if the vertex set of every block in \(G\) is a clique. A block of a block graph \(G\) is an _end block_ if it contains exactly one cut vertex. Note that a block graph that is not a complete graph has at least two end blocks. A graph \(G\) is a _split graph_ if there is a partition \((C,I)\) of \(V(G)\) such that \(C\) is a clique and \(I\) is an independent set. A split graph \(G\) is a _threshold graph_ if there exists a linear ordering of the vertices in \(I\), say, \((v_{1},v_{2},\ldots,v_{|I|})\), such that \(N(v_{1})\subseteq N(v_{2})\subseteq\cdots\subseteq N(v_{|I|})\). An undirected graph in which any two vertices are connected by exactly one path is a _tree_. A tree with at most two vertices or exactly one non-pendant vertex is a _star_. The vertices of degree one in a tree are called _leaves_. Note that a split graph admits a partition of its vertex set into cliques that can be arranged in a star structure, where the leaves are cliques of size one. Motivated by this definition of split graphs, Ahn et al. [2] introduced _well-partitioned chordal graphs_, which are defined by relaxing the definition of split graphs in the following two ways: (i) by allowing the parts of the partition to be arranged in a tree structure instead of a star structure, and (ii) by allowing the cliques in each part to have arbitrary size instead of one. A more formal definition of a well-partitioned chordal graph is given below. [Well-Partitioned Chordal Graph] A connected graph \(G\) is a _well-partitioned chordal graph_ if there exists a partition \(\mathcal{B}\) of \(V(G)\) and a tree \(\mathcal{T}\) having \(\mathcal{B}\) as its vertex set such that the following hold. 1. Each part \(X\in\mathcal{B}\) is a clique in \(G\). 2. For each edge \(XY\in E(\mathcal{T})\), there are subsets \(X^{\prime}\subseteq X\) and \(Y^{\prime}\subseteq Y\) such that \(E(G[X,Y])=X^{\prime}\times Y^{\prime}\). 3. For each pair of distinct \(X,Y\in\mathcal{B}\) with \(XY\notin E(\mathcal{T})\), \(E(G[X,Y])=\emptyset\). The tree \(\mathcal{T}\) in Definition 20 is called a _partition tree_ of \(G\), and the elements of \(\mathcal{B}\) are called its _bags_. Notice that a well-partitioned chordal graph can have multiple partition trees. [[2]] Given a well-partitioned chordal graph \(G\), a partition tree of \(G\) can be found in polynomial time. 
[Boundary of a Well-Partitioned Chordal Graph] Let \(\mathcal{T}\) be a partition tree of a well-partitioned chordal graph \(G\) and let \(XY\in E(\mathcal{T})\). The _boundary_ of \(X\) with respect to \(Y\), denoted by \(\mathsf{bd}(X,Y)\), is the set of vertices of \(X\) that have a neighbor in \(Y\), i.e., \(\mathsf{bd}(X,Y)=\{x\in X:N_{G}(x)\cap Y\neq\emptyset\}\). Note that all the graph classes considered in this paper are closed under vertex deletion. Therefore, throughout this paper, if \(G\) belongs to class \(\mathcal{Z}\) and \(G^{\prime}\) is obtained from \(G\) after deleting some vertices, then we assume that \(G^{\prime}\) also belongs to \(\mathcal{Z}\) without mentioning it explicitly. The inclusion relationship among various subclasses of chordal graphs discussed in this paper is shown in Figure 1.

## 3 Kernelization Results on EDP

We begin with the analysis of the simplest scenario where the input graph is a clique. In this setting, EDP is still NP-hard (see Section 4), but we show below that whenever the size of the clique is larger than the parameter \(k\), we always obtain a Yes-instance. This improves the bound in [26, Lemma 7] by a factor of \(2\), which will play a role in optimizing the constants in our kernels (particularly, the linear ones). Let \((G,\mathcal{X},k)\) be an instance of EDP such that \(G\) is a clique. If \(|V(G)|>k\), then \((G,\mathcal{X},k)\) is a Yes-instance. Proof.: We give a proof by induction on \(k\). For \(k\in\{0,1\}\) the lemma clearly holds. Consider \(k>1\) and define \(H\) as a graph on the vertex set \(V(G)\) with edges given by pairs of vertices \(x,y\) for which \(\{x,y\}\) appears at least twice in \(\mathcal{X}\). Then \(H\) has at most \(\frac{k}{2}\) edges and thus at most \(k\) non-isolated vertices. Since \(|V(H)|=|V(G)|>k\), there exists an isolated vertex \(v\) in \(H\). Let \(\mathcal{X}_{v}\subseteq\mathcal{X}\) denote the set of pairs containing \(v\) (it is not a multiset by the choice of \(v\)) and \(k_{v}=|\mathcal{X}_{v}|\). We distinguish two cases. First, suppose that \(k_{v}>0\). Then \(|\mathcal{X}\setminus\mathcal{X}_{v}|=k-k_{v}\leq k-1<|V(G-v)|\). Hence, the EDP instance \((G-v,\mathcal{X}\setminus\mathcal{X}_{v},k-k_{v})\) satisfies the conditions of the lemma, so by the inductive assumption it is a Yes-instance. Let \(\mathcal{P}\) denote a solution to this instance and let \(\mathcal{P}_{v}\) denote the set of single-edge paths corresponding to the pairs in \(\mathcal{X}_{v}\). Clearly, all edges used in \(\mathcal{P}_{v}\) are incident to \(v\), and so these paths are edge-disjoint with \(\mathcal{P}\). Consequently, \(\mathcal{P}\cup\mathcal{P}_{v}\) is a solution to the instance \((G,\mathcal{X},k)\). Suppose now that \(k_{v}=0\), that is, \(v\) does not appear in \(\mathcal{X}\). Let \(e=\{u,w\}\) be an arbitrary pair from \(\mathcal{X}\). Again, by the inductive assumption, the instance \((G-v,\mathcal{X}\setminus\{e\},k-1)\) admits some solution \(\mathcal{P}\). Then the path \((u,v,w)\) is edge-disjoint with \(\mathcal{P}\), and so \(\mathcal{P}\cup\{(u,v,w)\}\) forms a solution to the instance \((G,\mathcal{X},k)\). In both cases, we were able to construct a solution to \((G,\mathcal{X},k)\), which concludes the inductive argument. The bound above is tight, as one can construct a No-instance \((G,\mathcal{X},k)\) where \(G\) is a clique and \(|V(G)|=k\). Consider \(\mathcal{X}\) comprising just \(k\) copies of some pair \(\{u,v\}\).
Since the degree of \(u\) is \(k-1\), there cannot be \(k\) edge-disjoint paths having \(u\) as their common endpoint.

If \(G\) is a split graph with more than \(k\) vertices in the clique and the degree of each terminal vertex is at least the number of terminals on it, then we can reduce such an instance to the setting of Lemma 24 by replacing each terminal \(v\) in the independent set with an arbitrary neighbor of \(v\). As a consequence, we obtain the following corollary, being a quantitative improvement over [26, Lemma 8].

Figure 1: The inclusion relationship among the subclasses of chordal graphs studied in this paper.

**Corollary 25**. _Let \((G,\mathcal{X},k)\) be an instance of EDP such that \(G\) is a split graph with split partition \((C,I)\). If \(|C|>k\) and the degree of each terminal vertex is at least the number of terminals on it, then \((G,\mathcal{X},k)\) is a Yes-instance._

### A Subcubic Vertex Kernel for Split Graphs

In this section, we show that EDP on split graphs admits a kernel with \(\mathcal{O}(k^{2.75})\) vertices. Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a split graph. Note that given a split graph \(G\), we can compute (in linear time) a partition \((C,I)\) of \(V(G)\) such that \(C\) is a clique and \(I\) is an independent set [25]. We partition the set \(I\) into two sets, say, \(I_{T}\) and \(I_{N}\), where \(I_{T}\) and \(I_{N}\) denote the set of terminal vertices and the set of non-terminal vertices in \(I\), respectively.

To ease the presentation of mathematical calculations, for this section (Section 3.1), we assume that \(k^{\frac{1}{4}}\) is a natural number. If this is not the case, then we can easily get a new equivalent instance that satisfies this condition in the following manner. Let \(d=(\lceil k^{\frac{1}{4}}\rceil)^{4}-k\) and \(v\in C\). Now, we add \(d\) terminal pairs \(\{(s_{i_{1}},t_{i_{1}}),\ldots,(s_{i_{d}},t_{i_{d}})\}\) and attach each of these terminals to \(v\). Observe that this does not affect the size of our kernel (\(\mathcal{O}(k^{2.75})\) vertices) since \((\lceil k^{\frac{1}{4}}\rceil)^{4}=\mathcal{O}(k)\). Moreover, we assume that \(k>8\), as otherwise, we can use the \(\mathsf{FPT}\) algorithm for EDP [42] to solve it in polynomial time.

Before proceeding further, let us first discuss the overall idea leading us to Theorem 2.

**Overview.** Heggernes et al. [26] gave an \(\mathcal{O}(k^{3})\) vertex kernel for EDP on split graphs. In our kernelization algorithm (in this section), we use their algorithm as a preprocessing step. After the preprocessing step, the sizes of \(C\) and \(I_{T}\) get bounded by \(2k\) each, and the size of \(I_{N}\) gets bounded by \(\mathcal{O}(k^{3})\). Therefore, we know that the real challenge in designing an improved kernel for EDP on split graphs lies in giving a better upper bound on \(|I_{N}|\). Our kernelization algorithm makes a non-trivial use of a lemma (Lemma 32), which establishes that the length of each path in any minimum solution (of EDP on a split graph \(G\)) is bounded by \(\mathcal{O}(\sqrt{k})\). This, in turn, implies that a minimum solution of EDP for split graphs contains \(\mathcal{O}(k^{1.5})\) edges. Note that during the preprocessing step (i.e., the kernelization algorithm by Heggernes et al. [26]), for every pair of vertices in \(C\), at most \(4k+1\) vertices are reserved in \(I_{N}\), giving a cubic vertex kernel.
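To spell out the arithmetic behind this cubic bound (our own back-of-the-envelope restatement of the guarantee from [26]): with \(|C|\leq 2k\), there are at most \(\binom{2k}{2}\) pairs of clique vertices, and each pair reserves at most \(4k+1\) vertices in \(I_{N}\), so \(|I_{N}|\leq\binom{2k}{2}(4k+1)=\mathcal{O}(k^{3})\).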
In our algorithm, we characterize those vertices (called _rich_ by us) in \(C\) for which we need to reserve only \(\mathcal{O}(k^{1.5})\) vertices in \(I_{N}\). Informally speaking, a vertex \(v\in C\) is _rich_ if there are \(\Omega(k^{0.75})\) vertices in \(C\) that are "reachable" from \(v\), even if we delete all the edges used by a "small" solution (containing \(\mathcal{O}(k^{1.5})\) edges). Then, we show that if two vertices are rich, then even if they do not have any common neighbors in \(I_{N}\), there exist "many" (\(\Omega(k^{1.5})\)) edge-disjoint paths between them even after removing any \(\mathcal{O}(k^{1.5})\) edges of \(G\). Hence, for every rich vertex, we keep only those vertices in \(I_{N}\) that are necessary to make the vertex rich, that is, we keep \(\mathcal{O}(k^{1.5})\) vertices in \(I_{N}\) for every rich vertex. Thus, all rich vertices in \(C\) contribute a total of \(\mathcal{O}(k^{2.5})\) vertices in \(I_{N}\). The vertices in \(C\) that are not rich are termed _poor_. Finally, we establish that a poor vertex cannot have too many neighbors in \(I_{N}\). More specifically, a poor vertex can have only \(\mathcal{O}(k^{1.75})\) neighbors in \(I_{N}\). So, even if we keep all their neighbors in \(I_{N}\), we store a total of \(\mathcal{O}(k^{2.75})\) vertices in \(I_{N}\) for the poor vertices. This leads us to an \(\mathcal{O}(k^{2.75})\) vertex kernel for EDP on split graphs.

#### A Bound on the Length of the Paths in a Minimum Solution

In this section, we prove that for a minimum solution \(\mathcal{P}\) of an instance \((G,\mathcal{X},k)\) of EDP where \(G\) is a split graph, each path \(P\in\mathcal{P}\) has length at most \(4\sqrt{k}+3\). We prove this bound by establishing that if there is a path of length at least \(4\sqrt{k}+4\) in \(\mathcal{P}\), then \(\mathcal{P}\) contains at least \(k+1\) paths, a contradiction. To this end, we need the concept of _intersecting edges_ (see Definition 27) and _non-compatible edges_ (see Definition 28). Now, consider the following remark.

**Remark 26**. For ease of exposition, throughout this section (Section 3.1.1), we assume (without mentioning it explicitly every time) that \((G,\mathcal{X},k)\) is a Yes-instance of EDP, where \(G\) is a split graph. Moreover, \(\mathcal{P}\) denotes an (arbitrary but fixed) minimum solution of \((G,\mathcal{X},k)\), and \(P\in\mathcal{P}\) is a path such that \(P\) contains \(\ell\) vertices, say, \(v_{1},\ldots,v_{\ell}\), from the clique \(C\). Moreover, without loss of generality, let \(v_{1},\ldots,v_{\ell}\) be the order in which these vertices appear in the path \(P\) from some terminal to the other. See Figure 2 for an illustration.

Note that if a path, say, \(P^{\prime}\), in a split graph has length \(p\) (i.e., \(|E(P^{\prime})|=p\)), then it contains at least \(\lceil\frac{p}{2}\rceil\) vertices from \(C\). Therefore, to bound the length of \(P\) by \(\mathcal{O}(\sqrt{k})\), it suffices to bound the number of vertices of \(C\) in \(P\) by \(\mathcal{O}(\sqrt{k})\). Assuming the ordering \(v_{1},\ldots,v_{\ell}\) of the vertices in \(V(P)\cap C\) along the path \(P\), we have the following definitions.

**Definition 27** (Intersecting Edges). _Consider two edges \(e_{i}=v_{i}v_{i^{\prime}}\) and \(e_{j}=v_{j}v_{j^{\prime}}\) such that \(i,i^{\prime},j,j^{\prime}\in[\ell]\), and without loss of generality, assume that \(i<i^{\prime}\), \(j<j^{\prime}\), and \(i\leq j\). Then, \(e_{i}\) and \(e_{j}\) are non-intersecting if \(j\geq i^{\prime}\); otherwise, they are intersecting._
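Since Definition 27 is easy to misread, here is a small sanity-check predicate (our illustration, not from the paper); each edge is given by the positions of its endpoints along \(v_{1},\ldots,v_{\ell}\).

```python
def intersecting(e_i, e_j):
    """Decide whether two edges on the path's clique vertices intersect
    (Definition 27). Each edge is a pair of positions along v_1, ..., v_l."""
    # Normalize so each edge is (smaller, larger) and e_i starts no later.
    (i, ip), (j, jp) = sorted(map(sorted, (e_i, e_j)))
    # Non-intersecting iff the second edge starts at or after the first ends.
    return j < ip

# Example: (1, 4) and (3, 6) intersect; (1, 3) and (3, 6) do not
# (edges that merely share an endpoint are non-intersecting).
assert intersecting((1, 4), (3, 6)) and not intersecting((1, 3), (3, 6))
```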
See Figure 3 for an illustration.

Figure 2: Here, \(P\) is a path with endpoints \(u_{1}\) and \(v_{10}\). The vertices \(v_{1},\ldots,v_{10}\) are clique vertices of the path \(P\) (here, \(\ell=10\)) and \(u_{1},\ldots,u_{5}\) are independent set vertices of \(P\). Observe that \(v_{i}v_{i+1}\) is not necessarily an edge in \(P\).

Figure 3: Here, edge \(e_{1}\) intersects with \(e_{2}\), edge \(e_{2}\) intersects with edges \(e_{1},e_{3},e_{4}\), edge \(e_{3}\) intersects with \(e_{2}\), edge \(e_{4}\) intersects with \(e_{2}\) and \(e_{5}\), and edge \(e_{5}\) intersects with \(e_{4}\). Note that although edges \(e_{2}\) and \(e_{5}\) share an endpoint, they are non-intersecting.

**Definition 28** (Non-compatible Edges). _Two edges \(e_{1},e_{2}\in E(G)\) are non-compatible if there does not exist a path \(P^{\prime}\in\mathcal{P}\setminus\{P\}\) (given \(P\in\mathcal{P}\)) such that \(\{e_{1},e_{2}\}\subseteq E(P^{\prime})\). Moreover, a set of edges \(\mathcal{S}=\{e_{1},\ldots,e_{p}\}\subseteq E(G)\) is non-compatible if every distinct \(e_{i},e_{j}\in\mathcal{S}\) are non-compatible._

Next, we show that, since \(P\) is a path in a minimum solution \(\mathcal{P}\), most of the edges with both endpoints in \(\{v_{1},\ldots,v_{\ell}\}\) are used by paths in \(\mathcal{P}\) (otherwise, we get a contradiction to the fact that \(\mathcal{P}\) is a minimum solution). In particular, we have the following lemma.

**Lemma 29**. _Each edge of the form \(v_{i}v_{j}\), where \(i,j\in[\ell]\) and \(j\geq i+2\), is used by some path in \(\mathcal{P}\setminus\{P\}\)._

Proof.: Targeting a contradiction, consider \(v_{i}v_{j}\), where \(i,j\in[\ell]\) and \(j\geq i+2\), that is not used by any path in \(\mathcal{P}\). Then, observe that the path we get, say, \(P^{\prime}\), after replacing the \((v_{i},v_{j})\)-subpath of \(P\), which contains at least two edges, with the edge \(v_{i}v_{j}\), has fewer edges than \(P\). Moreover, \((\mathcal{P}\setminus\{P\})\cup\{P^{\prime}\}\) is also a solution of \((G,\mathcal{X},k)\). This contradicts the fact that \(\mathcal{P}\) is a minimum solution.

Now, we show that if two edges are intersecting, then they are non-compatible as well. In particular, we have the following lemma.

**Lemma 30**. _Let \(e_{i}=v_{i}v_{i^{\prime}}\) and \(e_{j}=v_{j}v_{j^{\prime}}\) be two (distinct) intersecting edges. Then, \(e_{i}\) and \(e_{j}\) are non-compatible._

Proof.: Without loss of generality, assume that \(i<i^{\prime}\), \(j<j^{\prime}\), and \(i\leq j\). Targeting a contradiction, let \(P^{\prime}\in\mathcal{P}\setminus\{P\}\) be the path such that \(e_{i},e_{j}\in E(P^{\prime})\). Moreover, let \(s,t\in\{v_{i},v_{i^{\prime}},v_{j},v_{j^{\prime}}\}\) be the two vertices such that \(v_{i},v_{i^{\prime}},v_{j},v_{j^{\prime}}\) appear in the \((s,t)\)-subpath of \(P^{\prime}\). First, we prove the following claim.

\(\rhd\) Claim 31. _\(|\{s,t\}\cap\{v_{i},v_{i^{\prime}}\}|\leq 1\). Similarly, \(|\{s,t\}\cap\{v_{j},v_{j^{\prime}}\}|\leq 1\)._

Proof.: We will show that if \(s\in\{v_{i},v_{i^{\prime}}\}\), then \(t\notin\{v_{i},v_{i^{\prime}}\}\). (The other cases are symmetric.) Without loss of generality, assume that \(s=v_{i}\) and \(t=v_{i^{\prime}}\). Since the \((s,t)\)-subpath of \(P^{\prime}\) contains at least two edges (\(e_{i}\) and \(e_{j}\)), we can replace the \((s,t)\)-subpath with the edge \(e_{i}\) to get a path \(P^{\prime\prime}\) such that \(E(P^{\prime\prime})\subset E(P^{\prime})\) and the endpoints of \(P^{\prime}\) and \(P^{\prime\prime}\) are the same.
Thus, we can replace \(P^{\prime}\) with \(P^{\prime\prime}\) in \(\mathcal{P}\) to get a solution with fewer edges, contradicting that \(\mathcal{P}\) is a minimum solution.

Next, we will argue that we can reconfigure paths \(P\) to \(\widehat{P}\) and \(P^{\prime}\) to \(\widehat{P^{\prime}}\) such that \(|E(\widehat{P})|+|E(\widehat{P^{\prime}})|<|E(P)|+|E(P^{\prime})|\) and \(E(\widehat{P})\cup E(\widehat{P^{\prime}})\subseteq E(P)\cup E(P^{\prime})\). This will complete the proof, since then \(\widehat{\mathcal{P}}=(\mathcal{P}\setminus\{P,P^{\prime}\})\cup\{\widehat{P},\widehat{P^{\prime}}\}\) is a solution of \((G,\mathcal{X},k)\) having fewer edges than \(\mathcal{P}\), contradicting the fact that \(\mathcal{P}\) is a minimum solution. To this end, consider the following cases depending on the positions of \(i,i^{\prime},j,j^{\prime}\).

**Case 1: \(i=j\).** Since \(e_{i}\) and \(e_{j}\) are distinct edges, note that \(i^{\prime}\neq j^{\prime}\). Hence, either \(i^{\prime}<j^{\prime}\) or \(i^{\prime}>j^{\prime}\). These two cases are symmetric, and therefore, we consider only the case when \(i^{\prime}<j^{\prime}\). See Figure 4 for an illustration. Let \(v=v_{i}=v_{j}\). Now, due to Claim 31, we have that \(v\notin\{s,t\}\). Therefore, either \(s=v_{j^{\prime}}\) and \(t=v_{i^{\prime}}\), or \(s=v_{i^{\prime}}\) and \(t=v_{j^{\prime}}\). Since these cases are symmetric, we assume \(s=v_{j^{\prime}}\) and \(t=v_{i^{\prime}}\). In this case, we get \(\widehat{P}\) by replacing the \((v_{i},v_{j^{\prime}})\)-subpath of \(P\) with the edge \(v_{j}v_{j^{\prime}}\), and we get \(\widehat{P^{\prime}}\) by replacing the \((v_{i^{\prime}},v_{j^{\prime}})\)-subpath of \(P^{\prime}\) by the \((v_{i^{\prime}},v_{j^{\prime}})\)-subpath of \(P\). Observe that \(E(\widehat{P})\cup E(\widehat{P^{\prime}})\subseteq E(P)\cup E(P^{\prime})\), and also \(|E(\widehat{P})|+|E(\widehat{P^{\prime}})|<|E(P)|+|E(P^{\prime})|\) since the \((v_{i},v_{i^{\prime}})\)-subpath of \(P\) is removed.

Figure 4: Here, \(P\) and \(P^{\prime}\) are represented in dotted and solid black, respectively, in the left figure. Similarly, \(\widehat{P}\) and \(\widehat{P^{\prime}}\) are represented in dotted and solid red, respectively, in the right figure.

**Case 2: \(i^{\prime}=j^{\prime}\).** This case is symmetric to Case 1.

**Case 3: \(i<j<j^{\prime}<i^{\prime}\).** Due to Claim 31, either \(s\in\{v_{i},v_{i^{\prime}}\}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\), or \(s\in\{v_{j},v_{j^{\prime}}\}\) and \(t\in\{v_{i},v_{i^{\prime}}\}\). Since these cases are symmetric, we assume that \(s\in\{v_{i},v_{i^{\prime}}\}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\). Moreover, the case when \(s=v_{i}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\) and the case when \(s=v_{i^{\prime}}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\) are symmetric (since we can just reverse the ordering of the vertices \(v_{1},\ldots,v_{\ell}\) to get the other case). Therefore, without loss of generality, we assume that \(s=v_{i}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\). See Figure 5 for an illustration.

Figure 5: Here, \(P\) and \(P^{\prime}\) are represented in dotted and solid black, respectively, in the left figure. Similarly, \(\widehat{P}\) and \(\widehat{P^{\prime}}\) are represented in dotted and solid red, respectively, in the right figure.
We get \(\widehat{P}\) by replacing the \((v_{i},v_{i^{\prime}})\)-subpath of \(P\) with the edge \(v_{i}v_{i^{\prime}}\), and we get \(\widehat{P^{\prime}}\) by replacing the \((v_{i},t)\)-subpath of \(P^{\prime}\) with the \((v_{i},t)\)-subpath of \(P\). Observe that \(E(\widehat{P})\cup E(\widehat{P^{\prime}})\subseteq E(P)\cup E(P^{\prime})\), and also \(|E(\widehat{P})|+|E(\widehat{P^{\prime}})|<|E(P)|+|E(P^{\prime})|\) since the \((t,v_{i^{\prime}})\)-subpath of \(P\) is removed.

**Case 4: \(i<j<i^{\prime}<j^{\prime}\).** Due to Claim 31, either \(s\in\{v_{i},v_{i^{\prime}}\}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\), or \(s\in\{v_{j},v_{j^{\prime}}\}\) and \(t\in\{v_{i},v_{i^{\prime}}\}\). Since these cases are symmetric, we assume that \(s\in\{v_{i},v_{i^{\prime}}\}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\). Here, we consider the following two cases:

**Subcase 4.1: \(s=v_{i}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\).** See Figure 6 for an illustration. In this case, we obtain \(\widehat{P}\) by replacing the \((v_{i},v_{i^{\prime}})\)-subpath of \(P\) with the edge \(v_{i}v_{i^{\prime}}\), and we obtain \(\widehat{P^{\prime}}\) by replacing the \((v_{i},v_{j})\)-subpath of \(P^{\prime}\) by the \((v_{i},v_{j})\)-subpath of \(P\). Observe that \(E(\widehat{P})\cup E(\widehat{P^{\prime}})\subseteq E(P)\cup E(P^{\prime})\), and also \(|E(\widehat{P})|+|E(\widehat{P^{\prime}})|<|E(P)|+|E(P^{\prime})|\) since the \((v_{j},v_{i^{\prime}})\)-subpath of \(P\) is removed.

**Subcase 4.2: \(s=v_{i^{\prime}}\) and \(t\in\{v_{j},v_{j^{\prime}}\}\).** See Figure 7 for an illustration. In this case, we obtain \(\widehat{P}\) by replacing the \((v_{i},v_{i^{\prime}})\)-subpath of \(P\) with the edge \(v_{i}v_{i^{\prime}}\), and we obtain \(\widehat{P^{\prime}}\) by replacing the \((v_{i^{\prime}},v_{j})\)-subpath of \(P^{\prime}\) by the \((v_{i^{\prime}},v_{j})\)-subpath of \(P\). Observe that \(E(\widehat{P})\cup E(\widehat{P^{\prime}})\subseteq E(P)\cup E(P^{\prime})\), and also \(|E(\widehat{P})|+|E(\widehat{P^{\prime}})|<|E(P)|+|E(P^{\prime})|\) since the \((v_{j},v_{i})\)-subpath of \(P\) is removed.

This completes our proof.

Now, we present the main lemma of this section.

**Lemma 32**. _Let \((G,\mathcal{X},k)\) be a Yes-instance of EDP where \(G\) is a split graph. Moreover, let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). Then, for every path \(P\in\mathcal{P}\), \(|E(P)|<4\sqrt{k}+4\)._

Figure 6: Here, \(P\) and \(P^{\prime}\) are represented in dotted and solid black, respectively, in the left figure. Similarly, \(\widehat{P}\) and \(\widehat{P^{\prime}}\) are represented in dotted and solid red, respectively, in the right figure.

Figure 7: Here, \(P\) and \(P^{\prime}\) are represented in dotted and solid black, respectively, in the left figure. Similarly, \(\widehat{P}\) and \(\widehat{P^{\prime}}\) are represented in dotted and solid red, respectively, in the right figure.

Proof.: Let \(P\) be a path in \(\mathcal{P}\) such that \(P\) contains \(\ell\) vertices from the clique \(C\), and let \(v_{1},\ldots,v_{\ell}\) be an ordering of the vertices of \(C\) in \(P\), along \(P\) from one terminal to the other. First, we show that there is a set of \(\Omega(\ell^{2})\) pairwise intersecting edges with both endpoints in \(\{v_{1},\ldots,v_{\ell}\}\). Let \(\mathcal{S}=\{v_{i}v_{j}:1\leq i\leq\left\lceil\frac{\ell}{2}\right\rceil-1,\,\left\lceil\frac{\ell}{2}\right\rceil+1\leq j\leq\ell\}\).
Observe that \(\mathcal{S}\) is a set of pairwise intersecting edges, and hence, due to Lemma 30, \(\mathcal{S}\) is a set of non-compatible edges. Moreover, it is easy to see that \(|\mathcal{S}|=(\left\lceil\frac{\ell}{2}\right\rceil-1)(\left\lfloor\frac{\ell}{2}\right\rfloor)\). Furthermore, since each edge in \(\mathcal{S}\) is of the form \(v_{i}v_{j}\) such that \(i,j\in[\ell]\) and \(j\geq i+2\), due to Lemma 29, each edge in \(\mathcal{S}\) is used by some path in \(\mathcal{P}\setminus\{P\}\). Therefore, \(|\mathcal{P}|>|\mathcal{S}|\). Now, if \(|E(P)|\geq 4\sqrt{k}+4\), then \(\ell\geq 2\sqrt{k}+2\) (since, as observed above, a path of length \(p\) in a split graph contains at least \(\lceil\frac{p}{2}\rceil\) vertices from \(C\)). In this case, we have that \(|\mathcal{P}|>|\mathcal{S}|=(\left\lceil\frac{2\sqrt{k}+2}{2}\right\rceil-1)(\left\lfloor\frac{2\sqrt{k}+2}{2}\right\rfloor)=(\sqrt{k})(\sqrt{k}+1)>k\), a contradiction (since \(|\mathcal{P}|=k\)).

We have the following corollary as a consequence of Lemma 32 (since \(k\geq 9\)).

**Corollary 33**. _Let \(\mathcal{P}\) be a minimum solution of an instance \((G,\mathcal{X},k)\) of EDP where \(G\) is a split graph. Then, \(\sum_{P\in\mathcal{P}}|E(P)|\leq 5k^{1.5}\)._

#### An \(\mathcal{O}(k^{2.75})\) Vertex Kernel for Split Graphs

In this section, we use Corollary 33, which states that any minimum solution contains at most \(5k^{1.5}\) edges, to design a subcubic (\(\mathcal{O}(k^{2.75})\)) vertex kernel for EDP on split graphs. We start with the following preprocessing step, which we apply only once to our input instance.

**Preprocessing Step:** First, we use the kernelization for EDP on split graphs provided by Heggernes et al. [26] as a preprocessing step. In their kernel, if \(|C|\geq 2k\), then they report a Yes-instance (due to [26, Lemma 8]), and hence, assume that \(|C|<2k\). Due to Corollary 25, if \(|C|>k\), then we have a Yes-instance, and hence we assume that \(|C|\leq k\). Moreover, in their kernel, for any two vertices \(u,w\in C\), \(|N(u)\cap N(w)\cap I_{N}|\leq 4k+1\) (i.e., \(u\) and \(w\) have at most \(4k+1\) common neighbors in \(I_{N}\)). Furthermore, there are no pendant vertices in \(I_{N}\).

Next, we define a Marking Procedure, where we label the vertices in \(C\) as _rich_ or _poor_. Furthermore, we partition the vertices in \(I_{N}\) into two sets, denoted \(U\) (read _unmarked_) and \(M\) (read _marked_), in the following manner.

**Marking Procedure:** Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a split graph. (A small code sketch of this procedure is given right after it.)

1. \(M\Leftarrow\emptyset\) and \(U\Leftarrow I_{N}\). (Initially, all vertices in \(I_{N}\) are unmarked.) Moreover, fix an ordering \(v_{1},\ldots,v_{|C|}\) of the vertices of \(C\).
2. For \(1\leq i\leq|C|\):
    1. \(A_{v_{i}}\Leftarrow\emptyset\), \(M_{v_{i}}\Leftarrow\emptyset\) (read _marked for \(v_{i}\)_), and \(U_{T}\Leftarrow U\) (read _unmarked temporary_).
    2. For \(1\leq j\leq|C|\) such that \(i\neq j\) and \(|A_{v_{i}}|<100k^{0.75}\): if \(|N(v_{i})\cap N(v_{j})\cap U_{T}|\geq k^{0.75}\), then \(A_{v_{i}}\Leftarrow A_{v_{i}}\cup\{v_{j}\}\). Moreover, select some (arbitrary) subset \(M_{v_{i},v_{j}}\subseteq N(v_{i})\cap N(v_{j})\cap U_{T}\) such that \(|M_{v_{i},v_{j}}|=k^{0.75}\). Then, \(M_{v_{i}}\Leftarrow M_{v_{i}}\cup M_{v_{i},v_{j}}\) and \(U_{T}\Leftarrow U_{T}\setminus M_{v_{i},v_{j}}\).
    3. If \(|A_{v_{i}}|=100k^{0.75}\), then label \(v_{i}\) as _rich_. Moreover, \(M\Leftarrow M\cup M_{v_{i}}\) and \(U\Leftarrow U_{T}\).
    4. If \(|A_{v_{i}}|<100k^{0.75}\), then label \(v_{i}\) as _poor_.
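The following is a minimal Python sketch of the Marking Procedure (our illustration, not code from the paper); `G` is assumed to be given as an adjacency dict, `C` as the ordered list of clique vertices, and `I_N` as the set of non-terminal independent-set vertices. The paper assumes \(k^{0.75}\) is integral; the sketch rounds up as a hedge.

```python
import math

def marking_procedure(G, C, I_N, k):
    """Sketch of the Marking Procedure: label clique vertices rich/poor and
    split I_N into marked (M) and unmarked (U) vertices."""
    t = math.ceil(k ** 0.75)            # threshold k^{3/4} (rounded up here)
    M, U = set(), set(I_N)
    label, M_v = {}, {}
    for i, v in enumerate(C):
        A_v, M_v[v], U_T = [], set(), set(U)
        for j, w in enumerate(C):
            if i == j or len(A_v) >= 100 * t:
                continue
            common = (G[v] & G[w]) & U_T
            if len(common) >= t:
                A_v.append(w)
                # Mark an arbitrary k^{3/4}-sized subset of the common
                # unmarked neighbors for the pair (v, w).
                chosen = set(list(common)[:t])
                M_v[v] |= chosen
                U_T -= chosen
        if len(A_v) >= 100 * t:
            label[v] = "rich"
            M |= M_v[v]
            U = U_T                     # commit the marks made for v
        else:
            label[v] = "poor"           # marks made for v are discarded
    return label, M, U
```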
This completes our Marking Procedure.

**Remark 34**. Note that the definition of _rich_ and _poor_ depends on the order in which our Marking Procedure picks and marks the vertices (i.e., being rich or poor is not an intrinsic property of the vertex itself). A different execution of the above procedure can label different vertices as rich and poor. Moreover, note that if \(M_{v,x}\) exists (i.e., \(v\) is rich and \(x\in A_{v}\)) and \(M_{x,v}\) exists (i.e., \(x\) is rich and \(v\in A_{x}\)), then \(M_{x,v}\cap M_{v,x}=\emptyset\).

We have the following observation regarding the vertices marked rich by an execution of Marking Procedure on an instance \((G,\mathcal{X},k)\) of EDP where \(G\) is a split graph.

**Observation 35**. _Consider an execution of Marking Procedure on an instance \((G,\mathcal{X},k)\) of EDP where \(G\) is a split graph. Then, for a rich vertex \(v\), \(|M_{v}|=100k^{1.5}\) (i.e., the number of vertices marked in \(I_{N}\) for \(v\) is \(100k^{1.5}\))._

Proof.: Notice that a vertex \(v\in C\) is rich if \(|A_{v}|=100k^{0.75}\). Moreover, for each vertex \(x\in A_{v}\), we mark exactly \(k^{0.75}\) previously unmarked vertices in \(I_{N}\) (denoted \(M_{v,x}\)). Hence, for any two distinct vertices \(x,y\in A_{v}\), \(M_{v,x}\cap M_{v,y}=\emptyset\). Therefore, \(|M_{v}|=\sum_{w\in A_{v}}|M_{v,w}|=100k^{0.75}\times k^{0.75}=100k^{1.5}\).

**Definition 36** (Reachable Vertices). _Consider an execution of Marking Procedure on an instance \((G,\mathcal{X},k)\) of EDP where \(G\) is a split graph. Moreover, let \(\mathcal{P}\) be a solution of \((G,\mathcal{X},k)\). Then, for a rich vertex \(v\in C\), let \(R_{v}\subseteq A_{v}\) (read reachable from \(v\)) denote the set of vertices such that for each vertex \(x\in R_{v}\), there is a vertex \(u\in M_{v,x}\) such that \(u\) is not used by any path in \(\mathcal{P}\)._

Notice that, in Definition 36, a path of the form \((v,u,x)\) is edge-disjoint from every path in \(\mathcal{P}\). Furthermore, we briefly remark that \(R_{v}\) is defined with respect to the execution of Marking Procedure and a solution \(\mathcal{P}\) of \((G,\mathcal{X},k)\), which we will always fix before we use \(R_{v}\).

Let \(\mathcal{P}\) be a solution to an instance \((G,\mathcal{X},k)\) of EDP. Informally speaking, in the following lemma, we show that if \(\mathcal{P}\) uses at most \(6k^{1.5}\) edges, then for a rich vertex \(v\), \(|R_{v}|=\Omega(k^{0.75})\) (i.e., there are "many reachable" vertices in \(A_{v}\) from \(v\) using paths that are edge-disjoint from every path in \(\mathcal{P}\)). In particular, we have the following lemma.

**Lemma 37**. _Consider an execution of Marking Procedure on an instance \((G,\mathcal{X},k)\) of EDP where \(G\) is a split graph. Moreover, let \(\mathcal{P}\) be a solution of \((G,\mathcal{X},k)\) (not necessarily minimum) such that the total number of edges used in \(\mathcal{P}\) is at most \(6k^{1.5}\). Then, for any rich vertex \(v\in C\), \(|R_{v}|\geq 94k^{0.75}\)._

Proof.: Note that in Marking Procedure, for each vertex \(x\in A_{v}\), we mark exactly \(k^{0.75}\) vertices (whose set is denoted by \(M_{v,x}\)). Since \(v\) is a rich vertex, due to Observation 35, \(|M_{v}|=100k^{1.5}\). Since the total number of edges used by \(\mathcal{P}\) is at most \(6k^{1.5}\), the total number of vertices of \(I\) used by \(\mathcal{P}\) can be at most \(6k^{1.5}\) as well. As \(M_{v}\subseteq I\), there are at least \(94k^{1.5}\) vertices in \(M_{v}\) that are not used by any path in \(\mathcal{P}\).
Targeting a contradiction, assume that \(|R_{v}|<94k^{0.75}\). By the definition of \(R_{v}\) (Definition 36), for every vertex \(y\in A_{v}\setminus R_{v}\), each vertex in \(M_{v,y}\) is used by some path in \(\mathcal{P}\). Since \(|M_{v,x}|=k^{0.75}\) for each \(x\in A_{v}\), the number of vertices in \(M_{v}\) that are not used by any path in \(\mathcal{P}\) is at most \(|R_{v}|\times k^{0.75}<94k^{0.75}\times k^{0.75}=94k^{1.5}\), a contradiction.

Next, we provide the following reduction rule.

**Reduction Rule 1** (RR1). _Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a split graph. Let \(U\) be the set of unmarked vertices we get after an execution of Marking Procedure on \((G,\mathcal{X},k)\). Moreover, let \(U^{\prime}\subseteq U\) be the set of vertices in \(U\) that do not have a poor neighbor. If \(U^{\prime}\neq\emptyset\), then \(G^{\prime}\Leftarrow G-U^{\prime}\) and \(\mathcal{X}^{\prime}\Leftarrow\mathcal{X}\)._

The following two lemmas (Lemmas 38 and 39) are essential to prove the safeness of RR1.

**Lemma 38**. _Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a split graph. Consider an execution of Marking Procedure on \((G,\mathcal{X},k)\). Moreover, let \(\mathcal{P}\) be a solution of \((G,\mathcal{X},k)\) (not necessarily minimum) such that the total number of edges used in \(\mathcal{P}\) is \(\ell\), where \(\ell\leq 6k^{1.5}\). Furthermore, let \(u\in U\) be an unmarked vertex such that \(u\) does not have any poor neighbor. If there is a path \(P\in\mathcal{P}\) such that \(u\in V(P)\), then there exists a solution \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\{P\})\cup\{P^{\prime}\}\) of \((G,\mathcal{X},k)\) such that \(u\notin V(P^{\prime})\), and the total number of edges in \(\mathcal{P}^{\prime}\) is at most \(\ell+3\)._

Proof.: First, note that \(u\) is not an endpoint of \(P\) since \(u\in I_{N}\). Let the immediate neighbors of \(u\) in \(P\) be \(v\) and \(w\) (i.e., \(P\) is of the form \((s,\ldots,v,u,w,\ldots,t)\)). Since \(u\) does not have any poor neighbor and \(u\in I\), note that both \(v\) and \(w\) are rich vertices. Therefore, due to Lemma 37 (and the fact that \(\ell\leq 6k^{1.5}\)), \(|R_{v}|\geq 94k^{0.75}\) and \(|R_{w}|\geq 94k^{0.75}\). See Figure 8 for an abstract illustration. Observe that both \(R_{v}\) and \(R_{w}\) are subsets of \(C\). Now, we have one of the following cases.

**Case 1: \(R_{v}\cap R_{w}\neq\emptyset\).** Let \(x\in R_{v}\cap R_{w}\). Now, due to the definition of \(R_{v}\) and \(R_{w}\) (Definition 36), there are vertices \(x^{\prime}\in M_{v,x}\) and \(y^{\prime}\in M_{w,x}\) such that neither \(x^{\prime}\) nor \(y^{\prime}\) is used by any path in \(\mathcal{P}\). So, the edges used in the path \(\widehat{P}=(v,x^{\prime},x,y^{\prime},w)\) are not used by any path in \(\mathcal{P}\) (as either \(x^{\prime}\) or \(y^{\prime}\) is an endpoint of each edge in \(\widehat{P}\)). Therefore, we obtain \(P^{\prime}\) by replacing the \((v,w)\)-subpath of \(P\) with the path \(\widehat{P}\). Observe that \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\{P\})\cup\{P^{\prime}\}\) is a solution of \((G,\mathcal{X},k)\), \(u\notin V(P^{\prime})\), and the total number of edges in \(\mathcal{P}^{\prime}\) is at most \(\ell+2\) (since the \((v,w)\)-subpath of \(P\) has two edges and \(\widehat{P}\) has four edges).

**Case 2: \(R_{v}\cap R_{w}=\emptyset\).** Recall that, due to Lemma 37, \(|R_{v}|\geq 94k^{0.75}\) and \(|R_{w}|\geq 94k^{0.75}\).
Since \(R_{v}\subseteq C\), \(R_{w}\subseteq C\), and \(R_{v}\cap R_{w}=\emptyset\), there are at least \(8836k^{1.5}\) edges of the form \(xy\) such that \(x\in R_{v}\) and \(y\in R_{w}\). Since \(\mathcal{P}\) uses at most \(6k^{1.5}\) edges, there is at least one edge \(xy\) that is not used by any path in \(\mathcal{P}\) such that \(x\in R_{v}\) and \(y\in R_{w}\). Moreover, due to the definition of \(R_{v}\) and \(R_{w}\) (Definition 36), there are vertices \(x^{\prime}\in M_{v,x}\) and \(y^{\prime}\in M_{w,y}\) such that neither \(x^{\prime}\) nor \(y^{\prime}\) is used by any path in \(\mathcal{P}\). So, the edges used in the path \(\widehat{P}=(v,x^{\prime},x,y,y^{\prime},w)\) are not used by any path in \(\mathcal{P}\). Therefore, we obtain \(P^{\prime}\) by replacing the \((v,w)\)-subpath of \(P\) with the path \(\widehat{P}\). Observe that \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\{P\})\cup\{P^{\prime}\}\) is a solution of \((G,\mathcal{X},k)\), \(u\notin V(P^{\prime})\), and the total number of edges in \(\mathcal{P}^{\prime}\) is at most \(\ell+3\) (since the \((v,w)\)-subpath of \(P\) has two edges and \(\widehat{P}\) has five edges). This completes our proof.

Figure 8: An abstract illustration of the notations used in the proof of Lemma 38.

**Lemma 39**. _Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a split graph. Let \(U\) be the set of unmarked vertices we get after an execution of Marking Procedure on \((G,\mathcal{X},k)\), and let \(u\in U\) be a vertex such that \(u\) does not have any poor neighbor. Let \(G^{\prime}=G-\{u\}\). Then, \((G,\mathcal{X},k)\) is a Yes-instance if and only if \((G^{\prime},\mathcal{X},k)\) is a Yes-instance._

Proof.: We claim that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X},k)\) are equivalent instances of EDP. In one direction, if \((G^{\prime},\mathcal{X},k)\) is a Yes-instance of EDP, then, because \(G^{\prime}\) is a subgraph of \(G\), \((G,\mathcal{X},k)\) is also a Yes-instance of EDP. In the other direction, suppose \((G,\mathcal{X},k)\) is a Yes-instance of EDP. Furthermore, let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). If \(u\) does not participate in any path in \(\mathcal{P}\), then note that \(\mathcal{P}\) is a solution to \((G^{\prime},\mathcal{X},k)\) as well. So, let \(u\) be a vertex in paths \(P_{1},\ldots,P_{r}\) in \(\mathcal{P}\). Observe that \(u\) is an internal vertex of all these paths (since \(u\in I_{N}\)) and \(u\) does not have any poor neighbor (due to the definition of \(u\)). Now, for each path \(P_{i}\) (\(i\in[r]\)), we will create a path \(P_{i}^{\prime}\) such that \((\mathcal{P}\setminus\{P_{1},\ldots,P_{r}\})\cup\{P_{1}^{\prime},\ldots,P_{r}^{\prime}\}\) is a solution to \((G^{\prime},\mathcal{X},k)\), and the endpoints of \(P_{i}^{\prime}\) are the same as those of \(P_{i}\). Since \(\mathcal{P}\) is a minimum solution, due to Corollary 33, \(\mathcal{P}\) uses at most \(5k^{1.5}\) edges. Therefore, due to Lemma 38, we can get a solution \(\mathcal{P}_{1}=(\mathcal{P}\setminus\{P_{1}\})\cup\{P_{1}^{\prime}\}\) of \((G,\mathcal{X},k)\) such that \(\mathcal{P}_{1}\) uses at most \(5k^{1.5}+3\) edges. Similarly, for \(2\leq i\leq r\), we can get a solution \(\mathcal{P}_{i}=(\mathcal{P}_{i-1}\setminus\{P_{i}\})\cup\{P_{i}^{\prime}\}\) of \((G,\mathcal{X},k)\) such that \(\mathcal{P}_{i}\) uses at most \(5k^{1.5}+3i\) edges (which is always less than \(6k^{1.5}\) since \(i\leq r\leq k\) and \(k\geq 9\)) due to Lemma 38.
Observe that no path in \(\mathcal{P}_{r}\) contains the vertex \(u\). Therefore, \(\mathcal{P}_{r}\) is a solution to \((G^{\prime},\mathcal{X},k)\) as well (as \(\mathcal{P}_{r}\) is a solution of \((G,\mathcal{X},k)\)). Hence, \((G^{\prime},\mathcal{X},k)\) is a Yes-instance.

We have the following lemma to prove the safeness of RR1.

**Lemma 40**. _RR1 is safe._

Proof.: By Remark 23 (in Section 2.1), note that \(G^{\prime}\) is a split graph. The rest of the proof follows from Lemma 39.

Next, we show that after an exhaustive application of RR1, we get a subcubic vertex kernel.

**Lemma 41**. _Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a split graph. If we cannot apply RR1 on \((G,\mathcal{X},k)\), then \(|V(G)|=\mathcal{O}(k^{2.75})\)._

Proof.: Consider an execution of Marking Procedure on \((G,\mathcal{X},k)\) to get \(U\), \(M\), and the rich and poor vertices. First, we count the number of marked vertices in \(I_{N}\), that is, \(|M|\). Note that we mark vertices in \(I_{N}\) only for rich vertices. Consider a rich vertex \(v\in C\). By Observation 35, \(|M_{v}|=100k^{1.5}\). Therefore, the total number of marked vertices in \(I_{N}\) is at most \(100|C|k^{1.5}\) (if each vertex in \(C\) is rich). Since \(|C|\leq k\) (due to Preprocessing Step), the total number of marked vertices in \(I_{N}\) is at most \(100k^{2.5}=\mathcal{O}(k^{2.5})\).

Second, we count the cardinality of the set \(U\) of unmarked vertices in \(I_{N}\). Since we cannot apply RR1, each vertex \(w\in U\) has at least one poor vertex as a neighbor. Therefore, \(|U|\leq\sum_{v:v\text{ is poor}}|N(v)\cap U|\). So, consider a poor vertex \(v_{i}\) in \(C\). We claim that \(|N(v_{i})\cap U|=\mathcal{O}(k^{1.75})\). Targeting a contradiction, assume that \(|N(v_{i})\cap U|>400k^{1.75}+100k^{0.75}\). Since any two vertices in \(C\) can have at most \(4k+1\) common neighbors in \(I_{N}\) (due to Preprocessing Step), there are at least \(100k^{0.75}\) vertices in \(A_{v_{i}}\) (since Preprocessing Step ensures that there are no pendant vertices in \(I_{N}\)), a contradiction to the fact that \(v_{i}\) is a poor vertex. Therefore, for each poor vertex \(v_{i}\) (in \(C\)), \(|N(v_{i})\cap U|=\mathcal{O}(k^{1.75})\). Since there are at most \(k\) poor vertices (as \(|C|\leq k\)), we have that \(|U|=\mathcal{O}(k^{2.75})\). Since \(|V(G)|=|C|+|I_{T}|+|U|+|M|\), we have that \(|V(G)|=\mathcal{O}(k^{2.75})\) (as \(|C|\leq k\) and \(|I_{T}|\leq 2k\)).

Now, we have the following observation.

**Observation 42**. _RR1, Marking Procedure, and Preprocessing Step can be executed in polynomial time. Moreover, our initial parameter \(k\) does not increase during the application of RR1._

Finally, due to Preprocessing Step, RR1, Observation 42, and Lemmas 40 and 41, we have the following theorem.

**Theorem 2**. _EDP on split graphs admits a kernel with \(\mathcal{O}(k^{2.75})\) vertices._

### A Linear Vertex Kernel for Threshold Graphs

Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a threshold graph. Note that given a threshold graph \(G\), we can compute (in linear time) a partition \((C,I)\) of \(V(G)\) such that \(C\) is a clique and \(I\) is an independent set [25]. If \(|C|\geq k+1\), then by Corollary 25, \((G,\mathcal{X},k)\) is a Yes-instance. So, without loss of generality, we assume that \(|C|\leq k\). Furthermore, let us partition the set \(I\) into two sets, say, \(I_{T}\) and \(I_{N}\), where \(I_{T}\) contains the terminal vertices and \(I_{N}\) contains the non-terminal vertices.
Since there are at most \(2k\) terminal vertices in \(G\), we have \(|I_{T}|\leq 2k\). If \(|I_{N}|\leq 4k+1\), then we have a kernel with at most \(7k+1\) vertices, and we are done. So, assume otherwise (i.e., \(|I_{N}|\geq 4k+2\)). By the definition of a threshold graph, there exists an ordering, say, \((v_{1},v_{2},\ldots,v_{|I_{N}|})\), of \(I_{N}\) such that \(N(v_{|I_{N}|})\subseteq N(v_{|I_{N}|-1})\subseteq\cdots\subseteq N(v_{1})\), which can be computed in linear time [27]. Let \(R=\{v_{4k+2},\ldots,v_{|I_{N}|}\}\). Note that, since \(|I_{N}|\geq 4k+2\), \(R\neq\emptyset\). Now, consider the following reduction rule.

**Reduction Rule 2** (RR2). _If \(R\neq\emptyset\), then \(G^{\prime}\Leftarrow G-R\), \(\mathcal{X}^{\prime}\Leftarrow\mathcal{X}\), and \(k^{\prime}\Leftarrow k\)._

We have the following lemma to establish that RR2 is safe.

**Lemma 43**. _RR2 is safe._

Proof.: First, note by Remark 23 that \(G^{\prime}\) is also a threshold graph. Now, we claim that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) are equivalent instances of EDP on threshold graphs. In one direction, if \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) is a Yes-instance of EDP, then, because \(G^{\prime}\) is a subgraph of \(G\), \((G,\mathcal{X},k)\) is also a Yes-instance of EDP. In the other direction, suppose \((G,\mathcal{X},k)\) is a Yes-instance of EDP. Furthermore, let \(\mathcal{P}\) be a solution for which \(\sum_{P\in\mathcal{P}}|V(P)\cap R|\) (i.e., the total number of visits by all the paths in \(\mathcal{P}\) to vertices in \(R\)) is minimum. We claim that no path in \(\mathcal{P}\) visits a vertex in \(R\). Targeting a contradiction, suppose that there exists a path, say, \(P\in\mathcal{P}\), such that \(P\) visits some vertex \(r\in R\). Since \(r\) is not a terminal vertex, there are two vertices, say, \(v,w\in C\), such that the edges \(vr\) and \(wr\) appear consecutively on the path \(P\). Since \(N_{G}(r)\subseteq N_{G}(x)\) for every \(x\in I_{N}\setminus R\), it is clear that every vertex in \(I_{N}\setminus R\) is adjacent to both \(v\) and \(w\). Now, we claim the following.

\(\rhd\) Claim 44. _For every \(x\in I_{N}\setminus R\), at least one of \(vx\) and \(wx\) belongs to some path in \(\mathcal{P}\). In particular, at least \(4k+1\) edges in the paths in \(\mathcal{P}\) are incident with \(v\) or \(w\)._

Proof.: Let us assume that there exists a vertex, say, \(y\in I_{N}\setminus R\), such that both the edges \(vy\) and \(wy\) are not used by any path in \(\mathcal{P}\). In such a situation, one can replace the vertex \(r\) with the vertex \(y\) in the path \(P\). This leads to a contradiction to the choice of \(\mathcal{P}\), since the resulting solution makes strictly fewer visits to vertices in \(R\). Thus, since \(|I_{N}\setminus R|=4k+1\), at least \(4k+1\) edges in the paths in \(\mathcal{P}\) are incident with \(v\) or \(w\).
### A Quadratic Vertex Kernel for Block Graphs In this section, we show that \(\operatorname{EDP}\) on block graphs admits a kernel with at most \(4k^{2}-2k\) vertices. Let us first discuss the overall idea leading us to this result. **Overview.** Let \((G,\mathcal{X},k)\) be an instance of \(\operatorname{EDP}\) where \(G\) is a block graph. First, we aim to construct a reduced instance where the number of blocks can be at most \(4k-2\). We begin by showing that if there is an end block that does not contain any terminal, then we can delete this block from the graph (in RR3), while preserving all solutions. Next, we argue that if there is a block, say, \(B\), with at most two cut vertices that do not contain any terminal, then we can either _contract_ (defined in Definition 49) \(B\) to a single vertex, or answer negatively (in RR5). Thus, each block with at most two cut vertices in the (reduced) graph contains at least one terminal. This bounds the number of blocks with at most two cut vertices to be at most \(2k\) (as \(k\) terminal pairs yield at most \(2k\) terminals). Next, we use the following folklore property of block graphs (For the sake of completeness, we give the proof also). Let \(\ell\) be the number of end blocks in a block graph \(G\). Then, the number of blocks with at least three cut vertices is at most \(\ell-2\). Proof.: Let \(\{B_{1},\ldots,B_{k}\}\) and \(\{f_{1},\ldots,f_{p}\}\) be the set of blocks and the set of cut-vertices, respectively, of the block graph \(G\). Then, the _cut-tree_ of \(G\) is the tree \(T_{G}=(V^{\prime},E^{\prime})\), where \(V^{\prime}=\{B_{1},\ldots,B_{k},f_{1},\ldots,f_{p}\}\) and \(E^{\prime}=\{(B_{i},f_{j})\mid f_{j}\in B_{i},1\leq i\leq k,1\leq j\leq p\}.\) Note that every pendant vertex in \(T_{G}\) corresponds to an end block of \(G\). Thus, the proof follows from the following property of trees [12]. In any tree, if \(D_{1}\) denotes the number of pendant vertices and \(D_{\geq 3}\) denotes the number of vertices of degree at least \(3\), then \(D_{\geq 3}\leq D_{1}-2\). Observation 46, along with the fact that the number of end blocks is at most \(2k\), establishes that the number of blocks with at least three cut vertices in the (reduced) graph is at most \(2k-2\). Therefore, we have at most \(4k-2\) blocks in the (reduced) graph. Finally, due to Lemma 24 and the properties of block graphs, we show that if a block, say, \(B\), is big enough (i.e., \(|V(B)|>k\)), then we can contract \(B\) to a single vertex while preserving the solutions (in RR4). Hence, each block contains at most \(k\) vertices, and thus, the total number of vertices in the (reduced) graph is at most \(4k^{2}-2k\). Now, we discuss our kernelization algorithm, which is based on the application of three reduction rules (RR3-RR5), discussed below, to an instance \((G,\mathcal{X},k)\) of \(\operatorname{EDP}\) where \(G\) is a block graph. If \(B\) is an end block of \(G\) with cut vertex \(v\) such that \(B\) does not contain any terminal, then \(G^{\prime}\Leftarrow G[V(G)\setminus(V(B)\setminus\{v\})]\) and \(\mathcal{X}^{\prime}\Leftarrow\mathcal{X}\). We have the following lemma to establish that RR3 is safe. **Remark 47**.: _RR3 is safe._ Proof.: First, observe that \(G^{\prime}\) is a block graph. Now, we claim that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k)\) are equivalent instances of EDP on block graphs. 
In one direction, if \((G^{\prime},\mathcal{X}^{\prime},k)\) is a Yes-instance of EDP, then, because \(G^{\prime}\) is a subgraph of \(G\), \((G,\mathcal{X},k)\) is also a Yes-instance of EDP. In the other direction, suppose \((G,\mathcal{X},k)\) is a Yes-instance of EDP, and let \(\mathcal{P}\) be a solution of \((G,\mathcal{X},k)\). Observe that no path between two vertices \(u_{1},u_{2}\in V(G)\setminus V(B)\) passes through a vertex \(u\in V(B)\setminus\{v\}\), as otherwise, the vertex \(v\) would appear at least twice in this path, which contradicts the definition of a path. Therefore, since \(B\) does not contain any terminal, no path in \(\mathcal{P}\) passes through a vertex of \(V(B)\setminus\{v\}\). As \(\mathcal{X}^{\prime}=\mathcal{X}\) and each path in \(\mathcal{P}\) is restricted to \(V(G^{\prime})\), observe that \(\mathcal{P}\) is also a solution of \((G^{\prime},\mathcal{X}^{\prime},k)\).

**Observation 48**. _RR3 can be applied in polynomial time._

The following definitions (Definitions 49 and 50) are crucial to proceed further in this section. Informally speaking, we contract a block \(B\) by replacing it with a (new) vertex \(v\) such that "edge relations" are preserved. We have the following formal definition.

**Definition 49** (Contraction of a Block). _Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a block graph, and let \(B\) be a block of \(G\). The contraction of \(B\) in \((G,\mathcal{X},k)\) yields another instance \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) of EDP as follows. First, \(V(G^{\prime})=(V(G)\setminus V(B))\cup\{v\}\) (i.e., delete \(V(B)\) and add a new vertex \(v\)). Moreover, define \(f:V(G)\to V(G^{\prime})\) such that \(f(x)=x\) if \(x\in V(G)\setminus V(B)\), and \(f(x)=v\) if \(x\in V(B)\). Second, \(E(G^{\prime})=\{f(x)f(y):xy\in E(G),\ f(x)\neq f(y)\}\). Similarly, \(\mathcal{X}^{\prime}=\{(f(s),f(t)):(s,t)\in\mathcal{X},\ f(s)\neq f(t)\}\). Finally, let \(k^{\prime}=|\mathcal{X}^{\prime}|\). Note that \(k^{\prime}\) might be smaller than \(k\) (in case \(B\) contains a terminal pair). Moreover, \(\bigcup_{u\in V(B)}(N_{G}(u)\setminus B)\subseteq N_{G^{\prime}}(v)\). See Figure 9 for an illustration._

Now, we will exploit the properties of block graphs to show that if a block \(B\) has at least \(k+1\) vertices, then we can contract \(B\) to a single vertex "safely". For this purpose, we define the instance \((G,\mathcal{X},k)\) of EDP restricted to a block \(B\) of the (connected) block graph \(G\) as follows.

**Definition 50** (Restriction of an Instance \((G,\mathcal{X},k)\) to a Block). _Consider a block \(B\) whose set of cut vertices is \(U\). For each cut vertex \(u\in U\), let \(C_{B,u}\) denote the (connected) component of \(G[V(G)\setminus(V(B)\setminus\{u\})]\) containing \(u\). Now, define \(h:V(G)\to V(B)\) such that \(h(x)=x\) if \(x\in V(B)\), and \(h(x)=u\) if \(x\in V(C_{B,u})\). Then, the restriction of \((G,\mathcal{X},k)\) to \(B\), denoted by \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\), is defined as follows: \(\mathcal{X}_{B}=\{(h(s),h(t)):(s,t)\in\mathcal{X},\ h(s)\neq h(t)\}\). See Figure 10 for an illustration._

We have the following trivial observation that we will use for our proofs.

**Observation 51**. _Let \(\mathcal{P}\) be a set of edge-disjoint paths. Consider a set of paths \(\mathcal{P}^{*}\) constructed in the following manner. For each path \(P\in\mathcal{P}\), add a path \(P^{*}\) to \(\mathcal{P}^{*}\) such that \(E(P^{*})\subseteq E(P)\)._
_Then, \(\mathcal{P}^{*}\) is also a set of edge-disjoint paths._

We have the following lemma.

**Lemma 52**. _Let \((G,\mathcal{X},k)\) be an instance of EDP on a block graph \(G\), and let \(B\) be a block of \(G\). If \((G,\mathcal{X},k)\) is a Yes-instance, then \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance._

Proof.: Let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). Due to the definition of \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) (Definition 50) and Observation 51, it is sufficient to show that, for each occurrence of a terminal pair \((s,t)\in\mathcal{X}\) with corresponding path \(P\in\mathcal{P}\), if \(h(s)\neq h(t)\), then \(P\) contains an \((h(s),h(t))\)-subpath, say, \(P^{*}\). Recall that \(U\) is the set of cut vertices of \(B\). Since \(h(s)\neq h(t)\), we have the following cases:

**Case 1: \(s,t\in V(B)\).** Note that in this case \(h(s)=s\) and \(h(t)=t\). Therefore, \(P^{*}=P\).

**Case 2: \(s\in V(C_{B,u})\) and \(t\in V(C_{B,w})\) for distinct \(u,w\in U\).** Note that in this case, every \((s,t)\)-path has to pass through the vertices \(u\) and \(w\), as each of them is a cut vertex separating \(s\) from \(t\). Moreover, \(h(s)=u\) and \(h(t)=w\). Therefore, the subpath of \(P\) with endpoints \(u\) and \(w\) is \(P^{*}\).

**Case 3: \(s\in V(C_{B,u})\) and \(t\in V(B)\setminus\{u\}\) for some \(u\in U\).** Note that in this case, every \((s,t)\)-path has to pass through the vertex \(u\), as it is the cut vertex common to \(C_{B,u}\) and \(B\). Moreover, since \(h(s)=u\) and \(h(t)=t\), the subpath of \(P\) with endpoints \(u\) and \(t\) is \(P^{*}\).

**Case 4: \(s\in V(B)\setminus\{u\}\) and \(t\in V(C_{B,u})\) for some \(u\in U\).** This case is symmetric to Case 3.

This establishes that \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance, and, thus, completes our proof.

Next, we will show that if for a block \(B\), \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance, then we can contract \(B\) "safely". In particular, we have the following lemma.

**Lemma 53**. _Let \((G,\mathcal{X},k)\) be an instance of EDP on a block graph \(G\), and let \(B\) be a block of \(G\) whose set of cut vertices is \(U\). Moreover, let the contraction of \(B\) in \((G,\mathcal{X},k)\) yield \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\). Given that \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance, \((G,\mathcal{X},k)\) is a Yes-instance if and only if \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) is a Yes-instance._

Proof.: Let \(\mathcal{P}_{B}\) be a solution of \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\). In one direction, let \((G,\mathcal{X},k)\) be a Yes-instance, and let \(\mathcal{P}\) be a solution of \((G,\mathcal{X},k)\). Then, for an occurrence of a terminal pair \((s,t)\in\mathcal{X}\) such that \(f(s)\neq f(t)\), let \(P\) be the \((s,t)\)-path in \(\mathcal{P}\) corresponding to it. We construct the path \(P^{\prime}\) in \(G^{\prime}\) with endpoints \(f(s)\) and \(f(t)\) in the following manner, and add it to \(\mathcal{P}^{\prime}\). If \(P\) does not contain any vertex of \(B\), then let \(P^{\prime}=P\). Else, let \(x,y\in V(B)\) be the first and the last vertices of \(B\) to appear in \(P\) while traversing \(P\) from \(s\) to \(t\). Note that \(x\) and \(y\) might be the same vertex (when \(s,t\in C_{B,u}\), for some \(u\in U\)). Then, we replace the \((x,y)\)-subpath of \(P\) with the vertex \(v\) to get the path \(P^{\prime}\).
Note that this is always possible since \(N_{G}(U)\setminus V(B)\subseteq N_{G^{\prime}}(v)\) (see Definition 49). Now, observe that the set of edges used in \(P^{\prime}\) other than the edges with \(v\) as an endpoint is a subset of the edges used in \(P\). Therefore, due to Observation 51, if two paths, say, \(P_{1}^{\prime}\) and \(P_{2}^{\prime}\) in \(\mathcal{P}^{\prime}\), share an edge, say, \(e\), then \(v\) is an endpoint of \(e\). Targeting a contradiction, assume that \(P_{1}^{\prime}\) and \(P_{2}^{\prime}\) (obtained from \(P_{1}\) and \(P_{2}\) as discussed above) share an edge \(e=vw\). Now, note that \(v\) corresponds to cut vertices, say, \(x\) and \(y\), in \(V(P_{1})\) and \(V(P_{2})\), respectively. Therefore, there is an edge \(xw\in E(P_{1})\) and an edge \(yw\in E(P_{2})\). Clearly, if \(x=y\), then \(P_{1}\) and \(P_{2}\) are not edge-disjoint, a contradiction. If \(x\neq y\), then due to the definition of a block graph, \(x\) and \(y\) cannot both be simultaneously adjacent to a vertex \(u\notin V(B)\), another contradiction. Thus, \(\mathcal{P}^{\prime}\) is a solution to EDP on \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\).

In the other direction, let \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) be a Yes-instance, and let \(\mathcal{P}^{\prime}\) be a solution of \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\). Then, for each occurrence of a terminal pair \((s,t)\in\mathcal{X}\), we provide a path \(P\) in the following manner. First, if \(f(s)\neq f(t)\) (respectively, \(h(s)\neq h(t)\)), then let \(P^{\prime}\in\mathcal{P}^{\prime}\) (respectively, \(P_{B}\in\mathcal{P}_{B}\)) be the path associated with the occurrence of the terminal pair \((f(s),f(t))\in\mathcal{X}^{\prime}\) (respectively, \((h(s),h(t))\in\mathcal{X}_{B}\)) corresponding to \((s,t)\). Now, we have the following cases.

**Case 1: \(s,t\in V(B)\).** Then, let \(P=P_{B}\). Note that \((h(s),h(t))\in\mathcal{X}_{B}\) since \(h(s)\neq h(t)\).

**Case 2: \(s,t\in C_{B,u}\), for some \(u\in U\).** Here, note that any \((s,t)\)-path cannot contain any vertex of \(V(B)\setminus\{u\}\). So, let \(P\) be the path \(P^{\prime}\) with the only change that if \(v\in V(P^{\prime})\), then we replace it with \(u\). Also, note that \((f(s),f(t))\in\mathcal{X}^{\prime}\) since \(f(s)\neq f(t)\).

**Case 3: \(s\in C_{B,u}\) and \(t\in C_{B,w}\), for distinct \(u,w\in U\).** In this case, observe that \(v\in V(P^{\prime})\). Let \(P^{\prime}\) be of the form \((s,\ldots,x,v,y,\ldots,t)\). We obtain \(P\) by replacing \(v\) with \(P_{B}\), that is, \(P=(s,\ldots,x,u,\ldots,w,y,\ldots,t)\).

**Case 4: \(s\in C_{B,u}\) and \(t\in V(B)\setminus\{u\}\), for some \(u\in U\).** Similarly to Case 3, we replace the vertex \(v\) in the path \(P^{\prime}\) with the path \(P_{B}\) to obtain \(P\).

**Case 5: \(s\in V(B)\setminus\{u\}\) and \(t\in C_{B,u}\), for some \(u\in U\).** This case is symmetric to Case 4.

Finally, we obtain a path \(P\) for every terminal pair \((s,t)\in\mathcal{X}\). Moreover, since for any path \(P^{\prime}\in\mathcal{P}^{\prime}\) and \(P_{B}\in\mathcal{P}_{B}\), \(E(P^{\prime})\cap E(P_{B})=\emptyset\), it is easy to see that the paths in \(\mathcal{P}\) (where \(\mathcal{P}\) is the set of paths obtained as discussed above in Cases 1-5) are edge-disjoint, and therefore \((G,\mathcal{X},k)\) is a Yes-instance.

Next, we have the following reduction rule.
**Reduction Rule 4** (RR4). _If \(G\) has a block \(B\) such that \(|V(B)|>k\), then contract \(B\) in \((G,\mathcal{X},k)\) to get \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\)._

The safeness of RR4 is implied by Lemma 52 and Lemma 53. In particular, we have the following corollary.

**Corollary 54**. _RR4 is safe._

**Observation 55**. _RR4 can be applied in polynomial time._

Finally, we have the following reduction rule.

**Reduction Rule 5** (RR5). _Let \(B\) be a block of \(G\) that has exactly two cut vertices, say, \(u\) and \(w\), and there is no terminal vertex in \(V(B)\setminus\{u,w\}\). Consider the instance \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) restricted to the block \(B\). If \(|V(B)|>|\mathcal{X}_{B}|\), then contract \(B\) in \((G,\mathcal{X},k)\) to get the instance \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\). Else, answer negatively._

We have the following lemma to establish that RR5 is safe.

**Lemma 56**. _RR5 is safe._

Proof.: First, observe that \(G^{\prime}\) is a block graph. Lemma 52 implies that if \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a No-instance, then \((G,\mathcal{X},k)\) is a No-instance. Moreover, Lemma 53 establishes that if \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance, then we can contract \(B\) in \((G,\mathcal{X},k)\). Therefore, to prove our lemma, it is sufficient to show that \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance if and only if \(|V(B)|>|\mathcal{X}_{B}|\). Let \(r=|\mathcal{X}_{B}|\). Then, note that in \(\mathcal{X}_{B}\), we have \(r\) many terminal pairs, and each terminal pair has terminals \(u\) and \(w\).

In one direction, let \(|V(B)|\leq r\). Then, note that the degree of both \(u\) and \(w\) in \(B\) is at most \(r-1\), and therefore, there cannot be \(r\) edge-disjoint paths with terminals \(u\) and \(w\) in \(B\). Hence, \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a No-instance.

In the other direction, let \(|V(B)|>r\). We provide \(r\) edge-disjoint paths between \(u\) and \(w\) in the following manner. One path, say, \(P_{r}\), consists only of the edge \(uw\) (i.e., \(P_{r}=P_{uw}\)). To get \(r-1\) additional edge-disjoint paths, we select \(r-1\) vertices, say, \(u_{1},\ldots,u_{r-1}\), from \(V(B)\setminus\{u,w\}\). Now, consider the path \(P_{i}\), for \(i\in[r-1]\), as \((u,u_{i},w)\). Observe that the paths \(P_{1},\ldots,P_{r}\) are indeed edge-disjoint. Hence, \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance.

**Observation 57**. _RR5 can be applied in polynomial time._

Next, we establish that once we cannot apply RR3-RR5 anymore, the size of the reduced graph is bounded by a quadratic function of \(k\). In particular, we have the following lemma.

**Lemma 58**. _Let \((G,\mathcal{X},k)\) be an instance of EDP where we cannot apply reduction rules RR3-RR5 anymore. Then, \(G\) contains at most \(4k-2\) blocks and at most \(4k^{2}-2k\) vertices._

Proof.: First, we prove that the number of blocks in \(G\) is at most \(4k-2\). Observe that each end block contains at least one terminal (due to RR3), and also, each block with two cut vertices contains at least one terminal (due to RR5). Therefore, the number of blocks with at most two cut vertices is at most \(2k\). Hence, due to Observation 46, the number of blocks in \(G\) with at least three cut vertices is at most \(2k-2\). Thus, the total number of blocks in \(G\) is at most \(4k-2\). Since each block contains at most \(k\) vertices (due to RR4), the total number of vertices in \(G\) is at most \(4k^{2}-2k\).
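To make RR3 concrete, here is a minimal sketch (our illustration, not code from the paper) of one exhaustive application of the rule, using `networkx` (an assumed dependency); blocks and cut vertices are obtained from biconnected components and articulation points.

```python
import networkx as nx

def apply_rr3_exhaustively(G, terminals):
    """Repeatedly delete terminal-free end blocks (RR3).
    G: nx.Graph (assumed to be a connected block graph);
    terminals: set of all terminal vertices appearing in the pairs."""
    changed = True
    while changed:
        changed = False
        cut_vertices = set(nx.articulation_points(G))
        for block in list(nx.biconnected_components(G)):
            cuts_in_block = block & cut_vertices
            # An end block contains exactly one cut vertex.
            if len(cuts_in_block) == 1 and not (block & terminals):
                (v,) = cuts_in_block
                G.remove_nodes_from(block - {v})   # keep the cut vertex
                changed = True
                break   # recompute blocks/cut vertices after a deletion
    return G
```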
Note that throughout this section, our initial parameter \(k\) does not increase during the application of reduction rules RR3-RR5. Moreover, observe that RR3-RR5 can be implemented in polynomial time. Therefore, due to Lemmas 47, 56, 58, and Corollary 54, we have the following theorem.

_EDP on block graphs admits a kernel with at most \(4k^{2}-2k\) vertices._

### A Linear Vertex Kernel for Clique Paths

A _clique path_ is a block graph where each block has at most two cut vertices. (Informally, we can think of the blocks as being arranged in the form of a path.) In this section, we present a linear vertex kernel for EDP on clique paths. First, we present an overview of the overall idea leading to this result.

**Overview.** Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a clique path. Our kernelization algorithm is based on the application of RR3-RR5 (defined in Section 3.3) along with three new reduction rules (RR6-RR8). In our kernel for block graphs in Section 3.3, we established that for a block \(B\), if \(|V(B)|\geq k+1\), then we can contract \(B\) (see RR4). Moreover, we showed that the total number of blocks in a reduced instance can be at most \(4k-2\), thus giving us an \(\mathcal{O}(k^{2})\) vertex kernel. Here, we use the property of clique paths that each block can have at most two cut vertices to improve the kernel size. Since there is no block with more than two cut vertices, each block must contain a terminal after an exhaustive application of RR3-RR5.

Let \(B\) be a block of \(G\) with cut vertices \(u\) and \(w\). Consider the instance \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\), that is, the instance \((G,\mathcal{X},k)\) restricted to the block \(B\) (see Definition 50). Any terminal pair \((s,t)\in\mathcal{X}_{B}\) is of one of the following types:

* Type-A: \(s,t\in\{u,w\}\).
* Type-B: \(s=u\) and \(t\in V(B)\setminus\{u,w\}\), or vice versa.
* Type-C: \(s=w\) and \(t\in V(B)\setminus\{u,w\}\), or vice versa.
* Type-D: \(s,t\in V(B)\setminus\{u,w\}\).

Let \(a\), \(b\), \(c\), and \(d\) denote the number of occurrences of Type-A, Type-B, Type-C, and Type-D terminal pairs in \(\mathcal{X}_{B}\), respectively. We show that if \(|V(B)|>d+2+\max\{b+c-1,0\}\), then we can either contract \(B\) to a single vertex "safely" (when \(|V(B)|>\max\{a+b,a+c\}\)) or report a No-instance. Summing these numbers over all blocks yields an upper bound on the total size of the reduced instance. The Type-A pairs are irrelevant now, each pair of Type-B or Type-C contributes to two blocks, while each Type-D pair appears in only a single block. By grouping the summands in an appropriate way, we are able to eventually attain a bound of \(2k+1\). We have the following reduction rules.

**Reduction Rule 6** (RR6). _Let \((G,\mathcal{X},k)\) be an instance of EDP where \(G\) is a clique path. Moreover, let \(B\) be a block of \(G\) such that \(B\) has two cut vertices, say, \(u\) and \(w\). Consider the instance \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\). If \(|V(B)|\leq\max\{a+b,a+c\}\), then report a No-instance._

We have the following lemma to prove that RR6 is safe.

**Lemma 59**. _RR6 is safe._

Proof.: Due to Lemma 52, we know that if \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a No-instance, then \((G,\mathcal{X},k)\) is a No-instance. Therefore, it is sufficient to show that if \(|V(B)|\leq\max\{a+b,a+c\}\), then \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a No-instance. Without loss of generality, assume that \(b\geq c\).
Observe that the vertex \(u\) appears in \(a+b\) occurrences of terminal pairs (in \(\mathcal{X}_{B}\)). If \(|V(B)|\leq a+b\), then \(d_{B}(u)\leq a+b-1\), and therefore, \(u\) cannot be a part of \(a+b\) edge-disjoint paths. Thus, \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a No-instance, and so is \((G,\mathcal{X},k)\).

Next, we have the following reduction rule.

**Reduction Rule 7** (RR7).: _Let \((G,\mathcal{X},k)\) be an instance of \(\mathrm{EDP}\) where \(G\) is a clique path. Moreover, let \(B\) be a block of \(G\) that has two cut vertices, say, \(u\) and \(w\). Consider the instance \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\). If \(|V(B)|>d+2+\max\{b+c-1,0\}\), then contract \(B\) in \((G,\mathcal{X},k)\) to obtain the instance \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\)._

We have the following lemma to establish that RR7 is safe.

**Lemma 60**.: _RR7 is safe._

Proof.: We first remark that \(G^{\prime}\) is also a clique path. Next, we show that if \(|V(B)|>d+2+\max\{b+c-1,0\}\), then \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance, and therefore, due to Lemma 53, we can safely contract \(B\) in \((G,\mathcal{X},k)\) to get \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\). If \(a=0\), then the claim follows directly due to Lemma 24. Hence, we assume that \(a>0\). Since RR6 is not applicable, we know that \(|V(B)|>\max\{a+b,a+c\}\). To prove that \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance, we consider the following two cases.

**Case 1: \(b+c=0\).** Let \(Y\) denote the set of vertices in \(V(B)\setminus\{u,w\}\). Note that \(|Y|\geq d+1\) (since \(|V(B)|\geq d+3\)). Moreover, recall that \(|V(B)|\geq a+1\) (since \(\max\{b,c\}=0\) and RR6 is not applicable), and hence, \(|Y|\geq a-1\). Now, we assign edge-disjoint paths to the terminal pairs in the following manner.

1. First, we assign edge-disjoint paths to each occurrence of Type-A terminal pairs. Note that there are \(a\) many such occurrences of terminal pairs, and each pair has terminals \(u\) and \(w\). Let \(P^{A}_{a}=P_{uw}\) (i.e., \(P^{A}_{a}\) consists only of a single edge \(uw\)). Next, let \(y_{1},\ldots,y_{|Y|}\) be an ordering of vertices in \(Y\). Recall that \(|Y|\geq a-1\). Now, for \(1\leq i\leq a-1\), consider the path \(P^{A}_{i}=(u,y_{i},w)\). Observe that \(P^{A}_{1},\ldots,P^{A}_{a}\) are indeed edge-disjoint.
2. Finally, we assign edge-disjoint paths to each occurrence of Type-D terminal pairs. Note that so far, no path has used any edge \(xy\) such that \(\{x,y\}\subseteq Y\). Moreover, each terminal of a Type-D terminal pair belongs to \(Y\), the number of Type-D terminal pairs is \(d\), \(Y\) induces a clique, and \(|Y|\geq d+1\). Therefore, due to Lemma 24, all Type-D terminal pairs can be assigned paths only using the edges with both endpoints in \(Y\). Since we have not used any such edge in a path of the form \(P^{A}_{j}\) (for \(j\in[a]\)), these paths are edge-disjoint from \(P^{A}_{1},\ldots,P^{A}_{a}\).

Therefore, \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance.

**Case 2: \(b+c>0\).** Without loss of generality, we assume that \(b\geq c\). Since \(|V(B)|>a+b\) (and hence, \(|V(B)\setminus\{u,w\}|\geq a+b-1\)), let the vertex set \(V(B)\setminus\{u,w\}\) be partitioned into two subsets \(X\) and \(Y\) such that (1) \(|X|=b\) and \(X\) contains at least one terminal vertex of Type-B, (2) \(|Y|\geq a-1\). Also, note that by the assumption, we have \(|V(B)|\geq b+c+d+2\). Now, we assign edge-disjoint paths to terminal pairs in the following manner.
First, we assign edge-disjoint paths to each occurrence of Type-A terminal pairs. Recall that each Type-A terminal pair has terminals \(u\) and \(w\). Let \(P_{a}^{A}=P_{uw}\) (i.e., \(P_{a}^{A}\) consists only of a single edge \(uw\)). Next, let \(y_{1},\ldots,y_{|Y|}\) be an ordering of vertices in \(Y\). Recall that \(|Y|\geq a-1\). Now, for \(1\leq i\leq a-1\), consider the path \(P_{i}^{A}=(u,y_{i},w)\). Observe that \(P_{1}^{A},\ldots,P_{a}^{A}\) are indeed edge-disjoint.

Next, we assign a path to one occurrence of a Type-B terminal pair. Recall that \(X\) contains at least one terminal vertex of a Type-B terminal pair, say \(\chi\in\mathcal{X}_{B}\), and let \(x_{1}\in X\) be that vertex. Let \(P_{1}^{B}=P_{ux_{1}}\) (i.e., \(P_{1}^{B}\) consists only of a single edge \(ux_{1}\)). Note that \(P_{1}^{B}\) is indeed edge-disjoint from every path in \(\{P_{1}^{A},\ldots,P_{a}^{A}\}\), and up to this point, we have not used any edge \(xy\) such that \(\{x,y\}\subseteq X\cup Y\).

Now, we assign the rest of the paths in the following manner. Consider the graph \(H\) obtained after removing all the edges used by the paths \(P_{1}^{A},\ldots,P_{a}^{A}\) and \(P_{1}^{B}\). Notice that \(H\) is a split graph where \(X\cup Y\) induces a clique and \(\{u,w\}\) induces an independent set. Moreover, observe that \(d_{H}(u)\geq b-1\) and \(d_{H}(w)\geq c\). Now, let \(\mathcal{X}_{B}^{\prime}\) be the multiset of terminal pairs obtained after removing all Type-A terminal pairs along with the terminal pair \(\chi\) from \(\mathcal{X}_{B}\). Note that these are exactly the terminal pairs in \(\mathcal{X}_{B}\) that are not provided a path yet. Since \(|\mathcal{X}_{B}^{\prime}|\leq b+c+d-1\) and \(|X\cup Y|\geq b+c+d\), due to Corollary 25, we have that \((H,\mathcal{X}_{B}^{\prime},|\mathcal{X}_{B}^{\prime}|)\) is a Yes-instance, and thus, there exists an edge-disjoint path between each terminal pair in \(\mathcal{X}_{B}^{\prime}\). Since these paths only use edges present in \(H\), notice that all these paths are indeed edge-disjoint from the paths \(P_{1}^{A},\ldots,P_{a}^{A}\) and \(P_{1}^{B}\) (due to the construction of \(H\)). Hence, \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance. This completes our proof.

**Observation 61**.: _RR7 can be applied in polynomial time._

In RR7, we showed that in a reduced instance, the size of each block with two cut vertices is bounded by a linear function of the number of times its vertices appear in some occurrences of terminal pairs. In the next reduction rule (RR8), we consider the end blocks (i.e., blocks with exactly one cut vertex). So, consider an end block \(B\) of \(G\) with cut vertex \(u\), and let \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) be the restriction of \((G,\mathcal{X},k)\) to \(B\). Any terminal pair \((s,t)\in\mathcal{X}_{B}\) is of one of the following types:

* Type-B\({}^{\prime}\): \(s=u\) and \(t\in V(B)\setminus\{u\}\), or vice versa.
* Type-D\({}^{\prime}\): \(s,t\in V(B)\setminus\{u\}\).

Let \(b^{\prime}\) and \(d^{\prime}\) denote the number of occurrences of Type-B\({}^{\prime}\) and Type-D\({}^{\prime}\) terminal pairs in \(\mathcal{X}_{B}\), respectively. We have the following reduction rule.

**Reduction Rule 8** (RR8).: _Let \((G,\mathcal{X},k)\) be an instance of \(\mathrm{EDP}\) where \(G\) is a clique path. Let \(B\) be an end block of \(G\) with cut vertex \(u\). Consider the instance \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\).
If \(|V(B)|>b^{\prime}+d^{\prime}\), then contract \(B\) in \((G,\mathcal{X},k)\) to obtain \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\)._

We have the following lemma to prove that RR8 is safe.

**Lemma 62**.: _RR8 is safe._

Proof.: We first remark that \(G^{\prime}\) is a clique path. Since \(B\) is a clique with at least \(|\mathcal{X}_{B}|+1\) (here, \(|\mathcal{X}_{B}|=b^{\prime}+d^{\prime}\)) vertices, due to Lemma 24, we have that \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) is a Yes-instance. Therefore, due to Lemma 53, we can safely contract \(B\) in \((G,\mathcal{X},k)\).

Next, we establish that once we cannot apply RR3-RR8, the number of vertices of the reduced graph is bounded by a linear function of \(k\). In particular, we have the following lemma.

**Lemma 63**.: _Let \((G,\mathcal{X},k)\) be an instance of \(\mathrm{EDP}\) where \(G\) is a clique path. If we cannot apply RR3-RR8 on this instance, then \(G\) contains at most \(2k+1\) vertices._

Proof.: Let \(B_{1},\ldots,B_{t}\) be the blocks in \(G\). We say that a block \(B\) is a \(D\)-_block_ if the instance \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\) contains neither Type-B nor Type-C terminal pairs (i.e., \(b+c=0\)), and otherwise, we say that \(B\) is a \(C\)-_block_. Let \(\mathcal{D}\) and \(\mathcal{C}\) denote the set of \(D\)-blocks and \(C\)-blocks of \(G\), respectively. Note that \(|V(G)|=\sum_{B\in\mathcal{D}}|V(B)|+\sum_{B\in\mathcal{C}}|V(B)|-(t-1)\) because each cut vertex is counted in two blocks and there are \(t-1\) cut vertices. For the ease of presentation of calculations, for a block \(B\), let \(B^{b},B^{c},B^{d}\) denote the number of Type-B, Type-C, and Type-D terminal pairs in \((B,\mathcal{X}_{B},|\mathcal{X}_{B}|)\), respectively. Due to RR7 and RR8, each block \(B\in\mathcal{D}\) has at most \(B^{d}+2\) vertices and each block \(B\in\mathcal{C}\) has at most \(B^{d}+B^{b}+B^{c}+1\) vertices. Therefore,

\[\begin{split}|V(G)|&\leq\sum_{B\in\mathcal{D}}(B^{d}+2)+\sum_{B\in\mathcal{C}}(B^{b}+B^{c}+B^{d}+1)-t+1\\ &=\sum_{B\in\mathcal{D}}(B^{d}+1)+|\mathcal{D}|+\sum_{B\in\mathcal{C}}(B^{b}+B^{c}+B^{d}+1)-t+1\\ &\leq|\mathcal{D}|-t+1+\sum_{B\in\mathcal{C}\cup\mathcal{D}}(B^{b}+B^{c}+B^{d}+1)\\ &=|\mathcal{D}|+1+\sum_{B\in\mathcal{C}\cup\mathcal{D}}(B^{b}+B^{c}+B^{d}).\end{split}\]

Now, let \(k_{1}\) be the number of terminal pairs in \(G\) such that both terminals of the terminal pair lie in the same block of \(G\), possibly on the cut vertices of the block. Then, observe that \(\sum_{B\in\mathcal{C}\cup\mathcal{D}}B^{d}\leq k_{1}\). Thus, the total number of terminal pairs such that the endpoints lie in different blocks is \(k-k_{1}\). Now, consider a terminal pair whose endpoints lie in different blocks. Then, this terminal pair will behave as a Type-B terminal pair for at most one block in \(G\) and as a Type-C terminal pair for at most one other block in \(G\). Thus, \(\sum_{B\in\mathcal{C}\cup\mathcal{D}}(B^{b}+B^{c})\leq 2(k-k_{1})\). Therefore, \(|V(G)|\leq|\mathcal{D}|+1+k_{1}+2(k-k_{1})=|\mathcal{D}|+1+2k-k_{1}\). Finally, observe that \(|\mathcal{D}|\leq k_{1}\) (since each block contains at least one terminal, due to RR5 and RR8). Therefore, we have that \(|V(G)|\leq 2k+1\).

Now, we have the following observation.

**Observation 64**.: _RR6-RR8 can be applied in polynomial time. Moreover, during the application of RR6-RR8, we never increase the initial \(k\)._

Finally, due to RR3-RR8, along with Lemmas 47, 56, 60, 62, and 63, Corollary 54, and Observation 64, we have the following theorem.
**Theorem**.: _\(\mathrm{EDP}\) on clique paths admits a kernel with at most \(2k+1\) vertices._

## 4 NP-hardness for Complete Graphs

In this section, we prove that EDP is NP-hard on complete graphs by giving a polynomial-time reduction from EDP on general graphs, which is known to be NP-hard [35]. Our reduction is based on the standard technique of adding missing edges and placing a terminal pair on the endpoints of the added edge. This technique was also used to prove the NP-hardness of EDP for split graphs [26].

**Theorem**.: _\(\mathrm{EDP}\) is NP-hard on complete graphs._

Proof.: Let \((G,\mathcal{X},k)\) be an instance of EDP, where \(\mathcal{X}=\{(s_{1},t_{1}),\ldots,(s_{k},t_{k})\}\). Define the graph \(G^{\prime}\) as follows: Let \(V(G^{\prime})=V(G)\) and \(E(G^{\prime})=E(G)\cup\{uv:uv\notin E(G)\}\). Furthermore, let \(\mathcal{T}=\{(s_{uv},t_{uv}):uv\in E(G^{\prime})\setminus E(G),s_{uv}=u,t_{uv}=v\}\). Note that for ease of notation, we denote the terminal pairs in \(\mathcal{T}\) by \((s_{uv},t_{uv})\) rather than \((u,v)\). Also, for an edge \(uv\in E(G^{\prime})\setminus E(G)\), we introduce either \((s_{uv},t_{uv})\) or \((s_{vu},t_{vu})\) in \(\mathcal{T}\), not both. Let \(\mathcal{X}^{\prime}=\mathcal{X}\cup\mathcal{T}\). Now, we claim that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) are equivalent instances of EDP, where \(k^{\prime}=k+|\mathcal{T}|\). Let \(\mathcal{P}_{\mathcal{T}}=\{P_{uv}:(s_{uv},t_{uv})\in\mathcal{T}\}\).

In one direction, let \(\mathcal{P}=\{P_{1},\ldots,P_{k}\}\) be a solution of \((G,\mathcal{X},k)\). Since for every \((s_{uv},t_{uv})\in\mathcal{T}\), the edge \(uv\) does not exist in \(G\) and hence does not belong to any path in \(\mathcal{P}\), \(\mathcal{P}\cup\mathcal{P}_{\mathcal{T}}\) is a set of edge-disjoint paths in \(G^{\prime}\). As \(\mathcal{X}^{\prime}=\mathcal{X}\cup\mathcal{T}\), \(\mathcal{P}\cup\mathcal{P}_{\mathcal{T}}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\).

In the other direction, let \(\mathcal{P}^{\prime}\) be a solution of \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) that contains as many paths from \(\mathcal{P}_{\mathcal{T}}\) as possible. Next, we claim that \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}^{\prime}\). Toward a contradiction, suppose that there exists a terminal pair, say, \((s_{uv},t_{uv})\in\mathcal{T}\), such that \(P_{uv}\notin\mathcal{P}^{\prime}\). Let \(Q\) denote the path in \(\mathcal{P}^{\prime}\) connecting \(s_{uv}\) and \(t_{uv}\). If none of the paths in \(\mathcal{P}^{\prime}\) uses the edge \(uv\), then the set \((\mathcal{P}^{\prime}\setminus\{Q\})\cup\{P_{uv}\}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) containing more paths from \(\mathcal{P}_{\mathcal{T}}\) than \(\mathcal{P}^{\prime}\), contradicting the choice of \(\mathcal{P}^{\prime}\). Hence, there must be a unique path \(P^{*}\in\mathcal{P}^{\prime}\) that uses the edge \(uv\). Let \(s^{*}\) and \(t^{*}\) be the two terminals that are connected by the path \(P^{*}\). Let \(W\) denote the walk between \(s^{*}\) and \(t^{*}\) obtained from \(P^{*}\) by replacing the edge \(uv\) with the path \(Q\) (there may be some vertices that are repeated in \(W\)). Since \(E(W)=(E(P^{*})\cup E(Q))\setminus\{uv\}\), \(W\) is edge-disjoint from every path in \(\mathcal{P}^{\prime}\setminus\{P^{*},Q\}\) (as \(\mathcal{P}^{\prime}\) is a set of edge-disjoint paths).
Let \(Q^{*}\) be a path between \(s^{*}\) and \(t^{*}\) that uses a subset of edges of \(W\), which again is edge-disjoint from every path in \(\mathcal{P}^{\prime}\setminus\{P^{*},Q\}\). Hence, \(\mathcal{P}^{\prime\prime}=(\mathcal{P}^{\prime}\setminus\{P^{*},Q\})\cup\{P_{uv},Q^{*}\}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) that contains one more path from \(\mathcal{P}_{\mathcal{T}}\) than \(\mathcal{P}^{\prime}\). This contradicts the choice of \(\mathcal{P}^{\prime}\), and thus implies that \(\mathcal{P}_{\mathcal{T}}\subseteq\mathcal{P}^{\prime}\). Since \(\mathcal{X}^{\prime}=\mathcal{X}\cup\mathcal{T}\) and \(\mathcal{P}_{\mathcal{T}}\) contains the edge-disjoint paths between the terminal pairs present in \(\mathcal{T}\), \(\mathcal{P}^{\prime}\setminus\mathcal{P}_{\mathcal{T}}\) must contain the edge-disjoint paths between the terminal pairs present in \(\mathcal{X}\). Thus, \(\mathcal{P}^{\prime}\setminus\mathcal{P}_{\mathcal{T}}\) is a solution of \((G,\mathcal{X},k)\).

## 5 Kernelization Results on VDP

### A Linear Vertex Kernel for Split Graphs

Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a split graph. Note that given a split graph \(G\), we can compute (in linear time) a partition \((C,I)\) of \(V(G)\) such that \(C\) is a clique and \(I\) is an independent set [25]. We partition the set \(I\) into two sets, say, \(I_{T}\) and \(I_{N}\), where \(I_{T}\) denotes the set of terminal vertices in \(I\) and \(I_{N}\) denotes the set of non-terminal vertices in \(I\). Observe that a terminal pair in a split graph can only be of one of the following types:

* Type-I: One of the terminal vertices belongs to \(C\), and the other belongs to \(I\).
* Type-II: Both terminal vertices belong to \(C\).
* Type-III: Both terminal vertices belong to \(I\).

Our kernelization algorithm is based on a preprocessing step and the application of three reduction rules (RR9-RR11). Before proceeding further, let us first discuss the idea behind our kernelization result in the following overview.

**Overview.** Given an instance \((G,\mathcal{X},k)\) of VDP where \(G\) is a split graph, our main goal (in this section) is to convert the given instance of VDP to an instance of VDP-Unique (see Section 1.1) on split graphs in polynomial time. For this purpose, we focus on the vertices that belong to more than one terminal pair (including different occurrences of the same pair). Intuition suggests that one should make copies of a vertex that belongs to multiple terminal pairs. However, in certain cases (in particular, for heavy terminal pairs), this approach fails. In simple words, if \((s,t)\) is a heavy terminal pair (note that \(st\in E(G)\) in this case), then creating a copy of either \(s\) or \(t\) creates an illusion that many length-1 paths exist between \(s\) and \(t\), which is not true (see Figure 11). For Type-I heavy terminal pairs, deleting one copy of \((s,t)\) from \(\mathcal{X}\) along with the deletion of the edge \(st\) from \(G\) works. However, for Type-II heavy terminal pairs, the situation is more subtle than it appears, as a similar operation cannot be applied here (as the graph no longer remains split after deleting the edge \(st\) in this case). Therefore, the idea is to tackle these "problematic" heavy terminal pairs with the help of some reduction rules (RR9 and RR10 in our case). Another interesting thing to note here is that since we are dealing with a restricted graph class (i.e., split graphs), some vertices are more crucial than others.
To turn this property in our favor, we carry out a preprocessing step, which we call Clean-Up. The intuition behind applying the Clean-Up operation is the following. Note that the vertices in the set \(I_{N}\) can serve only one purpose, and that is to connect any two vertices from the clique. However, informally speaking, as the non-terminal vertices present in \(C\) serve more purposes than the vertices present in \(I_{N}\), it is important to save them until their use becomes necessary. So, to tackle Type-II heavy terminal pairs (one of the "problematic" terminal pairs, as discussed in the previous paragraph), it is desirable for us to use as many vertices from the set \(I_{N}\) as possible because those terminal pairs that do not have a path of length 2 via a vertex from \(I_{N}\) have to necessarily use a vertex from the clique as an internal vertex.

Figure 11: Let \((G,\mathcal{X},2)\) be an instance of VDP where \(G\) is a split graph with partition \((C,I)\) such that \(C=\{a,b\}\) and \(I=\{c,d\}\). Let \(\mathcal{X}=\{(a,c),(a,c)\}\). Furthermore, let \(H\) be the graph obtained from \(G\) by making a copy \(c^{\prime}\) of \(c\), and let \(\mathcal{X}^{\prime}=\{(a,c),(a,c^{\prime})\}\). Note that \((H,\mathcal{X}^{\prime},2)\) is a Yes-instance (of VDP) while \((G,\mathcal{X},2)\) is a No-instance (of VDP).

Now, let us define everything discussed above in a formal manner. For this purpose, we begin by defining the Clean-Up operation. Note that the Clean-Up operation is a preprocessing step of our kernelization algorithm. Therefore, given an instance \((G,\mathcal{X},k)\) of VDP where \(G\) is a split graph, we apply this operation before applying any of the reduction rules RR9-RR11 (discussed later in this section).

**Clean-Up.** First, consider the following construction (Construction \(\mathcal{A}\)), which is crucial to define the Clean-Up operation. This construction was also used by Heggernes et al. [26], who used it to remove a subset of \(I_{N}\) (safely). However, in the Clean-Up operation, we will remove the entire set \(I_{N}\) (safely). In order to do so, we need a few more technical arguments than the ones present in [26].

**Construction \(\mathcal{A}\):** Given an instance \((G,\mathcal{X},k)\) of VDP where \(G\) is a split graph, we construct a bipartite graph, say, \(H\), with bipartition \((A,B)\) as follows: For every terminal pair \((s,t)\) of Type-II such that \(st\) is a heavy edge with weight \(w\geq 2\), we introduce \(w-1\) vertices, say, \(v^{1}_{st},\dots,v^{w-1}_{st}\), to \(A\). The set \(B\) consists of all the vertices from the set \(I_{N}\). For each \(v\in I_{N}\), introduce an edge from \(v\) to the vertices \(v^{1}_{st},\dots,v^{w-1}_{st}\) if and only if \(v\) is adjacent to both \(s\) and \(t\) in \(G\). See Figure 12 for an illustration of the construction of \(H\) from \((G,\mathcal{X},k)\). Due to Proposition 10, we can compute a maximum matching, say, \(M\), in \(H\) in polynomial time. Let \(\widehat{\mathcal{X}}\subseteq\mathcal{X}\) be the multiset of all terminal pairs whose corresponding vertices in \(H\) are saturated by \(M\). For example, in Figure 12, if \(M=\{v^{1}_{ab}f,v^{2}_{ab}g,v^{1}_{de}h\}\) is a maximum matching in \(H\), then \(\widehat{\mathcal{X}}=\{(a,b),(a,b),(d,e)\}\). This ends Construction \(\mathcal{A}\).
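To make Construction \(\mathcal{A}\) concrete, here is a small Python sketch using networkx, with Hopcroft-Karp playing the role of Proposition 10. The split partition \((C,I)\), the set \(I_{N}\), and the multiset \(\mathcal{X}\) are assumed to be given, and all function and variable names below are our own illustration, not taken from the paper.

```python
# A sketch of Construction A, assuming G is a networkx graph, C is the clique
# side of the split partition, I_N is the set of non-terminal vertices of I,
# and terminal_pairs is the multiset X given as a list of 2-tuples (we also
# assume vertex labels are sortable, so pairs can be normalised).
import networkx as nx
from collections import Counter

def construction_A(G, C, I_N, terminal_pairs):
    """Build the bipartite graph H of Construction A; return H, a maximum
    matching M, and the multiset X_hat of pairs whose copies M saturates."""
    H = nx.Graph()
    H.add_nodes_from(I_N, bipartite=1)      # side B: non-terminal I-vertices
    A = []                                  # side A: copy-vertices
    # Multiplicity (weight) of each Type-II pair joined by an edge.
    weights = Counter(tuple(sorted(p)) for p in terminal_pairs
                      if p[0] in C and p[1] in C and G.has_edge(*p))
    for (s, t), w in weights.items():
        for j in range(1, w):               # only w - 1 copies (see Remark 65)
            a = ("copy", s, t, j)
            A.append(a)
            H.add_node(a, bipartite=0)
            for v in I_N:                   # edge iff v is adjacent to s and t
                if G.has_edge(v, s) and G.has_edge(v, t):
                    H.add_edge(a, v)
    M = nx.bipartite.hopcroft_karp_matching(H, top_nodes=A) if A else {}
    X_hat = [(a[1], a[2]) for a in A if a in M]  # pairs saturated by M
    return H, M, X_hat
```

Clean-Up (Definition 66 below) then deletes \(I_{N}\) from \(G\), removes one occurrence of each pair in `X_hat` from \(\mathcal{X}\), and decreases \(k\) by \(|\widehat{\mathcal{X}}|\). Note that pairs of weight 1 produce no copy-vertices in the loop, matching the requirement that \(st\) be a heavy edge with \(w\geq 2\).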
**Remark 65**.: For intuition, we remark that in Construction \(\mathcal{A}\), we introduce only \(w-1\) copies of a heavy edge of weight \(w\) because one copy will be "taken care of" by the edge between them (see Observation 17).

Next, consider the following definition.

**Definition 66** (Clean-Up).: _Given an instance \((G,\mathcal{X},k)\) of VDP where \(G\) is a split graph, construct the bipartite graph \(H\) and find a maximum matching \(M\) in \(H\), as described in Construction \(\mathcal{A}\). If \(I_{N}\neq\emptyset\) or \(\widehat{\mathcal{X}}\neq\emptyset\), then \(G^{\prime}\Leftarrow G-I_{N}\), \(\mathcal{X}^{\prime}\Leftarrow\mathcal{X}\setminus\widehat{\mathcal{X}}\), \(k^{\prime}\Leftarrow k-|\widehat{\mathcal{X}}|\)._

**Observation 67**.: _Clean-Up can be done in polynomial time._

Note that due to Remark 23, \(G^{\prime}\) is a split graph. Now, with the help of Definition 69, Observation 70, Proposition 71, and Lemma 72, we will establish that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\), as described in Definition 66, are equivalent (see Lemmas 73-74). Before that, consider the next observation, which follows trivially from Proposition 16.

**Observation 68**.: _Let \((G,\mathcal{X},k)\) be a Yes-instance of VDP where \(G\) is a split graph. Moreover, let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). If there exists a path \(P\in\mathcal{P}\) that visits a vertex \(x\in I_{N}\), then \(P\) must be of the form \((u^{\prime},x,v^{\prime})\), where \(u^{\prime},v^{\prime}\in C\) are terminal vertices such that \(P_{u^{\prime}v^{\prime}}\in\mathcal{P}\)._

Figure 12: Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a split graph with a partition \((C,I)\) and \(\mathcal{X}=\{(a,b),(a,b),(a,b),(d,e),(d,e),(c,d),(b,d)\}\). The vertices in \(C\) form a clique; to keep the picture clean, we do not show the edges in \(C\). Observe that the terminal pairs \((a,b)\) and \((d,e)\) are heavy.

**Definition 69**.: Let \((G,\mathcal{X},k)\) be a Yes-instance of VDP where \(G\) is a split graph. Moreover, let \(M\) and \(H\) be as described in Construction \(\mathcal{A}\). Let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). Then, \(M^{\prime}\subseteq E(H)\) is said to be _induced_ by \(\mathcal{P}\) in \(H\) if it is constructed as follows:

1. Initialize \(M^{\prime}\Leftarrow\emptyset\).
2. For every path \(P\in\mathcal{P}\) that visits a vertex \(x\in I_{N}\): By Observation 68, \(P\) must be of the form \((u^{\prime},x,v^{\prime})\), where \(u^{\prime},v^{\prime}\in C\) are terminal vertices such that \(P_{u^{\prime}v^{\prime}}\in\mathcal{P}\). This further implies that \(u^{\prime}v^{\prime}\) is a heavy edge. Let \(w\geq 2\) be the weight of \(u^{\prime}v^{\prime}\), and consider the vertices \(v^{1}_{u^{\prime}v^{\prime}},\ldots,v^{w-1}_{u^{\prime}v^{\prime}}\) in the graph \(H\). By Construction \(\mathcal{A}\), \(x\) is adjacent to every vertex in the set \(\{v^{1}_{u^{\prime}v^{\prime}},\ldots,v^{w-1}_{u^{\prime}v^{\prime}}\}\). So, we can arbitrarily choose an edge \(xv^{j}_{u^{\prime}v^{\prime}}\) of \(H\), for some \(j\in[w-1]\), such that \(xv^{j}_{u^{\prime}v^{\prime}}\) does not appear in \(M^{\prime}\) (the existence of such a \(j\) follows from the choice of \(w\), and since \(P_{u^{\prime}v^{\prime}}\in\mathcal{P}\)), and add it to \(M^{\prime}\).

**Observation 70**.: _Let \((G,\mathcal{X},k)\) be a Yes-instance of VDP where \(G\) is a split graph._
_Moreover, let \(M\), \(A\), \(B\), and \(H\) be as described in Construction \(\mathcal{A}\). Let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). Let \(M^{\prime}\) be induced by \(\mathcal{P}\) in \(H\) as described in Definition 69. Then, \(M^{\prime}\) is a matching._

Proof.: Note that for every path in \(\mathcal{P}\) that has an internal vertex that belongs to \(I_{N}\), we choose a distinct vertex in the set \(A\) to be an endpoint of an edge in \(M^{\prime}\) (due to Definition 69). Furthermore, since the paths in \(\mathcal{P}\) are internally vertex-disjoint, for every edge in \(M^{\prime}\), its endpoint in the set \(B\) is also unique. Thus, \(M^{\prime}\) is a matching in \(H\).

For an illustrative example, consider \(G\), \(H\), and \(\mathcal{X}\) given in Figure 12. Let \(\mathcal{P}=\{(a,f,b),(a,\ell,b),P_{ab},(d,g,e),P_{de},P_{cd},P_{bd}\}\) be a solution of \((G,\mathcal{X},7)\). Then, \(\{v^{1}_{ab}f,v^{1}_{de}g\}\) and \(\{v^{2}_{ab}f,v^{1}_{de}g\}\) are two possible choices for \(M^{\prime}\).

**Proposition 71** ([26]).: _Let \((G,\mathcal{X},k)\) be a Yes-instance of VDP where \(G\) is a split graph. Moreover, let \(M\), \(B\), and \(H\) be as described in Construction \(\mathcal{A}\). Let \(R\) \((\subseteq B)\) be the set of vertices in \(I_{N}\) that are not saturated by \(M\) in the graph \(H\). Let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\) for which the number of vertices in \(R\) visited by the paths in \(\mathcal{P}\) is minimum. Then, none of the paths in \(\mathcal{P}\) visits a vertex in \(R\)._

**Lemma 72**.: _Let \((G,\mathcal{X},k)\) be a Yes-instance of VDP where \(G\) is a split graph. Moreover, let \(M\), \(B\), and \(H\) be as described in Construction \(\mathcal{A}\). Let \(R\) \((\subseteq B)\) be the set of vertices in \(I_{N}\) that are not saturated by \(M\) in the graph \(H\), and let \(F=I_{N}\setminus R\). Let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\) for which the number of vertices in \(R\) and \(F\) visited by the paths in \(\mathcal{P}\) is minimum and maximum, respectively (with priority to the minimization). Then, every vertex in \(F\) is visited by some path in \(\mathcal{P}\)._

Proof.: Suppose, for contradiction, that there is a vertex \(v\in F\) such that no path in \(\mathcal{P}\) uses \(v\) as an internal vertex. First, by Observation 70, any set induced by \(\mathcal{P}\) in \(H\) is a matching in \(H\). Let \(M^{\prime}\) be one such matching. Note that \(M^{\prime}\) does not saturate any vertex in \(R\) (due to Proposition 71), and furthermore, \(M^{\prime}\) does not saturate \(v\) in \(H\) (by the definition of \(M^{\prime}\)). This implies that \(|M^{\prime}|<|F|\). Since \(|M|=|F|\), note that \(M^{\prime}\) is not a maximum matching in \(H\). Thus, since a matching is maximum if and only if there is no augmenting path with respect to it, and since none of \(M\) or \(M^{\prime}\) saturates a vertex from \(R\), there exists an \(M^{\prime}\)-augmenting path, say, \(Q\), in \(H-R\). For an example, consider \(G\), \(H\), and \(\mathcal{X}\) given in Figure 12. If \(M=\{v^{1}_{ab}f,v^{2}_{ab}g,v^{1}_{de}h\}\) and \(M^{\prime}=\{v^{1}_{ab}f,v^{1}_{de}g\}\), then \((v^{2}_{ab},f,v^{1}_{ab},g,v^{1}_{de},h)\) is an \(M^{\prime}\)-augmenting path in \(H-R\). Next, we discuss how to obtain a solution \(\widehat{\mathcal{P}}\) of \((G,\mathcal{X},k)\) from \(\mathcal{P}\) by using \(Q\), such that \(\widehat{\mathcal{P}}\) visits one more vertex from \(F\) than \(\mathcal{P}\) and no vertex from \(R\) (contradicting the choice of \(\mathcal{P}\)).
Let \(M^{\prime\prime}\) be the matching obtained from \(M^{\prime}\) by replacing the saturated edges with unsaturated edges in \(Q\) and vice versa (i.e., by augmenting \(M^{\prime}\) with respect to \(Q\)). For example, consider \(G\), \(H\), and \(\mathcal{X}\) given in Figure 12. If \(M=\{v^{1}_{ab}f,v^{2}_{ab}g,v^{1}_{de}h\}\) and \(M^{\prime}=\{v^{1}_{ab}f,v^{1}_{de}g\}\), then \(M^{\prime\prime}=\{v^{1}_{de}h,v^{1}_{ab}g,v^{2}_{ab}f\}\). Since the length of an augmenting path is always odd, one of the endpoints of \(Q\) must belong to \(B\). This implies that \(M^{\prime\prime}\) saturates one more vertex in the set \(F\) (as \(Q\) is an augmenting path in \(H-R\)) than \(M^{\prime}\).

Initialize \(\widehat{\mathcal{P}}\) as \(\mathcal{P}\). Remove all those paths from \(\widehat{\mathcal{P}}\) that contain vertices from the set \(V(Q)\cap F\) as internal vertices. Furthermore, if \(v^{j}_{xy}\) for some \(j\in[w-1]\) is the endpoint of \(Q\) in \(A\), then also remove the path in \(\mathcal{P}\) between the \(j^{th}\) occurrence of the terminal pair \((x,y)\in\mathcal{X}\). Next, for every edge \(v^{j}_{st}b\in M^{\prime\prime}\cap Q\), where \(v^{j}_{st}\in A\) for some \(j\in[w-1]\) and \(b\in F\), introduce the path \((s,b,t)\) in \(\widehat{\mathcal{P}}\). Next, note that \(\widehat{\mathcal{P}}\) is a minimum solution of \((G,\mathcal{X},k)\). Furthermore, the paths in \(\widehat{\mathcal{P}}\) visit one more vertex of \(F\) (and no vertex from \(R\)) than the paths in \(\mathcal{P}\), which contradicts the choice of \(\mathcal{P}\). Thus, every vertex in \(F\) is visited by some path in \(\mathcal{P}\).

We are now ready to establish the promised equivalence.

**Lemma 73**.: _Let \((G,\mathcal{X},k)\) be a Yes-instance of VDP where \(G\) is a split graph. Moreover, let \(M\), \(B\), \(\widehat{\mathcal{X}}\), and \(H\) be as described in Construction \(\mathcal{A}\). Then, the output \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) of Clean-Up on \((G,\mathcal{X},k)\) is also a Yes-instance of VDP._

Proof.: Let \((G,\mathcal{X},k)\) be a Yes-instance, let \(R\) \((\subseteq B)\) be the set of vertices in \(I_{N}\) that are not saturated by \(M\) in the graph \(H\), and let \(F=I_{N}\setminus R\). Let \(\mathcal{P}\) be a solution of \((G,\mathcal{X},k)\) for which the number of vertices in \(R\) visited by the paths in \(\mathcal{P}\) is minimum, and the number of vertices in \(F\) visited by the paths in \(\mathcal{P}\) is maximum. Now, by Proposition 71, none of the paths in \(\mathcal{P}\) visits a vertex in \(R\). Furthermore, by Lemma 72, for every \(v\in F\), there exists a path, say, \(P_{v}\), in \(\mathcal{P}\) that visits \(v\). Therefore, if \(\widehat{\mathcal{P}}\) denotes the set of internally vertex-disjoint paths between the terminals in \(\widehat{\mathcal{X}}\), then note that none of the paths in \(\mathcal{P}\setminus\widehat{\mathcal{P}}\) visits a vertex in the set \(I_{N}\). Thus, \(\mathcal{P}\setminus\widehat{\mathcal{P}}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\).

**Lemma 74**.: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a split graph. Moreover, let \(\widehat{\mathcal{X}}\) be as described in Construction \(\mathcal{A}\). Furthermore, let \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) be the output of Clean-Up on \((G,\mathcal{X},k)\). If \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) is a Yes-instance of VDP, then \((G,\mathcal{X},k)\) is also a Yes-instance of VDP._
Proof.: Let \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) be a Yes-instance, and let \(\mathcal{P}^{\prime}\) be one of its solutions. Note that, in order to prove that \((G,\mathcal{X},k)\) is a Yes-instance, we only need to find additional internally vertex-disjoint paths between the terminal pairs in \(\widehat{\mathcal{X}}\). By the definition of \(\widehat{\mathcal{X}}\), we know that for every occurrence of a terminal pair, say, \((s_{i},t_{i})\), in \(\widehat{\mathcal{X}}\), there exists a vertex, say, \(u_{i}\), in \(I_{N}\) such that \(u_{i}\) is adjacent to both \(s_{i}\) and \(t_{i}\) in \(G\). Also, for distinct \(i\in[k]\) such that \((s_{i},t_{i})\in\widehat{\mathcal{X}}\), we have distinct \(u_{i}\) (i.e., \(u_{i}\neq u_{j}\) for \(i\neq j\in[k]\)). Furthermore, since \(G^{\prime}=G-I_{N}\), it is clear that \(u_{i}\) does not belong to any path in \(\mathcal{P}^{\prime}\). Therefore, for every \((s_{i},t_{i})\in\widehat{\mathcal{X}}\), define \(P_{i}=(s_{i},u_{i},t_{i})\). Note that \(\{P_{i}:(s_{i},t_{i})\in\widehat{\mathcal{X}}\}\) is a set of internally vertex-disjoint paths in \(G\). Thus, \(\mathcal{P}=\mathcal{P}^{\prime}\cup\{P_{i}:(s_{i},t_{i})\in\widehat{\mathcal{X}}\}\) is a solution of \((G,\mathcal{X},k)\).

Now, consider the following lemma.

**Lemma 75**.: _Let \((G,\mathcal{X},k)\) be an instance of VDP obtained by applying Clean-Up. Let \((s,t)\in\mathcal{X}\) be a heavy terminal pair of Type-II in \(G\) of weight \(w\geq 2\). Then, any minimum solution of \((G,\mathcal{X},k)\) must contain the following internally vertex-disjoint paths: \(\{P_{st}\}\cup\{(s,u^{1},t),\ldots,(s,u^{w-1},t)\}\), where \(\{u^{1},\ldots,u^{w-1}\}\) is a set of non-terminal vertices in \(C\)._

Proof.: First, note that \(P_{st}\) must belong to every minimum solution of \((G,\mathcal{X},k)\) (due to Observation 17). Moreover, by Proposition 16, note that the internally vertex-disjoint paths between the remaining \(w-1\) occurrences of the terminal pair \((s,t)\) must have length exactly \(2\). Next, observe that after applying the Clean-Up operation, \(I_{N}=\emptyset\). By the definition of internally vertex-disjoint paths, vertices in \(I_{T}\) cannot be used as internal vertices in any path of the solution; so, the only choice is that the internal vertices are from \(C\).

Now, we are ready to define the reduction rules.

**Reduction Rules.** Let us start by defining our first reduction rule (RR9).

**Reduction Rule 9** (RR9).: _If there is a terminal pair \((s,t)\in\mathcal{X}\) of Type-I such that \(st\in E(G)\), then \(V(G^{\prime})\Leftarrow V(G)\), \(E(G^{\prime})\Leftarrow E(G)\setminus\{st\}\), \(\mathcal{X}^{\prime}\Leftarrow\mathcal{X}\setminus\{(s,t)\}\), \(k^{\prime}\Leftarrow k-1\). Furthermore, for every \(x\in\{s,t\}\) that does not appear as a terminal in any terminal pair in \(\mathcal{X}^{\prime}\), update \(V(G^{\prime})\Leftarrow V(G^{\prime})\setminus\{x\}\)._

We have the following lemma to establish that RR9 is safe.

**Lemma 76**.: _RR9 is safe._

Proof.: First, note that since we are only removing an edge between a vertex of \(C\) and a vertex of \(I\), \(G^{\prime}\) is a split graph. Next, we claim that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) are equivalent instances of VDP on split graphs. In one direction, let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). By Observation 17, \(P_{st}\in\mathcal{P}\).
Since terminal vertices cannot be used as internal vertices in any path of the solution, note that \(x\in\{s,t\}\) can only appear as an endpoint of the paths in \(\mathcal{P}\setminus\{P_{st}\}\) (if \(x\) is a terminal in some terminal pair in \(\mathcal{X}^{\prime}\)). Thus, \(\mathcal{P}\setminus\{P_{st}\}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\).

In the other direction, let \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) be a Yes-instance of VDP. This implies that there exists a set \(\mathcal{P}^{\prime}\) of \(k-1\) internally vertex-disjoint paths in \(G^{\prime}\) joining the terminal pairs in \(\mathcal{X}^{\prime}\). Since \(st\notin E(G^{\prime})\), none of the paths in \(\mathcal{P}^{\prime}\) uses the edge \(st\). Furthermore, note that if \(x\in\{s,t\}\) does not appear as a terminal in any terminal pair in \(\mathcal{X}^{\prime}\), then it is necessary to remove \(x\) from \(G^{\prime}\), as otherwise there may exist a path in \(\mathcal{P}^{\prime}\) that uses \(x\) as an internal vertex, which would yield an invalid solution. Thus, after taking care of \(x\in\{s,t\}\) accordingly, note that \(\mathcal{P}^{\prime}\cup\{P_{st}\}\) is a set of \(k\) internally vertex-disjoint paths in \(G\) joining the terminal pairs in \(\mathcal{X}\).

**Observation 77**.: _After applying RR9 exhaustively on \(G\), no Type-I terminal pair in \(G\) has an edge between its terminals. Moreover, RR9 can be applied in polynomial time._

To define the next reduction rule (RR10), we use the following notation: Let \(\{(s,t)\times(w)\}\) denote \(w\) copies of \((s,t)\).

**Reduction Rule 10** (RR10).: _If there is a heavy terminal pair \((s,t)\in\mathcal{X}\) of Type-II in \(G\) of weight \(w\geq 2\), then \(V(G^{\prime})\Leftarrow V(G)\cup\{s^{1},\ldots,s^{w-1},t^{1},\ldots,t^{w-1}\}\), \(E(G^{\prime})\Leftarrow E(G)\cup\{s^{i}v,t^{i}v:v\in C,i\in[w-1]\}\), \(\mathcal{X}^{\prime}\Leftarrow(\mathcal{X}\setminus\overline{\mathcal{X}})\cup\{(s^{1},t^{1}),\ldots,(s^{w-1},t^{w-1})\}\), where \(\overline{\mathcal{X}}=\{(s,t)\times(w-1)\}\subseteq\mathcal{X}\)._

We have the following lemma to establish that RR10 is safe.

**Lemma 78**.: _After Clean-Up and the exhaustive application of RR9, RR10 is safe._

Proof.: First, note that since we are adding an independent set of vertices to \(I\), \(G^{\prime}\) is a split graph. Next, we claim that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k)\) are equivalent instances of VDP on split graphs. In one direction, let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). By Observation 17, we note that \(P_{st}\in\mathcal{P}\). Let \(\widehat{\mathcal{P}}\subseteq\mathcal{P}\) such that \(\widehat{\mathcal{P}}\) contains the internally vertex-disjoint paths between the terminal pairs in \(\overline{\mathcal{X}}\). Observe that the paths in \(\widehat{\mathcal{P}}\) must be of the form \((s,u,t)\), where \(u\) is some non-terminal vertex in \(C\) (due to Lemma 75). In other words, exactly \(w-1\) non-terminal vertices in \(C\) are reserved to be used by the paths in \(\widehat{\mathcal{P}}\). Let \(\{u^{1},\ldots,u^{w-1}\}\) denote such a set. Next, we define a solution \(\mathcal{P}^{\prime}\) of \((G^{\prime},\mathcal{X}^{\prime},k)\) as follows: Let \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\widehat{\mathcal{P}})\cup\{(s^{1},u^{1},t^{1}),\ldots,(s^{w-1},u^{w-1},t^{w-1})\}\).
Since none of \(\{u^{1},\ldots,u^{w-1}\}\) belongs to any path in \(\mathcal{P}\setminus\widehat{\mathcal{P}}\), \(\mathcal{P}^{\prime}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k)\).

In the other direction, let \(\mathcal{P}^{\prime}\) be a minimum solution of \((G^{\prime},\mathcal{X}^{\prime},k)\). Let \(\widehat{\mathcal{P}}\subseteq\mathcal{P}^{\prime}\) such that \(\widehat{\mathcal{P}}\) contains the internally vertex-disjoint paths between the terminal pairs \(\{(s^{1},t^{1}),\ldots,(s^{w-1},t^{w-1})\}\). Observe that the paths in \(\widehat{\mathcal{P}}\) must be of the form \((s^{i},u,t^{i})\), where \(u\) is some non-terminal vertex in \(C\) (due to Proposition 16). Next, we define a solution \(\mathcal{P}\) of \((G,\mathcal{X},k)\) as follows: Let \(\mathcal{P}=(\mathcal{P}^{\prime}\setminus\widehat{\mathcal{P}})\cup\{(s,u^{1},t),\ldots,(s,u^{w-1},t)\}\), where \(\{u^{1},\ldots,u^{w-1}\}\) denotes the set of non-terminal vertices in \(C\) that belong to the paths in \(\widehat{\mathcal{P}}\). Since none of \(\{u^{1},\ldots,u^{w-1}\}\) belongs to any path in \(\mathcal{P}\setminus\widehat{\mathcal{P}}\), \(\mathcal{P}\) is a solution of \((G,\mathcal{X},k)\).

**Observation 79**.: _After applying RR10 exhaustively, there do not exist any heavy Type-II terminal pairs. Moreover, RR10 can be applied in polynomial time._

We apply the next reduction rule (RR11) for every terminal participating in more than one terminal pair.

**Reduction Rule 11** (RR11).: _If \(v\in V(G)\) belongs to \(x\geq 2\) terminal pairs \((v,a_{1}),\ldots,(v,a_{x})\), then \(V(G^{\prime})\Leftarrow(V(G)\setminus\{v\})\cup\{v_{1},\ldots,v_{x}\}\), \(E(G^{\prime})\Leftarrow E(G)\cup\{v_{i}u:u\in N(v),i\in[x]\}\), \(\mathcal{X}^{\prime}\Leftarrow(\mathcal{X}\setminus\{(v,a_{1}),\ldots,(v,a_{x})\})\cup\{(v_{1},a_{1}),\ldots,(v_{x},a_{x})\}\). Moreover, if \(v\in C\), then \(E(G^{\prime})\Leftarrow E(G^{\prime})\cup\{v_{i}v_{j}:i\neq j\in[x]\}\)._

We have the following lemma to establish that RR11 is safe.

**Lemma 80**.: _After Clean-Up and the exhaustive application of RR9 and RR10, RR11 is safe._

Proof.: First, note that \(G^{\prime}\) is a split graph because if \(v\in I\), then we are adding an independent set of vertices to \(I\), and if \(v\in C\), then we are adding a clique to \(C\). Next, we claim that \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k)\) are equivalent instances of VDP on split graphs. In one direction, let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). Let \(\widehat{\mathcal{P}}=\{P_{1},\ldots,P_{x}\}\subseteq\mathcal{P}\) be the subset of internally vertex-disjoint paths of \(\mathcal{P}\) such that \(P_{i}\) is a path between \(v\) and \(a_{i}\) for each \(i\in[x]\). Note that there are two possibilities: one where \(v\in I_{T}\), and the other where \(v\in C\). First, assume that \(v\in I_{T}\). On the one hand, if \((v,a_{i})\) is of Type-I (i.e., \(a_{i}\in C\)) for some \(i\in[x]\), then \(va_{i}\notin E(G)\) (by Observation 77). In this case, due to Proposition 16, \(P_{i}\) must be of the form \((v,u_{i},a_{i})\), where \(u_{i}\) is some non-terminal vertex in \(C\). On the other hand, if \((v,a_{i})\) is of Type-III (i.e., \(a_{i}\in I_{T}\)), then due to Proposition 16(i), \(P_{i}\) must be either of the form \((v,u_{i},a_{i})\) or \((v,u_{i},u_{j},a_{i})\), where \(u_{i},u_{j}\) are some non-terminal vertices in \(C\).
Let \(\mathcal{P}^{*}\) be the set of internally vertex-disjoint paths obtained from \(\widehat{\mathcal{P}}\) by replacing \(v\) with \(v_{i}\) in each \(P_{i}\), \(i\in[x]\). We claim that \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\widehat{\mathcal{P}})\cup\mathcal{P}^{*}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k)\). Since the paths in \(\widehat{\mathcal{P}}\) are internally vertex-disjoint, the internal vertices of the paths in \(\widehat{\mathcal{P}}\) are all distinct. This implies that all the internal vertices of the paths in \(\mathcal{P}^{*}\) are distinct, and thus the paths in \(\mathcal{P}^{\prime}\) are internally vertex-disjoint.

Now, assume that \(v\in C\). On the one hand, if \((v,a_{i})\) is of Type-I for some \(i\in[x]\), then \(P_{i}\) must be of the form \((v,u_{i},a_{i})\), where \(u_{i}\) is some non-terminal vertex in \(C\) (due to Observation 77 and Proposition 16). On the other hand, if \((v,a_{i})\) is of Type-II, then \(P_{i}\) must be of the form \(P_{va_{i}}\) (due to Observation 79 and Proposition 16). Let \(\mathcal{P}^{*}\) be the set of internally vertex-disjoint paths obtained from \(\widehat{\mathcal{P}}\) by replacing \(v\) with \(v_{i}\) in each \(P_{i}\), \(i\in[x]\). We claim that \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\widehat{\mathcal{P}})\cup\mathcal{P}^{*}\) is a solution of \((G^{\prime},\mathcal{X}^{\prime},k)\). Since the paths in \(\widehat{\mathcal{P}}\) are internally vertex-disjoint, the internal vertices of the paths in \(\widehat{\mathcal{P}}\) are all distinct. This implies that all the internal vertices of the paths in \(\mathcal{P}^{*}\) are distinct, and thus the paths in \(\mathcal{P}^{\prime}\) are internally vertex-disjoint.

In the other direction, let \(\mathcal{P}^{\prime}\) be a solution of \((G^{\prime},\mathcal{X}^{\prime},k)\). Let \(\widehat{\mathcal{P}}=\{P_{1},\ldots,P_{x}\}\subseteq\mathcal{P}^{\prime}\) be the subset of internally vertex-disjoint paths of \(\mathcal{P}^{\prime}\) such that \(P_{i}\) is a path between \(v_{i}\) and \(a_{i}\) for each \(i\in[x]\). Due to Observations 77 and 79, note that either \(a_{i}\neq a_{j}\) for every distinct \(i,j\in[x]\), or if \(a_{i}=a_{j}\) for distinct \(i,j\in[x]\), then \(va_{i}\notin E(G)\). Let \(\mathcal{P}^{*}\) be the set of internally vertex-disjoint paths obtained from \(\widehat{\mathcal{P}}\) by replacing \(v_{i}\) with \(v\) in each \(P_{i}\), \(i\in[x]\). We claim that \(\mathcal{P}=(\mathcal{P}^{\prime}\setminus\widehat{\mathcal{P}})\cup\mathcal{P}^{*}\) is a solution of \((G,\mathcal{X},k)\). Since the paths in \(\widehat{\mathcal{P}}\) are internally vertex-disjoint, the internal vertices of the paths in \(\widehat{\mathcal{P}}\) are all distinct. This implies that all the internal vertices in the paths in \(\mathcal{P}^{*}\) are distinct, and thus the paths in \(\mathcal{P}\) are internally vertex-disjoint.

By Lemma 75 and Observations 77 and 79, we have the following observation.

**Observation 81**.: _After applying reduction rules RR9-RR11 exhaustively, no terminal participates in more than one terminal pair. Moreover, the reduction rules RR9-RR11 can be applied in polynomial time._

Before concluding this section, we need the following result by Yang et al. [47].
**Proposition 82** ([47]).: _VDP-Unique on split graphs admits a kernel with at most \(4k\) vertices, where \(k\) is the number of occurrences of terminal pairs._

By Observations 67 and 81, we note that every instance \((G,\mathcal{X},k)\) of VDP where \(G\) is a split graph can be converted to an instance \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) of VDP-Unique in polynomial time. Due to Observations 77 and 79 and Lemmas 73, 74, 76, 78, and 80, \((G,\mathcal{X},k)\) and \((G^{\prime},\mathcal{X}^{\prime},k^{\prime})\) are equivalent. Furthermore, note that throughout this section, our initial parameter \(k\) does not increase during the application of the Clean-Up operation and rules RR9-RR11. Therefore, using Proposition 82, we have the following theorem.

**Theorem**.: _VDP on split graphs admits a kernel with at most \(4k\) vertices._

### A Quadratic Vertex Kernel for Well-partitioned Chordal Graphs

In this section, we show that VDP on well-partitioned chordal graphs admits a kernel with \(\mathcal{O}(k^{2})\) vertices. Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph. A brief overview of our kernelization algorithm is given below.

**Overview.** Ahn et al. [2] showed that VDP on well-partitioned chordal graphs admits a kernel with \(\mathcal{O}(k^{3})\) vertices. Based on their algorithm, we design our kernelization algorithm for VDP on well-partitioned chordal graphs. Since most of our rules (in this section) are borrowed from [2], our contribution can be viewed as an improved analysis of their algorithm. Note that given a well-partitioned chordal graph \(G\), by Proposition 21, we can compute (in polynomial time) a partition tree \(\mathcal{T}\) of \(G\). By Proposition 16 and utilizing the properties of the partition tree \(\mathcal{T}\), first, we define a marking procedure (which we call the Marking Procedure) that marks at most \(\mathcal{O}(k^{2})\) vertices in \(V(G)\). Our marking procedure uses the following classification of terminal pairs. For a terminal pair \((s,t)\in\mathcal{X}\), either \(st\in E(G)\) or \(st\notin E(G)\). Furthermore, if \(st\in E(G)\), then \((s,t)\) is either a heavy terminal pair or a light terminal pair (see Definition 14). Accordingly, we have two different sets of rules to mark the vertices in \(V(G)\): one for heavy terminal pairs and one for (every occurrence of) non-heavy terminal pairs (defined below). After marking the desired vertices, we show (in Lemmas 96 and 97) that if our input instance \((G,\mathcal{X},k)\) is a Yes-instance, then there exists a solution of \((G,\mathcal{X},k)\) that uses only the marked vertices (as internal vertices).

We start the technical part of this section with the following definitions.

**Definition 83** (Non-heavy Terminal Pairs).: _Let \((G,\mathcal{X},k)\) be an instance of \(\mathrm{VDP}\) where \(G\) is a well-partitioned chordal graph. A terminal pair \((s,t)\in\mathcal{X}\) is_ non-heavy _if either \(st\notin E(G)\) or \((s,t)\) is light._

**Definition 84** (Valid Path).: _Let \((G,\mathcal{X},k)\) be an instance of \(\mathrm{VDP}\) where \(G\) is a well-partitioned chordal graph with a partition tree \(\mathcal{T}\). Also, for a terminal pair \((s,t)\in\mathcal{X}\), let \(s\in V(B)\) and \(t\in V(B^{\prime})\), where \(B,B^{\prime}\in V(\mathcal{T})\). Then, the unique path \((B,\ldots,B^{\prime})\) in \(\mathcal{T}\) is the_ valid path _corresponding to \((s,t)\)._

In Definition 84, it is possible that \(B=B^{\prime}\). In this case, the valid path consists of the single bag \(B\).
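Since valid paths drive everything that follows, we note that they are easy to compute: in a tree, the path between two nodes is unique, so the valid path of \((s,t)\) is simply the tree path between the bags containing \(s\) and \(t\). The following is a minimal Python sketch of Definition 84, assuming the partition tree is given as a networkx tree over bag identifiers and `bag_of` maps each vertex of \(G\) to its bag; all names are our own illustration.

```python
# A minimal sketch of Definition 84, assuming T is a networkx tree whose
# nodes are bag identifiers and bag_of maps each vertex of G to its bag.
import networkx as nx

def valid_path(T, bag_of, s, t):
    """Return the valid path (as a list of bags) corresponding to (s, t).
    In a tree, the shortest path is the unique path, so this recovers it."""
    return nx.shortest_path(T, bag_of[s], bag_of[t])

# Example: a path-shaped partition tree B1 - B2 - B3.
T = nx.path_graph(["B1", "B2", "B3"])
bag_of = {"s": "B1", "t": "B3", "x": "B1"}
print(valid_path(T, bag_of, "s", "t"))  # ['B1', 'B2', 'B3']
print(valid_path(T, bag_of, "s", "x"))  # ['B1'] -- a single bag when B = B'
```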
**Definition 86** (Active Boundary).: _Let \((G,\mathcal{X},k)\) be an instance of \(\mathrm{VDP}\) where \(G\) is a well-partitioned chordal graph with a partition tree \(\mathcal{T}\). For a bag \(B\in V(\mathcal{T})\) and a bag \(B^{\prime}\in N_{\mathcal{T}}(B)\), the boundary \(\mathsf{bd}(B,B^{\prime})\) is_ active _if both \(B\) and \(B^{\prime}\) belong to the valid path corresponding to some non-heavy terminal pair \((s,t)\in\mathcal{X}\). Furthermore, we say that \((s,t)\) activates \(\mathsf{bd}(B,B^{\prime})\)._

In Definition 86, note that a terminal pair \((s,t)\) such that \(s,t\in V(B)\) for some \(B\in V(\mathcal{T})\) does not activate any boundary. Also, note that the notion of active boundary is not defined for any heavy terminal pair. For an illustration of valid paths and active boundaries, see Figure 13. Additionally, we have the following straightforward observation, which follows from Definitions 84 and 86. In particular, note that a single (non-heavy) terminal pair can activate at most two boundaries in every bag.

**Observation 87**.: _Let \((G,\mathcal{X},k)\) be an instance of \(\mathrm{VDP}\) where \(G\) is a well-partitioned chordal graph with a partition tree \(\mathcal{T}\). For a bag \(B\) in \(\mathcal{T}\), let \(\mathsf{bd}(B,B_{1}),\ldots,\mathsf{bd}(B,B_{q})\) denote the active boundaries in \(B\). Furthermore, for each \(i\in[q]\), let \(k_{B_{i}}\) denote the number of occurrences of terminal pairs that activate \(\mathsf{bd}(B,B_{i})\). Then, \(\sum_{i=1}^{q}k_{B_{i}}\leq 2k\). In other words, any bag can have at most \(2k\) active boundaries._

Consider the following reduction rule.

**Reduction Rule 12** (RR12).: _Let \((s,t)\in\mathcal{X}\) be a light terminal pair. Then, \(G^{\prime}\Leftarrow G\), \(\mathcal{X}^{\prime}\Leftarrow\mathcal{X}\setminus\{(s,t)\}\), \(k^{\prime}\Leftarrow k-1\). Furthermore, for every \(x\in\{s,t\}\) that does not appear as a terminal in any terminal pair in \(\mathcal{X}^{\prime}\), update \(V(G^{\prime})\Leftarrow V(G^{\prime})\setminus\{x\}\)._

**Lemma 89** ([2]).: _RR12 is safe._

**Observation 90**.: _RR12 can be applied in polynomial time._

Using the fact that every bag in \(\mathcal{T}\) is a clique, we have the following proposition.

**Proposition 91** ([2]).: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph with a partition tree \(\mathcal{T}\), obtained after the exhaustive application of RR12. Furthermore, let \(\widehat{\mathcal{X}}\subseteq\mathcal{X}\) denote the multiset of non-heavy terminal pairs. Let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\), and let \(\widehat{\mathcal{P}}\subseteq\mathcal{P}\) such that \(\widehat{\mathcal{P}}\) contains the internally vertex-disjoint paths between the terminal pairs in \(\widehat{\mathcal{X}}\). Then, for every \(P\in\widehat{\mathcal{P}}\) and every bag \(B\in V(\mathcal{T})\), \(|V(P)\cap V(B)|\leq 2\)._

Note that Proposition 91 is not true (in general) for heavy terminal pairs: in a minimum solution, the path corresponding to an occurrence of a heavy terminal pair can have three vertices in common with a bag (when the terminals themselves belong to that bag). The following proposition is used to characterize the structure of minimum solutions (for non-heavy terminal pairs) in terms of valid paths.

**Proposition 92** ([2]).: _Let \((G,\mathcal{X},k)\) be a Yes-instance of VDP where \(G\) is a well-partitioned chordal graph, obtained after the exhaustive application of RR12._
_Furthermore, let \(\mathcal{T}\) be a partition tree of \(G\), and let \(\widehat{\mathcal{X}}\subseteq\mathcal{X}\) denote the multiset of non-heavy terminal pairs. Let \(\mathcal{P}\) be a minimum solution of \((G,\widehat{\mathcal{X}},|\widehat{\mathcal{X}}|)\). Let \((B_{1},\ldots,B_{\ell})\) be the valid path corresponding to some \((s,t)\in\widehat{\mathcal{X}}\), and let \(P\in\mathcal{P}\) be the path between \(s\) and \(t\). Then, \(V(P)\cap V(B)\neq\emptyset\) if and only if \(B\in\{B_{1},\ldots,B_{\ell}\}\)._

**Remark 93**.: _Let \(V(\mathcal{X})=\{s,t:(s,t)\in\mathcal{X}\}\) throughout this section._

Next, we describe our marking procedure, which is crucial to proceed further. To this end, let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph with a partition tree \(\mathcal{T}\) of \(G\), obtained after the exhaustive application of RR12. Let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). Before formally defining the marking procedure, let us give some intuition as to how to choose which vertices to mark. Note that the light terminal pairs have already been eliminated by RR12. To deal with the remaining non-heavy terminal pairs, we make use of Propositions 91 and 92 to characterize the paths in a minimum solution between every occurrence of a non-heavy terminal pair, and, accordingly, choose which vertices to mark. Next, let \((s,t)\in\mathcal{X}\) be a heavy terminal pair. By Proposition 16 and Observation 17, for every occurrence of \((s,t)\), the \((s,t)\)-path in \(\mathcal{P}\) must be either \(P_{st}\) or of the form \((s,v,t)\), where \(v\in N(s)\cap N(t)\). Since \(st\in E(G)\), there are two possible cases (based on the positions of \(s\) and \(t\) in \(\mathcal{T}\)). First, let there exist bags \(B,B^{\prime}\in V(\mathcal{T})\) such that \(s\in\mathsf{bd}(B,B^{\prime})\) and \(t\in\mathsf{bd}(B^{\prime},B)\) (note that \(B\in N_{\mathcal{T}}(B^{\prime})\)). In this case, the only vertices that can be used as internal vertices to form length-2 \((s,t)\)-paths have to be contained in \(\mathsf{bd}(B,B^{\prime})\cup\mathsf{bd}(B^{\prime},B)\). Second, let there exist a bag \(B\in V(\mathcal{T})\) such that \(\{s,t\}\subseteq V(B)\). In this case, the only vertices that can be used as internal vertices to form length-2 \((s,t)\)-paths have to be contained in \(B\) or in a bag \(B^{\prime}\in N_{\mathcal{T}}(B)\) such that \(s,t\in\mathsf{bd}(B,B^{\prime})\).

Now, we are ready to formally define the Marking Procedure. Note that it is necessary to first perform the Marking Procedure for every occurrence of a non-heavy terminal pair and afterward for every heavy terminal pair.

**Marking Procedure:** Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph with a partition tree \(\mathcal{T}\), obtained after the exhaustive application of RR12. For two adjacent bags, say, \(B\) and \(B^{\prime}\), in \(\mathcal{T}\), let \(k_{BB^{\prime}}\) denote the number of occurrences of terminal pairs that activate \(\mathsf{bd}(B,B^{\prime})\). Further, let \(\widetilde{\mathsf{bd}}(B,B^{\prime})\subseteq\mathsf{bd}(B,B^{\prime})\) denote the set of vertices of \(\mathsf{bd}(B,B^{\prime})\) that do not belong to any other active boundary of \(B\).

1. For every occurrence of a non-heavy terminal pair \((s,t)\in\mathcal{X}\) and for every heavy terminal pair \((s^{\prime},t^{\prime})\in\mathcal{X}\): Initialize \(M_{(s,t)}\Leftarrow\emptyset\) and \(M_{(s^{\prime},t^{\prime})}\Leftarrow\emptyset\).
2. For every bag \(B\in\mathcal{T}\) that has at least one active boundary:
    1. Let \(\mathsf{bd}(B,B_{1}),\ldots,\mathsf{bd}(B,B_{q})\) be the active boundaries in \(B\).
    2. For each \(i\in[q]\) and for every \((s,t)\in\mathcal{X}\) that activates \(\mathsf{bd}(B,B_{i})\):
       * If \(|\widetilde{\mathsf{bd}}(B,B_{i})|\geq 2k_{BB_{i}}\), then add to \(M_{(s,t)}\) a maximal subset of \(\widetilde{\mathsf{bd}}(B,B_{i})\setminus V(\mathcal{X})\) of size at most \(2k_{BB_{i}}\). (Here, when we say that we pick a maximal subset of some set \(A\) of size at most \(a\), we mean that: (i) if \(|A|\geq a\), then we pick some subset of \(A\) of size \(a\); (ii) otherwise, we simply pick \(A\).)
       * Else, add to \(M_{(s,t)}\) the set \(\mathsf{bd}(B,B_{i})\).
3. For every heavy terminal pair \((s,t)\in\mathcal{X}\):
    1. If there exist bags \(B,B^{\prime}\in V(\mathcal{T})\) such that \(s\in\mathsf{bd}(B,B^{\prime})\) and \(t\in\mathsf{bd}(B^{\prime},B)\), then add to \(M_{(s,t)}\) a maximal subset of \((\mathsf{bd}(B,B^{\prime})\cup\mathsf{bd}(B^{\prime},B))\setminus V(\mathcal{X})\) of size at most \(2k\), with preference to unmarked vertices.
    2. If there exists a bag \(B\in V(\mathcal{T})\) such that \(\{s,t\}\subseteq V(B)\), then add to \(M_{(s,t)}\) a maximal subset of \((B\cup\bigcup_{B^{\prime}\in N_{\mathcal{T}}(B),\{s,t\}\subseteq\mathsf{bd}(B,B^{\prime})}\mathsf{bd}(B^{\prime},B))\setminus V(\mathcal{X})\) of size at most \(2k\), with preference to unmarked vertices.
4. For every bag \(B\in\mathcal{T}\) that has at least one active boundary:
    1. Let \(\mathsf{bd}(B,B_{1}),\ldots,\mathsf{bd}(B,B_{q})\) be the active boundaries in \(B\).
    2. Let \(F_{i}=\mathsf{bd}(B,B_{i})\setminus\widetilde{\mathsf{bd}}(B,B_{i})\) for all \(i\in[q]\).
    3. For every fixed \(i\in[q]\):
       * Let \(\widehat{B}_{i}^{1},\ldots,\widehat{B}_{i}^{p}\) be the neighboring bags of \(B\) such that \(F_{i}\cap\mathsf{bd}(B,\widehat{B}_{i}^{j})\neq\emptyset\) and \(\mathsf{bd}(B,\widehat{B}_{i}^{j})\) is active for all \(j\in[p]\).
       * If \(|F_{i}|\geq 2k_{B\widehat{B}_{i}^{1}}+\ldots+2k_{B\widehat{B}_{i}^{p}}\), then arbitrarily keep only \(2k_{B\widehat{B}_{i}^{j}}\) distinct vertices in the marked sets of terminal pairs that activate \(\mathsf{bd}(B,\widehat{B}_{i}^{j})\) for all \(j\in[p]\), and unmark all remaining vertices in \(F_{i}\).

This completes our Marking Procedure.

**Observation 94**.: _The Marking Procedure can be executed in polynomial time._

Next, we have the following lemma.

**Lemma 95**.: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}\) be a partition tree of \(G\), and let \(\widehat{\mathcal{X}}\subseteq\mathcal{X}\) denote the multiset of non-heavy terminal pairs. Let \(I=|\widehat{\mathcal{X}}|\). Let \(M_{1},\ldots,M_{I}\subseteq V(G)\) be the sets of marked vertices obtained by applying the Marking Procedure to \((G,\widehat{\mathcal{X}},I)\). Then, for each bag \(B\in V(\mathcal{T})\), \(|V(B)\cap\bigcup_{i\in[I]}M_{i}|\in\mathcal{O}(k)\)._

Proof.: Consider a bag \(B\) in \(\mathcal{T}\). Let \(\mathsf{bd}(B,B_{1}),\ldots,\mathsf{bd}(B,B_{q})\) denote the active boundaries in \(B\). For each \(i\in[q]\), let \(p_{i}\) denote the number of marked vertices in \(\mathsf{bd}(B,B_{i})\).
Next, we have the following lemma.

_Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}\) be a partition tree of \(G\), and let \(\widehat{\mathcal{X}}\subseteq\mathcal{X}\) denote the multiset of non-heavy terminal pairs. Let \(I=|\widehat{\mathcal{X}}|\). Let \(M_{1},\ldots,M_{I}\subseteq V(G)\) be sets of marked vertices obtained by applying the Marking Procedure to \((G,\widehat{\mathcal{X}},I)\). Then, for each bag \(B\in V(\mathcal{T})\), \(|V(B)\cap\bigcup_{i\in[I]}M_{i}|\in\mathcal{O}(k)\)._

Proof.: Consider a bag \(B\) in \(\mathcal{T}\). Let \(\mathsf{bd}(B,B_{1}),\ldots,\mathsf{bd}(B,B_{q})\) denote the active boundaries in \(B\). For each \(i\in[q]\), let \(p_{i}\) denote the number of marked vertices in \(\mathsf{bd}(B,B_{i})\). Note that if, for every \(i\in[q]\) for which \(\mathsf{bd}(B,B_{i})\subseteq M_{(s,t)}\), we have \(|\mathsf{bd}(B,B_{i})|\leq 2k_{BB_{i}}\) (here, \((s,t)\) is an arbitrary terminal pair that activates \(\mathsf{bd}(B,B_{i})\)), then by Observation 3.3 and the description of the Marking Procedure, the total number of marked vertices in \(B\) is at most \(\sum_{i=1}^{q}p_{i}\leq 4k\). Now consider the case when, for some arbitrary but fixed \(i\in[q]\), we have \(\mathsf{bd}(B,B_{i})\subseteq M_{(s,t)}\) and \(|\mathsf{bd}(B,B_{i})|>2k_{BB_{i}}\) (here, \((s,t)\) is an arbitrary terminal pair that activates \(\mathsf{bd}(B,B_{i})\)). This implies that there exist vertices in \(\mathsf{bd}(B,B_{i})\) that are shared by more than one active boundary. Consider the set \(F_{i}\subseteq\mathsf{bd}(B,B_{i})\) that is shared by boundaries, say, \(B_{1},\ldots,B_{p}\). If \(|F_{i}|\geq 2k_{BB_{1}}+\ldots+2k_{BB_{p}}\), then note that we can uniquely assign the vertices arbitrarily to each active boundary sharing them and unmark the remaining vertices in \(F_{i}\). Otherwise, it is clear that \(|F_{i}|<2k_{BB_{1}}+\ldots+2k_{BB_{p}}\); in either case, the total number of marked vertices in \(B\) remains \(\mathcal{O}(k)\).

The following lemma helps us to establish that if \((G,\widehat{\mathcal{X}},I)\) (the instance restricted to the multiset of non-heavy terminal pairs) is a Yes-instance, then there exists a solution of \((G,\widehat{\mathcal{X}},I)\) such that the paths among the non-heavy terminal pairs use only the marked vertices (obtained by applying the Marking Procedure) as internal vertices. To prove the lemma, we build on the proof given by Ahn et al. [2].

**Lemma 96**.: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}\) be a partition tree of \(G\), and let \(\widehat{\mathcal{X}}\subseteq\mathcal{X}\) denote the multiset of non-heavy terminal pairs. Let \(I=|\widehat{\mathcal{X}}|\). If \((G,\widehat{\mathcal{X}},I)\) is a Yes-instance, then there exists a minimum solution \(\mathcal{P}\) of \((G,\widehat{\mathcal{X}},I)\) such that if \(P\in\mathcal{P}\) denotes the path between one occurrence of \((s,t)\in\widehat{\mathcal{X}}\) and \(M_{(s,t)}\subseteq V(G)\) denotes the set of marked vertices obtained by applying the Marking Procedure, then \(V(P)\subseteq M_{(s,t)}\cup\{s,t\}\)._

Proof.: Let \((G,\widehat{\mathcal{X}},I)\) be a Yes-instance. For every occurrence of an arbitrary (but fixed) terminal pair \((s,t)\in\widehat{\mathcal{X}}\), let \(M_{(s,t)}\) denote the set of marked vertices obtained by applying the Marking Procedure. Furthermore, let \(\mathcal{P}\) be a minimum solution such that \(V(P)\) has maximum vertex intersection with \(M_{(s,t)}\cup\{s,t\}\), where \(P\in\mathcal{P}\) denotes the path between one occurrence of \((s,t)\). Since \((s,t)\) is a non-heavy terminal pair, \(P\) is induced (due to Proposition 16). If \(V(P)\subseteq M_{(s,t)}\cup\{s,t\}\), then we are done. So, assume otherwise. This implies that there exists some \(B\in V(\mathcal{T})\) such that \(V(P)\cap V(B)\nsubseteq M_{(s,t)}\cup\{s,t\}\). Let \((B_{1},\ldots,B_{\ell})\) be the valid path in \(\mathcal{T}\) corresponding to this particular occurrence of the terminal pair \((s,t)\). By Proposition 92, note that \(B\) is a bag on the valid path \((B_{1},\ldots,B_{\ell})\). From now on, let \(j\in[\ell]\) be such that \(B=B_{j}\), and let \(Y=(V(P)\cap V(B_{j}))\setminus\{s,t\}\).
Here, note that by the definition of the Marking Procedure, it is clear that every vertex in \(Y\) either does not belong to any other active boundary, or has been unmarked later. By Proposition 91, \(|Y|\leq 2\). Furthermore, for every \(j^{\prime}\in[\ell]\setminus\{j\}\), let \(k_{B_{j}B_{j^{\prime}}}\) denote the number of occurrences of terminal pairs that activate \(\mathsf{bd}(B_{j},B_{j^{\prime}})\). Now, consider the following cases based on the position of \(j\) in the path \((B_{1},\ldots,B_{\ell})\).

**Case 1: \(\boldsymbol{j=1}\).**: In this case, note that \(|Y|=1\) (as \(|Y|\leq 2\) and \(s\notin Y\)). Furthermore, note that \(s\notin\mathsf{bd}(B_{1},B_{2})\) (otherwise \(P\) would not be an induced path). Now, let \(y\in Y\setminus M_{(s,t)}\). Note that \(y\in\mathsf{bd}(B_{1},B_{2})\). Since \(y\) was not marked, it is clear that we have not marked the entire set \(\mathsf{bd}(B_{1},B_{2})\) for \(M_{(s,t)}\). This implies that there are \(2k_{B_{1}B_{2}}\) marked vertices in \(\mathsf{bd}(B_{1},B_{2})\) that are reserved only for the paths going from \(B_{1}\) to \(B_{2}\). Since the paths in \(\mathcal{P}\setminus\{P\}\) use at most \(2(k_{B_{1}B_{2}}-1)\) vertices in total from \(\mathsf{bd}(B_{1},B_{2})\) (due to Propositions 91 and 92), there is at least one vertex, say, \(y^{\prime}\), in \(\mathsf{bd}(B_{1},B_{2})\cap M_{(s,t)}\) that is not used by any path in \(\mathcal{P}\setminus\{P\}\). Define \(P^{\prime}\) by replacing \(y\) with \(y^{\prime}\) in \(P\). Note that \(P^{\prime}\) is an induced \((s,t)\)-path in \(G\) (as \(y\) and \(y^{\prime}\) are twins in \(G[V(B_{1})\cup V(B_{2})]\)). Furthermore, since \(y^{\prime}\notin\bigcup_{P\in\mathcal{P}}V(P)\), \(P^{\prime}\) is internally vertex-disjoint from every path in \(\mathcal{P}\setminus\{P\}\). Let \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\{P\})\cup\{P^{\prime}\}\). Since \(|V(P^{\prime})\cap(M_{(s,t)}\cup\{s,t\})|>|V(P)\cap(M_{(s,t)}\cup\{s,t\})|\), this contradicts our choice of \(\mathcal{P}\). Hence, this case is not possible.

**Case 2: \(\boldsymbol{2\leq j<\ell}\).**: First, let \(|Y\setminus M_{(s,t)}|=2\) and let \(y_{1},y_{2}\in Y\setminus M_{(s,t)}\). Without loss of generality, assume that \(y_{1}\in\mathsf{bd}(B_{j},B_{j-1})\) and \(y_{2}\in\mathsf{bd}(B_{j},B_{j+1})\). Due to Propositions 91 and 92, note that the paths in \(\mathcal{P}\setminus\{P\}\) use at most \(2(k_{B_{j}B_{j-1}}-1)\) vertices from \(\mathsf{bd}(B_{j},B_{j-1})\) and at most \(2(k_{B_{j}B_{j+1}}-1)\) vertices from \(\mathsf{bd}(B_{j},B_{j+1})\) (since we have not marked the entire sets \(\mathsf{bd}(B_{j},B_{j-1})\) and \(\mathsf{bd}(B_{j},B_{j+1})\) for \(M_{(s,t)}\), this implies that \(2k_{B_{j}B_{j-1}}\) marked vertices in \(\mathsf{bd}(B_{j},B_{j-1})\) are reserved only for the paths going from \(B_{j-1}\) to \(B_{j}\) and \(2k_{B_{j}B_{j+1}}\) marked vertices in \(\mathsf{bd}(B_{j},B_{j+1})\) are reserved only for the paths going from \(B_{j}\) to \(B_{j+1}\)). This implies that there exist \(y^{\prime}_{1}\in\mathsf{bd}(B_{j},B_{j-1})\cap M_{(s,t)}\) and \(y^{\prime}_{2}\in\mathsf{bd}(B_{j},B_{j+1})\cap M_{(s,t)}\) such that neither \(y^{\prime}_{1}\) nor \(y^{\prime}_{2}\) is used by any path in \(\mathcal{P}\setminus\{P\}\).
Note that since \(y_{1}\) and \(y^{\prime}_{1}\) are twins in \(G[V(B_{j-1})\cup V(B_{j})]\) and \(y_{2}\) and \(y^{\prime}_{2}\) are twins in \(G[V(B_{j})\cup V(B_{j+1})]\), we can define a path \(P^{\prime}\) by replacing \(y_{1}\) with \(y^{\prime}_{1}\) and \(y_{2}\) with \(y^{\prime}_{2}\) in \(P\) such that \(P^{\prime}\) is also an induced path. Since \(y^{\prime}_{1},y^{\prime}_{2}\notin\bigcup_{P\in\mathcal{P}}V(P)\), \(P^{\prime}\) is internally vertex-disjoint from every path in \(\mathcal{P}\setminus\{P\}\). Let \(\mathcal{P}^{\prime}=(\mathcal{P}\setminus\{P\})\cup\{P^{\prime}\}\). Since \(|V(P^{\prime})\cap(M_{(s,t)}\cup\{s,t\})|>|V(P)\cap(M_{(s,t)}\cup\{s,t\})|\), this contradicts our choice of \(\mathcal{P}\). Hence, this case is not possible. Second, let \(|Y\setminus M_{(s,t)}|=1\). This case is similar to the case where \(|Y\setminus M_{(s,t)}|=2\). Note that there are two possibilities here: let \(y\in Y\setminus M_{(s,t)}\), then either \(y\in\mathsf{bd}(B_{j},B_{j-1})\cap\mathsf{bd}(B_{j},B_{j+1})\) or we can assume without loss of generality that \(y\in\mathsf{bd}(B_{j},B_{j-1})\) and \(y\notin\mathsf{bd}(B_{j},B_{j+1})\). In either case, it is possible to find a marked vertex, say, \(y^{\prime}\), such that we can replace \(y\) with \(y^{\prime}\) in \(P\) to obtain another minimum solution that has one more vertex in its vertex set among the marked vertices leading to a contradiction to the choice of \(\mathcal{P}\). Hence, this case is also not possible. **Case 3: \(\boldsymbol{j=\ell}\).**: This case is symmetric to Case 1: Replace \(B_{1}\) with \(B_{\ell}\) and \(s\) with \(t\). Hence, this case is not possible. The next lemma helps us to establish that if \((G,\mathcal{X},k)\) is a Yes-instance and \((s,t)\in\mathcal{X}\) is a heavy terminal pair, then there exists a solution of \((G,\mathcal{X},k)\) that uses only the marked vertices obtained by applying the Marking Procedure (as internal vertices) for every occurrence of \((s,t)\). **Lemma 97**.: _Let \((G,\mathcal{X},k)\) be an instance of \(\textsc{VDP}\) where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}\) be a partition tree of \(G\). If \((G,\mathcal{X},k)\) is a Yes-instance, then there exists a minimum solution \(\mathcal{P}\) of \((G,\mathcal{X},k)\) such that if \(\mathcal{P}^{\prime}\subseteq\mathcal{P}\) denotes a set of internally vertex-disjoint paths for every occurrence of a heavy terminal pair \((s,t)\) and \(M_{(s,t)}\subseteq V(G)\) denotes the set of marked vertices obtained by applying Marking Procedure, then \(\bigcup_{P\in\mathcal{P}^{\prime}}V(P)\subseteq M_{(s,t)}\cup\{s,t\}\)._ Proof.: Let \((G,\mathcal{X},k)\) be a Yes-instance, and let \(\mathcal{P}\) be a minimum solution of \((G,\mathcal{X},k)\). For an arbitrary (but fixed) heavy terminal pair \((s,t)\in\mathcal{X}\), let \(M_{(s,t)}\) denote the set of marked vertices obtained by applying the Marking Procedure (note that \(M_{(s,t)}\) contains marked vertices for every occurrence of \((s,t)\)). Furthermore, let \(\mathcal{P}^{\prime}\subseteq\mathcal{P}\) denote the set of internally vertex-disjoint paths for every occurrence of \((s,t)\) such that \(\bigcup_{P\in\mathcal{P}^{\prime}}V(P)\) has maximum vertex intersection with \(M_{(s,t)}\cup\{s,t\}\). If \(\bigcup_{P\in\mathcal{P}^{\prime}}V(P)\subseteq M_{(s,t)}\cup\{s,t\}\), then we are done. So, assume otherwise. 
This implies that there exists some \(P\in\mathcal{P}^{\prime}\) such that \(V(P)\nsubseteq M_{(s,t)}\cup\{s,t\}\). By Proposition 16 and Observation 17, the paths in \(\mathcal{P}^{\prime}\) are of the form \(P_{st}\) or \((s,v,t)\), where \(v\in N(s)\cap N(t)\). Since \(P\) has length two, this further implies that there exists a vertex, say, \(v\in V(P)\), such that \(v\notin M_{(s,t)}\). Since the paths in \(\mathcal{P}\setminus\{P\}\) use at most \(2(k-1)\) vertices in total from \(M_{(s,t)}\), there is at least one vertex, say, \(v^{\prime}\), in \(M_{(s,t)}\) that is not used by any path in \(\mathcal{P}\). Define \(P^{\prime}\) by replacing \(v\) with \(v^{\prime}\) in \(P\). Furthermore, since \(v^{\prime}\notin\bigcup_{P\in\mathcal{P}^{\prime}}V(P)\), \(P^{\prime}\) is internally vertex-disjoint from every path in \(\mathcal{P}^{\prime}\setminus\{P\}\). Let \(\mathcal{Q}=(\mathcal{P}\setminus\{P\})\cup\{P^{\prime}\}\), and let \(\mathcal{Q}^{\prime}=(\mathcal{P}^{\prime}\setminus\{P\})\cup\{P^{\prime}\}\) denote its paths for the occurrences of \((s,t)\). Since \(|\bigcup_{Q\in\mathcal{Q}^{\prime}}V(Q)\cap(M_{(s,t)}\cup\{s,t\})|>|\bigcup_{P\in\mathcal{P}^{\prime}}V(P)\cap(M_{(s,t)}\cup\{s,t\})|\), this contradicts our choice of \(\mathcal{P}\). Hence, \(\bigcup_{P\in\mathcal{P}^{\prime}}V(P)\subseteq M_{(s,t)}\cup\{s,t\}\).

**Remark**.: From now onwards (throughout this section), let \(M_{1}\) and \(M_{2}\) denote the sets of marked vertices obtained by applying the Marking Procedure to each occurrence of every non-heavy terminal pair and each heavy terminal pair, respectively.

Before proceeding further, let us discuss in brief what we are planning to do next. Due to the Marking Procedure and Lemmas 96 and 97, we know that the bags in \(\mathcal{T}\) that do not contain any marked vertex can be removed without changing the answer to our input instance. Furthermore, note that we mark only \(\mathcal{O}(k^{2})\) vertices for all heavy terminal pairs (combined). However, for non-heavy terminal pairs, we have only established that the number of marked vertices in each bag is \(\mathcal{O}(k)\); we still need to bound the number of bags that have a non-empty intersection with \(M_{1}\). For this purpose, consider the following definition.

**Definition 99** (Well-Partitioned Forest).: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}\) be a partition tree of \(G\). After applying the Marking Procedure, the subgraph of \(\mathcal{T}\) induced by all bags with a non-empty intersection with \(M_{1}\) is called a well-partitioned forest of \((G,\mathcal{X},k)\)._

The next observation follows from the fact that a well-partitioned forest consists of the union of at most \(k\) paths.

**Observation 100**.: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}^{\prime}\) be a well-partitioned forest of \((G,\mathcal{X},k)\) as described in Definition 99. Then, \(\mathcal{T}^{\prime}\) has at most \(2k\) bags of degree one._

By Observation 100 and the fact that the number of vertices of degree at least three in a forest is always bounded by the number of vertices of degree \(1\), it is left to bound the number of degree \(2\) bags in \(\mathcal{T}^{\prime}\). The next reduction rules (RR13 and RR14) help us to remove bags of degree \(2\) in \(\mathcal{T}^{\prime}\) with some additional properties without changing the answer of the given instance. The reduction rules RR13 and RR14 were also used by Ahn et al. [2].
However, the condition that \(V(B)\cap M_{2}=\emptyset\), which was not required in their setting, is necessary in RR14. Furthermore, note that the arguments given in the final lemma of this section are crucial.

**RR13**.: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}^{\prime}\) be a well-partitioned forest of \(G\). Let \(B\in V(\mathcal{T}^{\prime})\) such that \(V(B)\cap V(\mathcal{X})=\emptyset\), \(d_{\mathcal{T}^{\prime}}(B)=2\), and \(V(B)\cap M_{2}=\emptyset\). Let \(A\) and \(C\) be the two neighbors of \(B\) in \(\mathcal{T}^{\prime}\). Let \(k_{BA}\) and \(k_{BC}\) denote the number of occurrences of terminal pairs that activate \(\mathsf{bd}(B,A)\) and \(\mathsf{bd}(B,C)\), respectively. If \(|\mathsf{bd}(B,A)|<k_{BA}\) or \(|\mathsf{bd}(B,C)|<k_{BC}\), then answer negatively._

**RR14**.: _Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}^{\prime}\) be a well-partitioned forest of \(G\). Let \(B\in V(\mathcal{T}^{\prime})\) such that \(V(B)\cap V(\mathcal{X})=\emptyset\), \(d_{\mathcal{T}^{\prime}}(B)=2\), and \(V(B)\cap M_{2}=\emptyset\). Let \(A\) and \(C\) be the two neighbors of \(B\) in \(\mathcal{T}^{\prime}\). Let \(k_{BA}\) and \(k_{BC}\) denote the number of occurrences of terminal pairs that activate \(\mathsf{bd}(B,A)\) and \(\mathsf{bd}(B,C)\), respectively. If \(|\mathsf{bd}(B,A)|\geq k_{BA}\) and \(|\mathsf{bd}(B,C)|\geq k_{BC}\), then reduce \((G,\mathcal{X},k)\) to \((G^{\prime},\mathcal{X},k)\), where \(G^{\prime}\) is obtained from \(G\) by removing \(B\) and making all vertices in \(\mathsf{bd}(A,B)\) adjacent to all vertices in \(\mathsf{bd}(C,B)\)._

The following proposition establishes that RR13 and RR14 are safe; moreover, both rules can be applied in polynomial time.

**Proposition** ([2]).: _RR13 and RR14 are safe._

Now, we have our final lemma.

_Let \((G,\mathcal{X},k)\) be an instance of VDP where \(G\) is a well-partitioned chordal graph obtained after the exhaustive application of RR12. Furthermore, let \(\mathcal{T}^{\prime}\) be the well-partitioned forest of \(G\) obtained after applying the Marking Procedure followed by an exhaustive application of RR13 and RR14 on a partition tree \(\mathcal{T}\) of \(G\). Then, the number of degree 2 bags in \(\mathcal{T}^{\prime}\) is \(\mathcal{O}(k)\)._

Proof.: Note that after applying RR13 and RR14 exhaustively, if we do not answer negatively and there exists a bag, say, \(B\), with degree 2 in \(\mathcal{T}^{\prime}\), then either \(V(B)\cap V(\mathcal{X})\neq\emptyset\) or \(V(B)\cap M_{2}\neq\emptyset\). Since \(|V(\mathcal{X})|\leq 2k\), the number of degree 2 bags having a non-empty intersection with \(V(\mathcal{X})\) is at most \(2k\). Next, let \(V(B)\cap V(\mathcal{X})=\emptyset\) and \(V(B)\cap M_{2}\neq\emptyset\) for some bag \(B\in V(\mathcal{T}^{\prime})\). Then, due to the description of the Marking Procedure, at least one of the neighboring bags of \(B\) in \(\mathcal{T}^{\prime}\) must contain a heavy terminal pair (as \(V(B)\cap V(\mathcal{X})=\emptyset\) and \(V(B)\cap M_{2}\neq\emptyset\)). Since there can be at most \(\left\lfloor\frac{k}{2}\right\rfloor\) heavy terminal pairs in \(\mathcal{X}\), there are at most \(\left\lfloor\frac{k}{2}\right\rfloor\) bags with \(V(B)\cap M_{2}\neq\emptyset\) and \(V(B)\cap V(\mathcal{X})=\emptyset\) in \(\mathcal{T}^{\prime}\).
Thus, the lemma holds.

Note that, throughout this section, our initial parameter \(k\) does not increase during the application of reduction rules RR12-RR14. Therefore, using the Marking Procedure together with the preceding lemmas and propositions, we have the following theorem.

**Theorem**.: _VDP on well-partitioned chordal graphs admits a kernel with \(\mathcal{O}(k^{2})\) vertices._

## 6 Polynomial-time Algorithm for Threshold Graphs

Unlike the case of EDP, the VDP problem becomes easy on highly restricted graph classes. While it remains \(\mathsf{NP}\)-hard on split graphs, we will show that it becomes polynomial-time solvable on threshold graphs. Recall that a split graph with a split partition \((C,I)\) is threshold if we can order the vertices of \(I\) as \(v_{1},v_{2},\ldots,v_{|I|}\) such that \(N(v_{1})\subseteq N(v_{2})\subseteq\cdots\subseteq N(v_{|I|})\). On an intuitive level, \(v_{1}\) is the "weakest" vertex of the graph, so the paths starting at \(v_{1}\) need to be processed first. We present a greedy argument stating that if \(v_{1}\) is a terminal, then we can freely allocate the paths starting at \(v_{1}\) without "worrying" about the other paths, and then remove \(v_{1}\) from the graph. Throughout this section, we abbreviate \((G,\mathcal{X},k)\) as simply \((G,\mathcal{X})\) when referring to a VDP instance because we do not need to keep track of the parameter. Now, let us begin with the following lemma.

**Lemma 104**.: _Let \((G,\mathcal{X})\) be an instance of VDP such that \(G\) is a threshold graph with a split partition \(V(G)=(C,I)\). Let \(v\in I\) be a vertex for which \(N_{G}(v)\subseteq N_{G}(w)\) for every \(w\in I\), and let \(\mathcal{X}_{v}\subseteq\mathcal{X}\) be the subset of pairs containing \(v\). Suppose that \(\mathcal{X}_{v}\neq\emptyset\). Then, in polynomial time, we either detect that \((G,\mathcal{X})\) is a No-instance, or compute an equivalent instance \((G^{\prime},\mathcal{X}\setminus\mathcal{X}_{v})\) such that \(G^{\prime}\) is an induced subgraph of \(G-v\)._

Proof.: Let \(Y\) be the set of vertices, different from \(v\), that appear in at least one pair in \(\mathcal{X}_{v}\), and let \(T\) be the set of vertices appearing in \(\mathcal{X}\setminus\mathcal{X}_{v}\). Let \(\widehat{\mathcal{X}}_{v}\subseteq\mathcal{X}_{v}\) be defined as the _set_ \(\{\{v,y\}\mid y\in N_{G}(v)\cap Y\}\) (if there are several identical elements in \(\mathcal{X}_{v}\), then we choose one of them). We set \(\ell=|\mathcal{X}_{v}\setminus\widehat{\mathcal{X}}_{v}|\), where \(\mathcal{X}_{v}\setminus\widehat{\mathcal{X}}_{v}\) is considered a multiset. If \(|N_{G}(v)\setminus(Y\cup T)|<\ell\), then \(N_{G}(v)\) cannot accommodate all paths starting at \(v\), so we can report that \((G,\mathcal{X})\) is a No-instance. Otherwise, there exists an injective mapping \(\tau\colon[\ell]\to N_{G}(v)\setminus(Y\cup T)\); we fix an arbitrary one. We construct a solution \(\mathcal{P}_{v}\) to the instance \((G,\mathcal{X}_{v})\) as follows. For each \(u\in N_{G}(v)\cap Y\), add the path \(P_{vu}=(v,u)\) to \(\mathcal{P}_{v}\); this resolves the pairs in \(\widehat{\mathcal{X}}_{v}\). Next, for each \(i\in[\ell]\) we consider the terminal pair \(\{s_{i},t_{i}\}\in\mathcal{X}_{v}\setminus\widehat{\mathcal{X}}_{v}\) (where \(s_{i}=v\)) and insert the path \((v,\tau(i),t_{i})\) into \(\mathcal{P}_{v}\). Observe that, due to the choice of \(v\), the set \(N_{G}(v)\setminus\{t_{i}\}\) is contained in \(N_{G}(t_{i})\), so the edge between \(\tau(i)\) and \(t_{i}\) is present in \(G\).
Let \(G^{\prime}\) be obtained from \(G\) by removing the vertex set \((\bigcup_{P\in\mathcal{P}_{v}}V(P))\setminus T\). Note that this set contains \(v\). We will show that \((G^{\prime},\mathcal{X}\setminus\mathcal{X}_{v})\) is equivalent to \((G,\mathcal{X})\). The first implication is straightforward: when \(\mathcal{P}^{\prime}\) is a solution to \((G^{\prime},\mathcal{X}\setminus\mathcal{X}_{v})\), then \(\mathcal{P}^{\prime}\cup\mathcal{P}_{v}\) is a family of internally vertex-disjoint paths that forms a solution to \((G,\mathcal{X})\). In order to show the second implication, we argue that \(\mathcal{P}_{v}\) is part of some solution. We claim that if \((G,\mathcal{X})\) is a Yes-instance, then there exists a solution \(\mathcal{P}\) such that \(\mathcal{P}_{v}\subseteq\mathcal{P}\). Let \(\mathcal{P}\) be a minimum solution that additionally maximizes the number of used paths that belong to \(\mathcal{P}_{v}\). Suppose that \(\mathcal{P}_{v}\not\subseteq\mathcal{P}\), and let \(P\) be a path from \(\mathcal{P}_{v}\setminus\mathcal{P}\). By Observation 17 we know that every single-edge path corresponding to a pair in \(\widehat{\mathcal{X}}_{v}\) is present in \(\mathcal{P}\), so \(P\) must be of the form \((v,\tau(i),t_{i})\) for some \(i\in[\ell]\). Observe that an induced path in a threshold graph can have length at most two, so by Proposition 16 every path in \(\mathcal{P}\) visits at most three vertices. By the choice of \(\mathcal{P}\), there exists some path \(Q\in\mathcal{P}\setminus\mathcal{P}_{v}\) which uses the vertex \(u=\tau(i)\); recall that \(u\not\in Y\cup T\), so \(u\) is an internal vertex of \(Q\). Then \(Q\) must be of the form \((x,u,y)\), where \(x,y\in Y\cup T\). Let \(u^{\prime}\in N_{G}(v)\setminus(Y\cup T)\) be a vertex visited by some \((v,t_{i})\)-path \(P^{\prime}\) from \(\mathcal{P}\setminus\mathcal{P}_{v}\) (we know that \(P^{\prime}\) exists because some path must be a replacement for \(P\)). Observe that, due to the choice of \(v\), the vertex \(u^{\prime}\) belongs to both \(N_{G}(x)\) and \(N_{G}(y)\). Therefore, we can replace \(Q\) with \((x,u^{\prime},y)\) and replace \(P^{\prime}\) with \(P\), obtaining a new valid minimum solution. This solution uses more paths from \(\mathcal{P}_{v}\) than \(\mathcal{P}\), yielding a contradiction. Let \(\mathcal{P}\) be the solution to \((G,\mathcal{X})\) satisfying \(\mathcal{P}_{v}\subseteq\mathcal{P}\). Then \(\mathcal{P}\setminus\mathcal{P}_{v}\) cannot use any internal vertex from any path in \(\mathcal{P}_{v}\), nor any terminal vertex from \(\mathcal{P}_{v}\) that does not belong to \(T\). Hence, \(\mathcal{P}\setminus\mathcal{P}_{v}\) is a solution to \((G^{\prime},\mathcal{X}\setminus\mathcal{X}_{v})\). The lemma follows.

We can now utilize the greedy procedure to repeatedly remove the vertices from the independent set and reduce the graph to a clique. It remains to ensure that the weakest vertex is a terminal, so Lemma 104 can be applied. To this end, we take advantage of the Clean-Up operation from Section 5.1.

**Theorem**.: _VDP on threshold graphs is solvable in polynomial time._

Proof.: We begin with applying the Clean-Up operation (Definition 66) to the given instance, which does not affect the answer (Lemmas 73 and 74) and does not increase the graph size. Since the class of threshold graphs is closed under vertex deletion, we obtain a threshold graph as well. Let \((G,\mathcal{X})\) denote the instance after performing Clean-Up and let \((C,I)\) be the split partition of \(G\).
Due to such preprocessing, we can assume that every vertex from \(I\) appears in \(\mathcal{X}\). Let \(v\in I\) be a vertex for which \(N_{G}(v)\subseteq N_{G}(w)\) for every \(w\in I\) (its existence is guaranteed by the definition of a threshold graph), and let \(\mathcal{X}_{v}\subseteq\mathcal{X}\) be the subset of pairs containing \(v\). We apply the reduction from Lemma 104 and obtain an equivalent instance \((G^{\prime},\mathcal{X}\setminus\mathcal{X}_{v})\) such that \(G^{\prime}\) is an induced subgraph of \(G-v\). As a consequence, \(G^{\prime}\) is again a threshold graph, and it has fewer vertices than \(G\). We iterate the procedure described above as long as the independent part of the graph is non-empty. At every iteration, the size of the graph decreases, so at some point, the process terminates. We either arrive at a No-instance (and report it as specified in Lemma 104) or obtain a graph that is a clique. Such an instance can be solved easily: it suffices to check whether the number of terminal pairs that cannot be resolved by single-edge paths (due to repetitions of terminal pairs) does not exceed the number of non-terminal vertices. This concludes the proof.
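For intuition, the greedy procedure from the proof above can be rendered as the following simplified Python sketch. It assumes that Clean-Up has already been performed, that adjacency is given as sets, and that the clique remaining at the end is handled by the counting test described in the proof; the function and variable names are our illustrative choices, not the authors' implementation.

```python
from collections import Counter

def threshold_vdp_feasible(adj, I_ordered, pairs):
    """Greedy feasibility check for VDP on a threshold graph.

    adj       -- dict: vertex -> set of neighbours (modified in place)
    I_ordered -- independent-side vertices, weakest first
                 (N(v1) <= N(v2) <= ... under inclusion)
    pairs     -- terminal pairs as a list of frozensets, repetitions allowed
    """
    pairs = list(pairs)
    for v in I_ordered:
        if v not in adj:                     # already consumed by an earlier step
            continue
        Xv = [p for p in pairs if v in p]
        if not Xv:
            continue
        T = {x for p in pairs if v not in p for x in p}  # terminals of other pairs
        Y = {x for p in Xv for x in p if x != v}         # partners of v
        direct = Y & adj[v]                  # pairs resolvable by a single edge
        need = len(Xv) - len(direct)         # occurrences needing a middle vertex
        free = adj[v] - Y - T                # candidate internal vertices
        if need > len(free):
            return False                     # N(v) cannot host all paths from v
        # Remove v, the used middle vertices, and non-terminal partners
        # (sorting just makes the arbitrary choice of middles deterministic).
        used = {v} | (Y - T) | set(sorted(free)[:need])
        for u in used:
            for w in adj.pop(u, set()):
                if w in adj:
                    adj[w].discard(u)
        pairs = [p for p in pairs if v not in p]
    # The remainder is a clique: each repeated occurrence of a pair needs one
    # private non-terminal middle vertex.
    extra = sum(c - 1 for c in Counter(pairs).values())
    spare = len(adj) - len({x for p in pairs for x in p})
    return extra <= spare
```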
## 7 Conclusion

In this paper, we studied VDP and EDP, two disjoint paths problems, in the realm of Parameterized Complexity. We analyzed these problems with respect to the natural parameter "the number of (occurrences of) terminal pairs". We gave several improved kernelization results as well as new kernelization results for these problems on subclasses of chordal graphs. For VDP, we provided a \(4k\) vertex kernel on split graphs and an \(\mathcal{O}(k^{2})\) vertex kernel on well-partitioned chordal graphs. We also showed that VDP becomes polynomial-time solvable on threshold graphs. For EDP, we first proved NP-hardness on complete graphs. Second, we provided an \(\mathcal{O}(k^{2.75})\) vertex kernel on split graphs, a \(7k+1\) vertex kernel on threshold graphs, an \(\mathcal{O}(k^{2})\) vertex kernel on block graphs, and a \(2k+1\) vertex kernel on clique paths. Apart from the obvious open questions to improve the sizes of the kernels we designed, the following is a natural next step for future work. Do VDP and/or EDP admit polynomial kernels on chordal graphs? It is worth noting here that Golovach et al. [23] proved that it is unlikely for Set-Restricted Disjoint Paths, a generalization of VDP where each terminal pair has to find its path from a predefined set of vertices, to admit a polynomial kernel even on interval graphs. However, as noted by them, their reduction is heavily dependent on the sets designed for terminal pairs and thus cannot be directly generalized to VDP. Moreover, recently Wlodarczyk and Zehavi [46] established that both VDP and EDP are unlikely to admit polynomial compression even when restricted to planar graphs. They also suggested investigating the existence of polynomial kernels for VDP and EDP on chordal graphs. Another interesting open problem is to study the kernelization complexity of EDP on well-partitioned chordal graphs. Note that EDP on well-partitioned chordal graphs is more subtle than VDP on the same class. The reason is that, in VDP, a path between a non-adjacent terminal pair must be induced due to Proposition 16. This paves the way to define valid paths in the partition tree of the given well-partitioned chordal graph. However, the concept of valid paths and, thus, the Marking Procedure (from Section 5.2) fails for EDP, as a path in the solution can visit the bags of the partition tree in an arbitrary manner. Furthermore, the approach for EDP on block graphs does not generalize to well-partitioned chordal graphs because the intersection of adjacent bags can be large (in well-partitioned chordal graphs), whereas, for a block graph \(G\), there always exists a partition tree such that for any two consecutive bags, the corresponding boundary of one of the bags has size one.
2309.08474
VulnSense: Efficient Vulnerability Detection in Ethereum Smart Contracts by Multimodal Learning with Graph Neural Network and Language Model
This paper presents the VulnSense framework, a comprehensive approach to efficiently detect vulnerabilities in Ethereum smart contracts using a multimodal learning approach on graph-based and natural language processing (NLP) models. Our proposed framework combines three types of features from smart contracts comprising source code, opcode sequences, and control flow graph (CFG) extracted from bytecode. We employ Bidirectional Encoder Representations from Transformers (BERT), Bidirectional Long Short-Term Memory (BiLSTM) and Graph Neural Network (GNN) models to extract and analyze these features. The final layer of our multimodal approach consists of a fully connected layer used to predict vulnerabilities in Ethereum smart contracts. Addressing limitations of existing vulnerability detection methods relying on single-feature or single-model deep learning techniques, our method surpasses their accuracy and effectiveness constraints. We assess VulnSense using a collection of 1,769 smart contracts derived from the combination of three datasets: Curated, SolidiFI-Benchmark, and Smartbugs Wild. We then make a comparison with various unimodal and multimodal learning techniques contributed by GNN, BiLSTM and BERT architectures. The experimental outcomes demonstrate the superior performance of our proposed approach, achieving an average accuracy of 77.96\% across all three categories of vulnerable smart contracts.
Phan The Duy, Nghi Hoang Khoa, Nguyen Huu Quyen, Le Cong Trinh, Vu Trung Kien, Trinh Minh Hoang, Van-Hau Pham
2023-09-15T15:26:44Z
http://arxiv.org/abs/2309.08474v1
VulnSense: Efficient Vulnerability Detection in Ethereum Smart Contracts by Multimodal Learning with Graph Neural Network and Language Model ###### Abstract This paper presents the VulnSense framework, a comprehensive approach to efficiently detect vulnerabilities in Ethereum smart contracts using a multimodal learning approach on graph-based and natural language processing (NLP) models. Our proposed framework combines three types of features from smart contracts comprising source code, opcode sequences, and control flow graph (CFG) extracted from bytecode. We employ Bidirectional Encoder Representations from Transformers (BERT), Bidirectional Long Short-Term Memory (BiLSTM) and Graph Neural Network (GNN) models to extract and analyze these features. The final layer of our multimodal approach consists of a fully connected layer used to predict vulnerabilities in Ethereum smart contracts. Addressing limitations of existing vulnerability detection methods relying on single-feature or single-model deep learning techniques, our method surpasses their accuracy and effectiveness constraints. We assess VulnSense using a collection of 1,769 smart contracts derived from the combination of three datasets: Curated, SolidiFI-Benchmark, and Smartbugs Wild. We then make a comparison with various unimodal and multimodal learning techniques contributed by GNN, BiLSTM and BERT architectures. The experimental outcomes demonstrate the superior performance of our proposed approach, achieving an average accuracy of 77.96% across all three categories of vulnerable smart contracts. keywords: Vulnerability Detection, Smart Contract, Deep Learning, Graph Neural Networks, Multimodal + Footnote †: journal: Knowledge-Based Systems ## 1 Introduction The Blockchain keyword has become increasingly popular in the era of Industry 4.0, with many applications for a variety of purposes, both good and bad. For instance, in the field of finance, Blockchain is utilized to create new, faster, and more secure payment systems, examples of which include Bitcoin and Ethereum. However, Blockchain can also be exploited for money laundering, as it enables anonymous money transfers, as exemplified by cases like Silk Road [1]. The number of keywords associated with blockchain is growing rapidly, reflecting the increasing interest in this technology. A typical example is the smart contract deployed on Ethereum. Smart contracts are programmed in Solidity, a language that has been developed in recent years. When deployed on a blockchain system, smart contracts often execute transactions related to cryptocurrency, specifically the ether (ETH) token. However, smart contracts still have many vulnerabilities, as pointed out by Zou et al. [2]. Alongside the immutable and transparent properties of Blockchain, the presence of vulnerabilities in smart contracts deployed within the Blockchain ecosystem enables attackers to exploit flawed smart contracts, thereby affecting the assets of individuals and organizations, as well as the stability of the Blockchain ecosystem. In more detail, the DAO attack [3] presented by Mehar et al. is a clear example of the severity of such vulnerabilities, as it resulted in significant losses of up to $50 million. To address these issues, Kushwaha et al. [4] conducted a research survey on the different types of vulnerabilities in smart contracts and provided an overview of the existing tools for detecting and analyzing these vulnerabilities.
Developers have created a number of tools to detect vulnerabilities in smart contract source code, such as Oyente [5], Slither [4], Conkas [6], Mythril [7] and Securify [8]. These tools use static and dynamic analysis to seek vulnerabilities, but they may not cover all execution paths, leading to false negatives. Additionally, exploring all execution paths in complex smart contracts can be time-consuming. Current endeavors in contract security analysis heavily depend on pre-defined rules established by specialists, a process that demands significant labor and lacks scalability. Meanwhile, the emergence of Machine Learning (ML) methods in the detection of vulnerabilities in software has also been explored. This is also applicable to smart contracts, where numerous tools and studies have been developed to identify security bugs, such as ESCORT by Lutz [9], ContractWard by Wang [10] and the work of Qian [11]. The ML-based methods have significantly improved performance over static and dynamic analysis methods, as indicated in the study by Jiang [12]. However, the current studies do exhibit certain limitations, primarily centered around the utilization of only a single type of feature from the smart contract as the input for ML models. To elaborate, a smart contract's representation and subsequent analysis can be approached through its source code, employing techniques such as NLP, as demonstrated in the study conducted by Khodadadi et al. [13]. Conversely, an alternative approach, as showcased by Chen et al. [14], involves the usage of the runtime bytecode of a smart contract published on the Ethereum blockchain. Additionally, Wang and colleagues [10] addressed vulnerability detection using opcodes extracted through the employment of the Solc tool [15] (the Solidity compiler), based on either the contract's source code or bytecode. In practical terms, these methodologies fall under the categorization of unimodal or monomodal models, designed to exclusively handle one distinct type of data feature. Extensively investigated and proven beneficial in domains such as computer vision, natural language processing, and network security, these unimodal models do exhibit impressive performance characteristics. However, their inherent drawback lies in their limited perspective, resulting from their exclusive focus on singular data attributes, which overlooks characteristics that could support more in-depth analysis. This limitation has prompted the emergence of multimodal models, which offer a more comprehensive outlook on data objects. The works of Jabeen and colleagues [16], Tadas et al. [17], Nam et al. [18], and Xu [19] underscore this trend. Specifically, multimodal learning harnesses distinct ML models, each accommodating diverse input types extracted from an object. This approach facilitates the acquisition of holistic and intricate representations of the object, a concerted effort to surmount the limitations posed by unimodal models. By leveraging multiple input sources, multimodal models endeavor to enrich the understanding of the analyzed data objects, resulting in more comprehensive and accurate outcomes. Recently, multimodal vulnerability detection models for smart contracts have emerged as a new research area, combining different techniques to process diverse data, including source code, bytecode and opcodes, to enhance the accuracy and reliability of AI systems.
Numerous studies have demonstrated the effectiveness of using multimodal deep learning models to detect vulnerabilities in smart contracts. For instance, Yang et al. [20] proposed a multimodal AI model that combines source code, bytecode, and execution traces to detect vulnerabilities in smart contracts with high accuracy. Chen et al. [30] proposed a new hybrid multimodal model called the HyMo Framework, which combines static and dynamic analysis techniques to detect vulnerabilities in smart contracts. Their framework uses multiple methods and outperforms other methods on several test datasets. Recognizing that these features accurately reflect smart contracts, and recognizing the potential of multimodal learning, we employ a multimodal approach to build a vulnerability detection tool for smart contracts called VulnSense. Different features can provide unique insights into vulnerabilities in smart contracts. Source code offers a high-level understanding of contract logic, bytecode reveals low-level execution details, and opcode sequences capture the execution flow. By fusing these features, the model can extract a richer set of features, potentially leading to more accurate detection of vulnerabilities. The main contributions of this paper are summarized as follows: * First, we propose a multimodal learning approach consisting of BERT, BiLSTM and GNN to analyze the smart contract under a multi-view strategy by leveraging the capability of NLP algorithms, corresponding to three types of features: source code, opcodes, and the CFG generated from bytecode. * Then, we extract and leverage three types of features from smart contracts to make a comprehensive feature fusion. More specifically, our smart contract representations, which are created from real-world smart contract datasets, including Smartbugs Curated, SolidiFI-Benchmark and Smartbugs Wild, help the model capture semantic relationships of characteristics in the analysis phase. * Finally, we evaluate the performance of the VulnSense framework on real-world vulnerable smart contracts to indicate its capability of detecting security defects such as Reentrancy and Arithmetic vulnerabilities. Additionally, we also compare our framework with unimodal models and other multimodal ones to prove the superior effectiveness of VulnSense. The remainder of this article is organized as follows. Section 2 gives a brief background of the applied components, while Section 3 reviews related work on smart contract vulnerability detection. The methodology is discussed in Section 4. Section 5 describes the experimental settings and scenarios with the result analysis of our work. Finally, we conclude the paper in Section 6. ## 2 Background ### Bytecode of Smart Contracts Bytecode is a sequence of hexadecimal machine instructions generated from high-level programming languages such as C/C++, Python, and, similarly, Solidity. In the context of deploying smart contracts written in Solidity, bytecode serves as the compiled version of the smart contract's source code and is executed on the blockchain environment. Bytecode encapsulates the actions that a smart contract can perform. It contains statements and the necessary information to execute the contract's functionalities. Bytecode is commonly derived from Solidity or other languages used in smart contract development. When deployed on Ethereum, bytecode is categorized into two types: creation bytecode and runtime bytecode.
1. **Creation Bytecode:** The creation bytecode runs only once, during the deployment of the smart contract onto the system. It is responsible for initializing the contract's initial state, including initializing variables and constructor functions. Creation bytecode does not reside within the deployed smart contract on the blockchain network.

2. **Runtime Bytecode:** Runtime bytecode contains executable information about the smart contract and is deployed onto the blockchain network. Once a smart contract has been compiled into bytecode, it can be deployed onto the blockchain and executed by nodes within the network. Nodes execute the bytecode's statements to determine the behavior and interactions of the smart contract.

Bytecode is highly deterministic and remains immutable after compilation. It provides participants in the blockchain network the ability to inspect and verify smart contracts before deployment. In summary, bytecode serves as a bridge between high-level programming languages and the blockchain environment, enabling smart contracts to be deployed and executed. Its deterministic nature and pre-deployment verifiability contribute to the security and reliability of smart contract implementations. ### Opcode of Smart Contracts Opcode in smart contracts refers to the executable machine instructions used in a blockchain environment to perform the functions of the smart contract. Opcodes are low-level machine commands used to control the execution process of the contract on a blockchain virtual machine, such as the Ethereum Virtual Machine (EVM). Each opcode represents a specific task within the smart contract, including logical operations, arithmetic calculations, memory management, data access, calling and interacting with other contracts in the Blockchain network, and various other tasks. Opcodes define the actions that a smart contract can perform and specify how the contract's data and state are processed. These opcodes are listed and defined in the bytecode representation of the smart contract. The use of opcodes provides flexibility and standardization in implementing the functionalities of smart contracts. Opcodes ensure consistency and security during the execution of the contract on the blockchain, and play a significant role in determining the behavior and logic of the smart contract. ### Control Flow Graph The CFG is a powerful data structure in the analysis of Solidity source code, used to understand and optimize the control flow of a program extracted from the bytecode of a smart contract. The CFG helps determine the structure and interactions between code blocks in the program, providing crucial information about how the program executes and links elements in the control flow. Specifically, CFG construction identifies jump points and conditions in Solidity bytecode to build a control flow graph. This graph describes the basic blocks and control branches in the program, thereby creating a clear understanding of the structure of the Solidity program. With a CFG, we can identify potential issues in the program such as infinite loops, incorrect conditions, or security vulnerabilities. By examining control flow paths in the CFG, we can detect logic errors or potential unwanted situations in the Solidity program. Furthermore, the CFG supports the optimization of Solidity source code. By analyzing and understanding the control flow structure, we can propose performance and correctness improvements for the Solidity program. This is particularly crucial in the development of smart contracts on the Ethereum platform, where performance and security play essential roles. In conclusion, the CFG is a powerful representation that allows us to analyze, understand, and optimize the control flow in Solidity programs. By constructing control flow graphs and analyzing the control flow structure, we can identify errors, verify correctness, and optimize Solidity source code to ensure performance and security.
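As a rough illustration of how such a graph can be assembled from an already-disassembled opcode sequence, the following Python sketch cuts the sequence into basic blocks at jump destinations and terminator instructions and adds fall-through edges. Resolving actual JUMP targets requires stack analysis and is omitted here; the helper names and the use of networkx are our own illustrative choices, not a tool prescribed by the paper.

```python
import networkx as nx

TERMINATORS = {"JUMP", "JUMPI", "STOP", "RETURN", "REVERT", "SELFDESTRUCT"}

def basic_blocks(opcodes):
    """Split a linear list of opcode names into basic blocks."""
    blocks, current = [], []
    for op in opcodes:
        if op == "JUMPDEST" and current:     # a jump target starts a new block
            blocks.append(current)
            current = []
        current.append(op)
        if op in TERMINATORS:                # a terminator ends the block
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks

def coarse_cfg(opcodes):
    """Build a directed graph over basic blocks with fall-through edges."""
    g = nx.DiGraph()
    blocks = basic_blocks(opcodes)
    for i, blk in enumerate(blocks):
        g.add_node(i, ops=blk)
        # Everything except an unconditional terminator can fall through;
        # JUMPI falls through on its false branch.
        if i > 0 and blocks[i - 1][-1] not in TERMINATORS - {"JUMPI"}:
            g.add_edge(i - 1, i)
    return g
```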
## 3 Related work This section will review existing works on smart contract vulnerability detection, including conventional methods, single learning models and multimodal learning approaches. ### Static and dynamic method There have been many efforts toward vulnerability detection in smart contracts through both static and dynamic analysis. These techniques are essential for scrutinizing both the source code and the execution process of smart contracts to uncover syntax and logic errors, including assessments of input variable validity and string length constraints. Dynamic analysis evaluates the control flow during smart contract execution, aiming to unearth potential security flaws. In contrast, static analysis employs approaches such as symbolic execution and taint analysis. Taint analysis, specifically, identifies instances of injection vulnerabilities within the source code. Recent research studies have prioritized control flow analysis as the primary approach for smart contract vulnerability detection. Notably, Kushwaha et al. [21] have compiled an array of tools that harness both static analysis techniques, such as those involving source code and bytecode, and dynamic analysis techniques via control flow scrutiny during contract execution. A prominent example of static analysis is Oyente [22], a tool dedicated to smart contract examination. Oyente employs control flow analysis and static checks to detect vulnerabilities like Reentrancy attacks, faulty token issuance, integer overflows, and authentication errors. Similarly, Slither [23], a static analysis framework, analyzes the contract's control flow to pinpoint security vulnerabilities, encompassing Reentrancy attacks, Token Issuance Bugs, Integer Overflows, and Authentication Errors. It also adeptly identifies concerns like Transaction Order Dependence (TOD) and Time Dependence. Beyond static and dynamic analysis, another approach involves fuzz testing. In this technique, input strings are generated randomly or algorithmically to feed into smart contracts, and their outcomes are verified for anomalies. Both Contract Fuzzer [24] and xFuzz [25] pioneer the use of fuzzing for smart contract vulnerability detection. Contract Fuzzer employs concolic testing, a hybrid of dynamic and static analysis, to generate test cases. Meanwhile, xFuzz leverages a genetic algorithm to devise random test cases, subsequently applying them to smart contracts for vulnerability assessment. Moreover, symbolic execution stands as an additional method for in-depth analysis. By executing control flow paths, symbolic execution allows the generation of generalized input values, addressing challenges associated with randomness in fuzzing approaches. This approach holds potential for overcoming limitations and intricacies tied to the creation of arbitrary input values. However, the aforementioned methods often have low accuracy and are not flexible across vulnerability types, as they rely on expert knowledge and fixed patterns, and are time-consuming and costly to implement.
They also have limitations such as only detecting pre-defined fixed vulnerabilities and lacking the ability to detect new vulnerabilities. ### Machine Learning method ML methods often use features extracted from smart contracts and employ supervised learning models to detect vulnerabilities. Recent research has indicated that research groups primarily rely on supervised learning. The common approaches usually utilize feature extraction methods to obtain the CFG and Abstract Syntax Tree (AST) through dynamic and static analysis tools applied to source code or bytecode. These studies [26; 27] used a sequential Graph Neural Network model to process opcodes and employed an LSTM to handle the source code. Besides, a team led by Nguyen Hoang developed Mando Guru [28], a GNN-based model to detect vulnerabilities in smart contracts. Their team applied additional methods such as Heterogeneous Graph Neural Networks, Coarse-Grained Detection, and Fine-Grained Detection. They leveraged the control flow graph (CFG) and call graph (CG) of the smart contract to detect 7 vulnerability types. Their approach is capable of detecting multiple vulnerabilities in a single smart contract, with the results represented as nodes and paths in the graph. Additionally, Zhang Lejun et al. [29] utilized ensemble learning to develop a 7-layer convolutional model that combined various neural network models such as CNN, RNN, RCN, DNN, GRU, Bi-GRU, and Transformer, with each model assigned a different role in each layer. ### Multimodal Learning The HyMo Framework [30], introduced by Chen et al. in 2020, is a multimodal deep learning model for smart contract vulnerability detection. This framework utilizes two attributes of smart contracts: source code and opcodes. After preprocessing these attributes, the HyMo framework employs FastText for word embedding and utilizes two Bi-GRU models to extract features from these two attributes. Another framework, the HYDRA framework, proposed by Chen and colleagues [31], utilizes three attributes, including API calls, bytecode, and opcodes, as input for three branches of a multimodal model to classify malicious software. Each branch processes the attributes using basic neural networks, and then the outputs of these branches are connected through fully connected layers and finally passed through the Softmax function to obtain the final result. Most recently, Jie Wanqing and colleagues published a study [32] utilizing four attributes of smart contracts (SC): source code, Static Single Assignment (SSA), CFG, and bytecode. With these four attributes, they construct three layers: SC, BB, and EVMB. Among these, the SC layer employs source code for attribute extraction using Word2Vec and BERT, the BB layer uses the SSA and CFG generated from the source code, and finally, the EVMB layer employs assembly code and the CFG derived from bytecode. Additionally, the authors combine these layers through various methods and several distinct steps. These models yield promising results in terms of Accuracy, with HyMo [30] achieving approximately 0.79, HYDRA [31] surpassing it with around 0.98, and the multimodal AI of Jie et al. [32] achieving high-performance results ranging from 0.94 to 0.99 across various test cases. With these results, these studies have demonstrated the power of multimodal models compared to unimodal models in classifying objects with multiple attributes.
However, the limitations of these works lie more in implementation than in design choices. They utilized word2vec, which lacks support for out-of-vocabulary words. To address this constraint, they proposed substituting word2vec with the fastText NLP model. Subsequently, their vulnerability detection framework was modeled as a binary classification problem within a supervised learning paradigm. In that work, their primary focus was on determining whether a contract contains a vulnerability or not; a subsequent task could investigate specific vulnerability types through multi-class classification. From the evaluations presented in this section, we have identified the strengths and limitations of existing literature. It is evident that previous works have not fully optimized the utilization of smart contract data and lack the incorporation of a diverse range of deep learning models. While unimodal approaches have not adequately explored data diversity, multimodal ones have traded construction time for classification focus, solely determining whether a smart contract is vulnerable or not. In light of these insights, we propose a novel framework that leverages the advantages of three distinct deep learning models, namely BERT, GNN, and BiLSTM. Each model forms a separate branch, contributing to the creation of a unified architecture. Our approach adopts a multi-class classification task, aiming to collectively improve the effectiveness and diversity of vulnerability detection. By synergistically integrating these models, we strive to overcome the limitations of the existing literature and provide a more comprehensive solution. ## 4 Methodology This section provides the outline of our proposed approach for vulnerability detection in smart contracts. Additionally, by employing multimodal learning, we generate a comprehensive view of the smart contract, which allows us to represent the smart contract with more relevant features and boost the effectiveness of the vulnerability detection model. ### An overview of architecture Our proposed approach, VulnSense, is constructed upon a multimodal deep learning framework consisting of three branches, including BERT, BiLSTM, and GNN, as illustrated in **Figure 1**. More specifically, the first branch is the BERT model, which is built upon the Transformer architecture and employed to process the source code of the smart contract. Secondly, to handle and analyze the opcode context, the BiLSTM model is applied in the second branch. Lastly, the GNN model is utilized for representing the CFG of bytecode in the smart contract. This integrative methodology leverages the strengths of each component to comprehensively assess potential vulnerabilities within smart contracts. The fusion of linguistic, sequential, and structural information allows for a more thorough and insightful evaluation, thereby fortifying the security assessment process. This approach presents a robust foundation for identifying vulnerabilities in smart contracts and holds promise for significantly reducing risks in blockchain ecosystems.

Figure 1: The overview of VulnSense framework.

### Bidirectional Encoder Representations from Transformers (BERT) In this study, to capture high-level semantic features from the source code and enable a more in-depth understanding of its functionality, we designed a BERT network that forms one branch of our multimodal model. As shown in **Figure 2**, the BERT component consists of three blocks: Preprocessor, Encoder and Neural network. More specifically, the Preprocessor processes the inputs, which are the source code of smart contracts. The inputs are transformed into vectors through the input embedding layer, and then pass through the _positional_encoding_ layer to add positional information to the words. Then, the **preprocessed** values are fed into the encoding block to compute relationships between words. The entire encoding block consists of 12 identical encoding layers stacked on top of each other. Each encoding layer comprises two main parts: a self-attention layer and a feed-forward neural network. The output **encoded** forms a vector space of length 768. Subsequently, the **encoded** values are passed through a simple neural network. The resulting **bert_output** values constitute the output of this branch in the multimodal model. Thus, the whole BERT component could be demonstrated as follows: \[\textbf{preprocessed}=positional\_encoding(\textbf{e}(input)) \tag{1}\] \[\textbf{encoded}=Encoder(\textbf{preprocessed}) \tag{2}\] \[\textbf{bert\_output}=NN(\textbf{encoded}) \tag{3}\] where (1), (2) and (3) represent the Preprocessor block, Encoder block and Neural Network block, respectively.

Figure 2: The architecture of BERT component in VulnSense.
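A minimal sketch of this branch is given below, assuming the HuggingFace `bert-base-uncased` checkpoint as the 12-layer encoder and a 64-unit dense head; the checkpoint name, maximum sequence length, and head width are our assumptions, since the text only fixes a 768-dimensional encoder output followed by a simple neural network.

```python
import tensorflow as tf
from transformers import BertTokenizerFast, TFBertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = TFBertModel.from_pretrained("bert-base-uncased")
head = tf.keras.layers.Dense(64, activation="relu")  # the small "Neural network" block

def bert_branch(cleaned_sources, max_len=256):
    """Map a batch of comment-stripped Solidity sources to feature vectors."""
    toks = tokenizer(cleaned_sources, max_length=max_len,
                     padding="max_length", truncation=True, return_tensors="tf")
    out = encoder(input_ids=toks["input_ids"],
                  attention_mask=toks["attention_mask"])  # 12 encoder layers
    return head(out.pooler_output)                        # (batch, 768) -> (batch, 64)
```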
### Bidirectional long-short term memory (BiLSTM) Toward the opcode, we applied the BiLSTM, another branch of our multimodal model, to analyze the contextual relations of opcodes and contribute crucial insights into the code's execution flow. By processing opcodes sequentially, we aimed to capture potential vulnerabilities that might be overlooked by solely considering structural information. In detail, as shown in **Figure 3**, we first tokenize the opcodes and convert them into integer values. The tokenized opcode features are then embedded into a dense vector space using an _embedding_ layer, which has 200 dimensions. \[\begin{split}\textbf{token}=Tokenize(\textbf{opcode})\\ \textbf{vector\_space}=Embedding(\textbf{token})\end{split} \tag{4}\] Then, the opcode vector is fed into two BiLSTM layers with 128 and 64 units, respectively. Moreover, to reduce overfitting, a Dropout layer is applied after the first BiLSTM layer, as in (5). \[\begin{split}\textbf{bi\_lstm1}=Bi\_LSTM(128)(\textbf{vector\_space})\\ \textbf{r}=Dropout(\textbf{bi\_lstm1})\\ \textbf{bi\_lstm2}=Bi\_LSTM(64)(\textbf{r})\end{split} \tag{5}\] Finally, the output of the last BiLSTM layer is fed into a dense layer with 64 units and the ReLU activation function, as in (6). \[\textbf{lstm\_output}=Dense(64,relu)(\textbf{bi\_lstm2}) \tag{6}\]

Figure 3: The architecture of BiLSTM component in VulnSense.
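This branch maps directly onto a few Keras layers. The sketch below follows Equations (4)-(6), with the 0.03 dropout rate taken from the experimental settings in Section 5; the vocabulary size and sequence length are left as assumptions.

```python
from tensorflow.keras import layers, models

def build_bilstm_branch(vocab_size, max_len):
    """Opcode branch: Embedding -> BiLSTM(128) -> Dropout -> BiLSTM(64) -> Dense(64)."""
    inp = layers.Input(shape=(max_len,), name="opcode_tokens")
    x = layers.Embedding(vocab_size, 200)(inp)                       # eq. (4)
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    x = layers.Dropout(0.03)(x)                                      # eq. (5)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    out = layers.Dense(64, activation="relu")(x)                     # eq. (6)
    return models.Model(inp, out, name="bilstm_branch")
```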
### Graph Neural Network (GNN)

To offer insights into the structural characteristics of smart contracts based on bytecode, we present a CFG-based GNN model, which is the third branch of our multimodal model, as shown in **Figure 4**. In this branch, we first extract the CFG from the bytecode, and then use OpenAI's embedding API to encode the nodes and edges of the CFG into vectors, as in (7).

\[\textbf{encode}=Encoder(edges,nodes) \tag{7}\]

The encoded vectors have a length of 1536. These vectors are then passed through 3 GCN layers with ReLU activation functions (8), with the first layer having an input length of 1536 and an output length of a custom hidden_channels (_hc_) variable.

\[\begin{split}\textbf{GCN1}=GCNConv(1536,relu)(\textbf{encode})\\ \textbf{GCN2}=GCNConv(hc,relu)(\textbf{GCN1})\\ \textbf{GCN3}=GCNConv(hc)(\textbf{GCN2})\end{split} \tag{8}\]

Finally, to feed into the multimodal deep learning model, the output of the GCN layers is fed into 2 dense layers with 3 and 64 units respectively, as described in (9).

\[\begin{split}\textbf{d1\_gnn}=Dense(3,relu)(\textbf{GCN3})\\ \textbf{gnn\_output}=Dense(64,relu)(\textbf{d1\_gnn})\end{split} \tag{9}\]

### Multimodal

Each of these branches contributes a unique dimension of analysis, allowing us to capture intricate patterns and nuances present in the smart contract data. Therefore, we adopt an innovative approach by synergistically concatenating the outputs of the three models, namely BERT **bert_output** (3), BiLSTM **lstm_output** (6), and GNN **gnn_output** (9), to enhance the accuracy and depth of our predictive model, as shown in (10):

\[\begin{split}\textbf{c}=\text{Concatenate}([\textbf{bert\_output},\\ \textbf{lstm\_output},\textbf{gnn\_output}])\end{split} \tag{10}\]

Then the output \(\mathbf{c}\) is transformed into a 3D tensor with dimensions (batch_size, 194, 1) using the Reshape layer (11):

\[\textbf{c\_reshaped}=\text{Reshape}((194,1))(\textbf{c}) \tag{11}\]

Next, the transformed tensor **c_reshaped** is passed through a 1D convolutional layer (12) with 64 filters and a kernel size of 3, utilizing the rectified linear activation function:

\[\textbf{conv\_out}=\text{Conv1D}(64,3,\text{relu})(\textbf{c\_reshaped}) \tag{12}\]

The output from the convolutional layer is then flattened (13) to generate a 1D vector:

\[\textbf{f\_out}=\text{flatten}()(\textbf{conv\_out}) \tag{13}\]

The flattened tensor **f_out** is subsequently passed through a fully connected layer with 32 units and a rectified linear activation function, as in (14):

\[\textbf{d\_out}=\text{Dense}(32,\text{relu})(\textbf{f\_out}) \tag{14}\]

Finally, the output is passed through the softmax activation function (15) to generate a probability distribution across the three output classes:

\[\widetilde{\textbf{y}}=\text{Dense}(3,\text{softmax})(\textbf{d\_out}) \tag{15}\]

This architecture forms the final stages of our model, culminating in the generation of predicted probabilities for the three output classes.

Figure 3: The architecture of BiLSTM component in VulnSense.

Figure 4: The architecture of GNN component in VulnSense.
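The fusion head of equations (10)-(15) can be sketched in Keras as below. The three branch outputs are stubbed as inputs; their widths are assumptions on our part, chosen so the concatenation is 194-dimensional as in equation (11), and the loss function is likewise our assumption:

```python
# Illustrative sketch of the VulnSense fusion head (eqs. (10)-(15)).
import tensorflow as tf
from tensorflow.keras import layers

bert_output = layers.Input(shape=(66,), name="bert_output")  # assumed width
lstm_output = layers.Input(shape=(64,), name="lstm_output")
gnn_output = layers.Input(shape=(64,), name="gnn_output")

c = layers.Concatenate()([bert_output, lstm_output, gnn_output])  # eq. (10)
c = layers.Reshape((194, 1))(c)                                   # eq. (11)
c = layers.Conv1D(64, 3, activation="relu")(c)                    # eq. (12)
c = layers.Flatten()(c)                                           # eq. (13)
c = layers.Dense(32, activation="relu")(c)                        # eq. (14)
y = layers.Dense(3, activation="softmax")(c)                      # eq. (15)

head = tf.keras.Model([bert_output, lstm_output, gnn_output], y)
head.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```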
## 5 Experiments and Analysis

### Experimental Settings and Implementation

In this work, we utilize a virtual machine (VM) with an Intel Xeon(R) CPU E5-2660 v4 @ 2.00GHz x 24, 128 GB of RAM, and Ubuntu 20.04 for our implementation. Furthermore, all experiments are evaluated under the same experimental conditions. The proposed model is implemented using the Python programming language and utilizes well-established libraries such as TensorFlow and Keras. For all the experiments, we have utilized a fine-tuning strategy to improve the performance of these models during the training stage. We set the batch size to 32 and use the Adam optimizer with a learning rate of 0.001. Additionally, to avoid overfitting, the dropout ratio has been set to 0.03.

### Performance Metrics

We evaluate our proposed method via the 4 following metrics: Accuracy, Precision, Recall, and F1-Score. Since our work conducts experiments in multi-class classification tasks, the value of each metric is computed based on a 2D confusion matrix which includes True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN). _Accuracy_ is the ratio of correct predictions (\(TP\), \(TN\)) over all predictions. _Precision_ measures the proportion of \(TP\) over all samples classified as positive. _Recall_ is defined as the proportion of \(TP\) over all positive instances in a testing dataset. _F1-Score_ is the harmonic mean of \(Precision\) and \(Recall\).

### Dataset and Preprocessing

Our dataset combines three sources: Smartbugs Curated [33; 34], SolidiFI-Benchmark [35], and Smartbugs Wild [33; 34]. For the Smartbugs Wild dataset, we collect smart contracts containing a single vulnerability (either an Arithmetic vulnerability or a Reentrancy vulnerability). The identification of vulnerable smart contracts is confirmed by at least two currently available vulnerability detection tools. In total, our dataset includes 547 Non-Vulnerability, 631 Arithmetic Vulnerability, and 591 Reentrancy Vulnerability smart contracts, as shown in **Table 1**.

\begin{table} \begin{tabular}{|c|c|} \hline Vulnerability Type & Contracts \\ \hline Arithmetic & 631 \\ Re-entrancy & 591 \\ Non-Vulnerability & 547 \\ \hline \end{tabular} \end{table} Table 1: Distribution of Labels in the Dataset

#### 5.3.1 Source Code Smart Contract

When programming, developers often have the habit of writing comments to explain their source code, aiding both themselves and other programmers in understanding the code snippets. BERT, a natural language processing model, takes the source code of smart contracts as its input. From the source code of smart contracts, BERT calculates the relevance of words within the code. Comments present within the code can introduce noise to the BERT model, causing it to compute unnecessary information about the smart contract's source code. Hence, preprocessing of the source code before feeding it into the BERT model is necessary. Moreover, removing comments from the source code also helps reduce the length of the input when fed into the model. To further reduce the source code length, we also eliminate extra blank lines and unnecessary whitespace. **Figure 5** provides an example of an unprocessed smart contract from our dataset. This contract contains comments following the '//' syntax, blank lines, and excessive white spaces that do not adhere to programming standards. **Figure 6** represents the smart contract after undergoing processing.

Figure 5: An example of Smart Contract Prior to Processing

#### 5.3.2 Opcode Smart Contracts

We proceed with bytecode extraction from the source code of the smart contract, followed by opcode extraction from the bytecode. The opcodes within the contract are categorized into 10 functional groups, totaling 135 opcodes, according to the Ethereum Yellow Paper [36]. However, we have condensed them based on **Table 2**. During the preprocessing phase, unnecessary hexadecimal characters were removed from the opcodes. The purpose of this preprocessing is to utilize the opcodes for vulnerability detection in smart contracts using the BiLSTM model. In addition to opcode preprocessing, we also performed other preprocessing steps to prepare the data for the BiLSTM model. Firstly, we tokenized the opcodes into sequences of integers. Subsequently, we applied padding to create opcode sequences of the same length. The maximum length of opcode sequences was set to 200, which is the maximum length that the BiLSTM model can handle.
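A minimal sketch of this tokenization and padding step, using the Keras preprocessing utilities from the stack our implementation already relies on (the sample opcode strings and variable names are our own illustrations):

```python
# Illustrative sketch of opcode tokenization and padding for the BiLSTM branch.
# `opcode_sequences` is an assumed list of space-separated opcode strings,
# after the simplification of Table 2.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

opcode_sequences = ["PUSH PUSH MSTORE CALLVALUE DUP ISZERO",
                    "PUSH CALLDATALOAD SWAP JUMP"]

tokenizer = Tokenizer()                     # maps each opcode to an integer id
tokenizer.fit_on_texts(opcode_sequences)
encoded = tokenizer.texts_to_sequences(opcode_sequences)

# pad/truncate every sequence to the fixed length of 200 used by the model
padded = pad_sequences(encoded, maxlen=200, padding="post", truncating="post")
print(padded.shape)  # (num_contracts, 200)
```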
After the padding step, we employ a Word Embedding layer to transform the encoded opcode sequences into fixed-size vectors, serving as inputs for the BiLSTM model. This enables the BiLSTM model to better learn the representations of opcode sequences. In general, the preprocessing steps we performed are crucial in preparing the data for the BiLSTM model and enhancing its performance in detecting vulnerabilities in Smart Contracts.

\begin{table} \begin{tabular}{|c|c|} \hline Substituted Opcodes & Original Opcodes \\ \hline DUP & DUP1-DUP16 \\ SWAP & SWAP1-SWAP16 \\ PUSH & PUSH1-PUSH32 \\ LOG & LOG1-LOG4 \\ \hline \end{tabular} \end{table} Table 2: The simplified opcode methods

#### 5.3.3 Control Flow Graph

First, we extract the bytecode from the smart contract, then extract the CFG from the bytecode into .cfg.gv files, as shown in **Figure 7**. From this .cfg.gv file, the CFG of a smart contract extracted from bytecode can be represented as shown in **Figure 8**. The nodes in the CFG typically represent code blocks or states of the contract, while the edges represent control flow connections between nodes. To train the GNN model, we encode the nodes and edges of the CFG into numerical vectors. One approach is to use embedding techniques to represent these entities as vectors. In this case, we utilize the OpenAI embedding API to encode nodes and edges into vectors of length 1536. This is a customized approach based on OpenAI's pre-trained deep learning models. Once the nodes and edges of the CFG are encoded into vectors, we employ them as inputs for the GNN model.

Figure 7: Graph Extracted from Bytecode

Figure 8: Visualize Graph by .cfg.gv file

### Experimental Scenarios

To prove the efficiency of our proposed model and the compared models, we conducted training with a total of 7 models, categorized into two types: unimodal deep learning models and multimodal deep learning models. On the one hand, the unimodal deep learning models consist of the models within each branch of VulnSense. On the other hand, the multimodal deep learning models are the pairwise combinations of the three unimodal models, utilizing a 2-way interaction, plus VulnSense. Specifically:

* Multimodal BERT - BiLSTM (**M1**)
* Multimodal BERT - GNN (**M2**)
* Multimodal BiLSTM - GNN (**M3**)
* VulnSense (as mentioned in **Section 4**)

Furthermore, to illustrate the fast convergence and stability of the multimodal method, we train and validate the 7 models on 3 different training-epoch settings: 10, 20 and 30 epochs.

### Experimental Results

The experimentation process for the models was carried out on the dataset as detailed in **Section 5.3**.

#### 5.5.1 Models performance evaluation

Through the visualizations in **Table 3**, it can be seen that detecting vulnerabilities in smart contracts using multimodal deep learning models is more effective than using unimodal deep learning models in these experiments. Specifically, when testing the 3 multimodal models M1, M3 and VulnSense on the 3 training-epoch settings, the results indicate that the performance is always higher than 75.09% on the 4 metrics mentioned above. Meanwhile, the testing performance of M2 and the 3 unimodal models BERT, BiLSTM and GNN is lower than 75% on all 4 metrics.
Moreover, across all 3 training-epoch settings, the VulnSense model achieved the highest F1-Score, above 77%, and the highest Accuracy, above 77.96%. In addition, **Figure 9** provides a more detailed view of the performance of all 7 models at the last training epoch. It can be seen from **Figure 9** that, among the multimodal models, VulnSense performs the best, with accuracies on the Arithmetic, Reentrancy, and Clean labels of 84.44%, 64.08% and 84.48%, respectively, followed by the M3, M1 and M2 models. Even though the GNN model, which is a unimodal model, managed to attain an accuracy rate of 85.19% for the Arithmetic label and 82.76% for the Reentrancy label, its accuracy on the Clean label was merely 1.94%. Similarly, among the unimodal models, BiLSTM and BERT both give relatively low accuracy, below 80%, on all 3 labels. Furthermore, the results shown in **Figure 10** demonstrate the superior convergence speed and stability of the VulnSense model compared to the other 6 models. In detail, through testing after 10 training epochs, the VulnSense model attained the highest performance, greater than 77.96% on all 4 metrics. Although the VulnSense, M1 and M3 models give high performance after 30 training epochs, the VulnSense model only needs to be trained for 10 epochs to achieve better convergence than the M1 and M3 models, which require 30 epochs. Besides, throughout the 30 training epochs, the M3, M1, BiLSTM, and M2 models exhibited performance similar to the VulnSense model, yet they demonstrated some instability. While the VulnSense model maintains a consistent performance level within the range of 75-79%, the M3 model experienced a severe decline in both Accuracy and F1-Score, dropping by over 20% at the 15\({}^{th}\) epoch, indicating significant disturbance in its performance. These findings indicate that our proposed model, VulnSense, is more efficient at identifying vulnerabilities in smart contracts than the other models. Furthermore, by harnessing the advantages of multimodal over unimodal learning, VulnSense also exhibited consistent performance and rapid convergence.

#### 5.5.2 Comparisons of Time

**Figure 11** illustrates the training time for 30 epochs of each model. Concerning the training time of the unimodal models, the GNN model trains very quickly, in only 7.114 seconds, whereas the BERT model requires a significantly longer training time of 252.814 seconds. For the BiLSTM model, the training time is roughly 10 times longer than that of the GNN model. Furthermore, comparing the multimodal models, the shortest training time belongs to the M3 model (the multimodal combination of BiLSTM and GNN) at 81.567 seconds. Besides, M1, M2, and VulnSense involve the BERT model, resulting in relatively longer training times of over 270 seconds for 30 epochs. It is evident that the unimodal models significantly impact the training time of the multimodal models they contribute to. Although VulnSense takes more time per epoch than the 6 other models, it only requires 10 epochs to converge, which reduces its effective training time by 66%. In addition, **Figure 12** illustrates the prediction time on the same testing set for each model.
It is evident that the multimodal models M1, M2, and VulnSense, which incorporate BERT, as well as the unimodal BERT model, exhibit extended testing durations, surpassing 5.7 seconds for a set of 354 samples.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline **Score** & **Epoch** & **BERT** & **BiLSTM** & **GNN** & **M1** & **M2** & **M3** & **VulnSense** \\ \hline \multirow{3}{*}{**Accuracy**} & **E10** & 0.5875 & 0.7316 & 0.5960 & 0.7429 & 0.6468 & 0.7542 & **0.7796** \\ \cline{2-10} & **E20** & 0.5903 & 0.6949 & 0.5988 & 0.7796 & 0.6553 & 0.7768 & **0.7796** \\ \cline{2-10} & **E30** & 0.6073 & 0.7146 & 0.6016 & 0.7796 & 0.6525 & 0.7683 & **0.7796** \\ \hline \multirow{3}{*}{**Precision**} & **E10** & 0.5818 & 0.7540 & 0.4290 & 0.7749 & 0.6616 & 0.7790 & **0.7940** \\ \cline{2-10} & **E20** & 0.6000 & 0.7164 & 0.7209 & 0.7834 & 0.6800 & 0.7800 & **0.7922** \\ \cline{2-10} & **E30** & 0.6000 & 0.7329 & 0.5784 & 0.7800 & 0.7000 & 0.7700 & **0.7800** \\ \hline \multirow{3}{*}{**Recall**} & **E10** & 0.5876 & 0.7316 & 0.5960 & 0.7429 & 0.6469 & 0.7542 & **0.7797** \\ \cline{2-10} & **E20** & 0.5900 & 0.6949 & 0.5989 & 0.7797 & 0.6600 & **0.7800** & 0.7797 \\ \cline{2-10} & **E30** & 0.6100 & 0.7147 & 0.6017 & 0.7700 & 0.6500 & 0.7700 & **0.7700** \\ \hline \multirow{3}{*}{**F1**} & **E10** & 0.5785 & 0.7360 & 0.4969 & 0.7509 & 0.6520 & 0.7602 & **0.7830** \\ \cline{2-10} & **E20** & 0.5700 & 0.6988 & 0.5032 & 0.7809 & 0.6600 & 0.7792 & **0.7800** \\ \cline{2-10} & **E30** & 0.6000 & 0.7185 & 0.5107 & 0.7700 & 0.6500 & 0.7700 & **0.7750** \\ \hline \end{tabular} \end{table} Table 3: The performance of 7 models

Meanwhile, the testing durations for the GNN, BiLSTM, and M3 models are remarkably brief, approximately 0.2104, 1.4702, and 2.0056 seconds, respectively. It is noticeable that the presence of the unimodal models has a direct influence on the prediction time of the multimodal models in which they are involved. In the context of the 2 most effective multimodal models, M3 and VulnSense, the M3 model gave the shortest testing time, about 2.0056 seconds. On the contrary, the VulnSense model exhibits the lengthiest prediction time, about 7.4964 seconds, which is roughly four times that of the M3 model. While the M3 model outperforms the VulnSense model in terms of training and testing duration, the VulnSense model surpasses the M3 model in accuracy. Nevertheless, in the context of detecting vulnerabilities in smart contracts, increasing accuracy is more important than reducing execution time. Consequently, the VulnSense model decidedly outperforms the M3 model.

## 6 Conclusion

In conclusion, our study introduces a pioneering approach, VulnSense, which harnesses the potency of multimodal deep learning, incorporating graph neural networks and natural language processing, to effectively detect vulnerabilities within Ethereum smart contracts. By synergistically leveraging the strengths of diverse features and cutting-edge techniques, our framework surpasses the limitations of traditional single-modal methods. The results of comprehensive experiments underscore the superiority of our approach in terms of accuracy and efficiency, outperforming conventional deep learning techniques. This affirms the potential and applicability of our approach in bolstering Ethereum smart contract security. The significance of this research extends beyond its immediate applications.
It contributes to the broader discourse on enhancing the integrity of blockchain-based systems. As the adoption of smart contracts continues to grow, the vulnerabilities associated with them pose considerable risks. Our proposed methodology not only addresses these vulnerabilities but also paves the way for future research in the realm of multimodal deep learning and its diversified applications. In closing, VulnSense not only marks a significant step towards securing Ethereum smart contracts but also serves as a stepping stone for the development of advanced techniques in blockchain security. As the landscape of cryptocurrencies and blockchain evolves, our research remains poised to contribute to the ongoing quest for enhanced security and reliability in decentralized systems. ## Acknowledgment This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.
2305.19874
Quantum Trajectory Approach to Error Mitigation
Quantum Error Mitigation (EM) is a collection of strategies to reduce errors on noisy intermediate scale quantum (NISQ) devices on which proper quantum error correction is not feasible. One such strategy, aimed at mitigating the noise effects of a known environment, is to realise the inverse map of the noise using a set of completely positive maps weighted by a quasi-probability distribution, i.e. a probability distribution with positive and negative values. This quasi-probability distribution is realised using classical post-processing after final measurements of the desired observables have been made. Here we make a connection between quasi-probability EM and recent results from quantum trajectory theory for open quantum systems. We show that the inverse of noise maps can be realised by performing classical post-processing on the quantum trajectories generated by an additional reservoir with a quasi-probability measure called the influence martingale. We demonstrate our result on a model relevant for current NISQ devices. Finally, we show the quantum trajectories required for error correction can themselves be simulated by coupling an ancillary qubit to the system. In this way, we can avoid the introduction of the engineered reservoir.
Brecht I. C. Donvil, Rochus Lechler, Joachim Ankerhold, Paolo Muratore-Ginanneschi
2023-05-31T14:10:35Z
http://arxiv.org/abs/2305.19874v1
# Quantum Trajectory Approach to Error Mitigation

###### Abstract

Quantum Error Mitigation (EM) is a collection of strategies to reduce errors on noisy intermediate scale quantum (NISQ) devices on which proper quantum error correction is not feasible. One such strategy, aimed at mitigating the noise effects of a known environment, is to realise the inverse map of the noise using a set of completely positive maps weighted by a quasi-probability distribution, i.e. a probability distribution with positive and negative values. This quasi-probability distribution is realised using classical post-processing after final measurements of the desired observables have been made. Here we make a connection between quasi-probability EM and recent results from quantum trajectory theory for open quantum systems. We show that the inverse of noise maps can be realised by performing classical post-processing on the quantum trajectories generated by an additional reservoir with a quasi-probability measure called the influence martingale. We demonstrate our result on a model relevant for current NISQ devices. Finally, we show the quantum trajectories required for error correction can themselves be simulated by coupling an ancillary qubit to the system.

pacs: 03.65.Yz, 42.50.Lc

## I Introduction

Current quantum computation platforms operate in the noisy intermediate scale quantum (NISQ) regime. The noisy character of these devices significantly inhibits their ability to successfully perform quantum computations. Managing and reducing this noise is therefore one of the main challenges in current era quantum platforms. The main strategy to counteract noise is quantum error correction, see e.g. [1], which allows one to detect and correct errors. It relies on encoding the information present in a quantum system into a larger Hilbert space. In practice, this encoding procedure requires both the ability to perform quantum operations below a certain error threshold and control of sizable quantum systems [2]; for example, the authors of [3] showed that the surface error-correcting code requires a thousand qubits per logical qubit to perform Shor's algorithm. Despite these significant experimental challenges, a recent experimental implementation of error correction was done in [4]. The field of quantum Error Mitigation (EM) provides a set of alternative strategies to reduce the effects of noise that are effective on the currently available NISQ platforms [2; 5]. These methods aim to use a limited amount of quantum operations and ancillary qubits, supplemented by classical post-processing on the final measurement outcomes. Extrapolation methods learn the dependence of the measurement outcomes on the noise strength by increasing it and then extrapolating to 0 noise strength [6; 7]. This method was experimentally implemented in [8; 9; 10]. In readout error mitigation, the measurement outcomes are corrected by applying a linear transformation that compensates for known measurement errors [11; 12]. Finally, the idea of quasi-probability methods relies on the assumption that the noise on every unitary operation is given by Completely Positive Trace Preserving (CPTP) maps [2]. Since the inverse of a CPTP map is CPTP if and only if the map is unitary, it cannot directly be implemented. However, if one is able to perform a complete set of CPTP operations, the inverse can be implemented with elements from this set weighted by a quasi-probability distribution (i.e.
a probability distribution which can take negative values) [6; 13; 14], see also [15] and [16; 17; 18] for results on the cost of this error mitigation. Quasi-probability methods have successfully been implemented on a superconducting quantum processor [19] and a Rydberg quantum simulator [20]. Recently, the authors of [21] developed an efficient algorithm to implement the inverse of any 2-level system CPTP map and implemented it on an IBM quantum processor. Implementing the inverse of CPTP maps is also of interest outside of error mitigation, see e.g. [22]. One of the main issues of the current quasi-probability-based error mitigation schemes is that one needs to be able to efficiently implement the CPTP maps that make up the desired inverse. In this paper, we propose a novel method which circumvents the challenge of implementing any CPTP maps. We take advantage of the well-known fact that any open quantum system dynamics always solves a time-local evolution equation, see e.g. [23], and put the system in contact with a specifically designed reservoir leading to additional terms in the master equation. The terms added by the designed reservoir are of the Lindblad-Gorini-Kossakowski-Sudarshan form [24; 25], or Lindblad form in short. Therefore, by continuously monitoring the additional reservoir, quantum jump trajectories can be reconstructed for the system state [26; 27]. By applying a recent development in the field of quantum trajectory theory introduced by some of us [28; 29], we show that the quantum trajectories generated by the designed reservoir can be weighed by a pseudo-probability measure, changing, on average, the additional terms in the system master equation. For a suitable choice of reservoir and pseudo-measure, the influence of the original noise source can be completely cancelled. Essentially, what we are presenting here is a simulation technique for generators of time-local master equations with negative decoherence rates, which can be used for error mitigation by simulating a generator which cancels out the generator of an existing bath. Recently, the authors of [30] implemented the opposite scheme: they simulated time-local master equations (with positive rate functions) using quantum error mitigation. Alternatively, the quantum trajectories generated by the designed reservoir can be simulated using the formalism developed in [31] to simulate the evolution of time-local master equations of the Lindblad form. Again, relying on [28; 29], we extend the method of [31] to general time-local master equations, and in this way it can be used to replace the engineered reservoir to perform the error mitigation. The simulation scheme of [31] was experimentally implemented in [32].

## II Error mitigation with quasi probabilities

The error mitigation methods presented in [6; 13; 33] assume that the noise on quantum operations can be modelled by a completely positive map \(E\) which we know. Counteracting \(E\) essentially relies on having access to a complete set of completely-positive trace-preserving maps \(S=\{B_{k}\}_{k}\) such that

\[E^{-1}=\sum_{k}q_{k}\,B_{k}.\]

The normalisation \(\sum_{k}q_{k}=1\) of the c-numbers \(q_{k}\in\mathbb{R}\) ensures trace preservation. As the inverse of a completely positive map is generically only completely bounded [34], the \(q_{k}\)'s can take negative values, and consequently the instrument [35] associated to the map on operators \(E^{-1}\) specifies a pseudo-probability distribution.
Let us define \(C=\sum_{k}|q_{k}|\), \(p_{k}=|q_{k}|/C\) and \(s_{k}=\operatorname{sign}(q_{k})\), such that we can rewrite the above equation as

\[E^{-1}=C\sum_{k}s_{k}\,p_{k}\,B_{k}\]

in terms of the probability distribution \(\{p_{k}\}_{k}\); the constant \(C\) is called the _cost_ of the quantum error mitigation. The expectation value of an observable \(O\) for the action of \(E^{-1}\) on a state \(\rho\) is then computed by the formula

\[\operatorname{tr}(O\,E^{-1}(\rho))=C\sum_{k}s_{k}\,p_{k}\operatorname{tr}(O\,B_{k}(\rho)) \tag{1}\]

The above equation explains how the expectation value can be measured in practice. First, an operator \(B_{k}\) is drawn from the probability distribution \(\{p_{k}\}_{k}\). Then \(B_{k}\) is applied to the quantum state and the observable \(O\) is measured. The final result is reweighed by \(C\), multiplied by the sign \(s_{k}\) and added to the estimate of \(\operatorname{tr}(O\,E^{-1}(\rho))\).
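The Monte Carlo estimator implied by eq. (1) can be sketched in a few lines of Python; the decomposition, maps and observable below are toy single-qubit examples of our own choosing, not objects specified in the text:

```python
# Illustrative Monte Carlo estimator for eq. (1): sample B_k with probability
# p_k = |q_k|/C, measure tr(O B_k(rho)), and reweigh by C * sign(q_k).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# toy decomposition E^{-1} = q_0 * Id + q_1 * (X . X), with q_0 + q_1 = 1
q = np.array([1.2, -0.2])
maps = [lambda r: r, lambda r: X @ r @ X]
C = np.abs(q).sum(); p = np.abs(q) / C; s = np.sign(q)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
O = np.array([[1, 0], [0, -1]], dtype=complex)    # sigma_z

samples = []
for _ in range(20000):
    k = rng.choice(len(q), p=p)
    samples.append(C * s[k] * np.trace(O @ maps[k](rho)).real)
print(np.mean(samples))  # estimates tr(O E^{-1}(rho)), here exactly 1.4
```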
## III Error mitigation with quantum trajectories

In this section we develop our error mitigation technique based on coupling the noisy system to an additional reservoir and reweighing the resulting trajectories with the influence martingale [28], a pseudo-probability measure. Before going to the error mitigation, we dedicate a subsection to introducing quantum trajectories and the influence martingale.

### Simulating a non-Markovian Reservoir with Quantum Trajectories

In this subsection we provide a short introduction to quantum trajectories and the influence martingale, see Appendix A for more technical details. We consider a system in contact with a bath, such that the effective evolution of the system state \(\rho(t)\) is described by the master equation

\[\frac{d}{dt}\rho(t)=-i[H(t),\rho(t)]+\mathcal{L}_{t}(\rho(t)) \tag{2}\]

where \(\mathcal{L}_{t}\) is a dissipator fully characterised by the set \(\{L_{k},\gamma_{k}(t)\}\) of Lindblad operators \(L_{k}\) and jump rates \(\gamma_{k}(t)\geq 0\):

\[\mathcal{L}_{t}(\rho)=\sum_{k}\gamma_{k}(t)\left(L_{k}\rho L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho\}\right). \tag{3}\]

Note that the above master equation is of the Lindblad form if and only if the rates \(\gamma_{k}(t)\) are positive. When the bath is continuously monitored, quantum jump trajectories for the system state vector \(\psi(t)\) can be reconstructed [26; 27]. This procedure is also called unravelling in trajectories. These trajectories are fully characterised by a set of jump times and jump operators \(\{t_{j},L_{k_{j}}\}\), which means that at the times \(t_{j}\) the system state vector \(\psi(t)\) made a jump described by the operator \(L_{k_{j}}\):

\[\psi(t)\xrightarrow[\text{at }t_{j}]{\text{jump}}\frac{L_{k_{j}}\psi(t)}{\lVert L_{k_{j}}\psi(t)\rVert}. \tag{4}\]

The solution \(\rho(t)\) to the master equation (2) is then reconstructed by taking the average \(\mathbb{E}\) of the states \(\psi(t)\psi^{\dagger}(t)\) [36; 37; 38]:

\[\rho(t)=\mathbb{E}(\psi(t)\psi^{\dagger}(t)) \tag{5}\]

Recently [28; 29] some of us introduced a martingale method, called the influence martingale, which provides a conceptually simple and numerically efficient avenue to unravel completely bounded maps on operators. The influence martingale pairs the evolution of a master equation with dissipator \(\mathcal{L}_{t}\) (3) with positive rate functions \(\gamma_{k}(t)\) to a dissipator \(-\tilde{\mathcal{L}}_{t}\) characterised by \(\{L_{k},\Gamma_{k}(t)\}_{k}\),

\[-\tilde{\mathcal{L}}_{t}(\rho)=-\sum_{k}\Gamma_{k}(t)\left(L_{k}\rho L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho\}\right), \tag{6}\]

with non-sign-definite decoherence rates \(-\Gamma_{k}(t)\), if there exists a positive function \(m(t)\) such that

\[\gamma_{k}(t)+\Gamma_{k}(t)=m(t),\quad\forall k. \tag{7}\]

This pairing between dissipators is achieved with the influence martingale \(\mu(t)\), which is designed to follow the evolution of its corresponding quantum trajectory. For a quantum trajectory with jumps \(\{t_{j},L_{k_{j}}\}\), the influence martingale at time \(t\) equals

\[\mu(t)=\exp\left(\int_{0}^{t}ds\:m(s)\right)\prod_{j,\,t_{j}\leq t}\left(\frac{-\Gamma_{k_{j}}(t_{j})}{\gamma_{k_{j}}(t_{j})}\right). \tag{8}\]

The solution to the master equation

\[\frac{d}{dt}\bar{\rho}(t)=-i[H(t),\bar{\rho}(t)]-\tilde{\mathcal{L}}_{t}(\bar{\rho}(t)) \tag{9}\]

where \(\tilde{\mathcal{L}}_{t}\) has jump operators and rates \(\{L_{k},\Gamma_{k}(t)\}\), is then obtained by weighing the average over all trajectories \(\mathbb{E}\) with \(\mu(t)\) for each trajectory. Concretely, the solution to eq. (9) is given by

\[\bar{\rho}(t)=\mathbb{E}(\mu(t)\sigma(t)). \tag{10}\]

As outlined above, the quantum trajectories from one reservoir can be used to simulate the dynamics of another by reweighing stochastic averages with the influence martingale. In this sense, the influence martingale plays the role of a quasi-probability measure in a post-processing procedure. Computing the expectation value of an observable \(O\) with eq. (10) gives \(\mathrm{tr}(O\bar{\rho}(t))=\mathbb{E}(\mu(t)\mathrm{tr}(O\sigma(t)))\). This expression clearly resembles the error mitigation via quasi-probability methods given in eq. (1).

### Error Mitigation

We consider a system which undergoes a unitary evolution governed by a Hamiltonian \(H(t)\). The free system evolution is disturbed by a bath which leads to a dissipator \(\tilde{\mathcal{L}}_{t}\) of the form (6) with a set of Lindblad operators and decoherence rates \(\{L_{k},\Gamma_{k}(t)\}_{k}\). The system evolution is thus described by the master equation

\[\frac{d}{dt}\rho(t)=-i[H(t),\rho(t)]+\tilde{\mathcal{L}}_{t}(\rho(t)). \tag{11}\]

We aim to cancel the influence of the reservoir by introducing another, specifically engineered reservoir, which leads to an extra dissipator \(\mathcal{L}_{t}\) with \(\{L_{k},\gamma_{k}(t)\}_{k}\), where the rates \(\gamma_{k}(t)\geq 0\). The rates are chosen such that there is a function \(m(t)\) for which the relation (7) holds true. The resulting system evolution is

\[\frac{d}{dt}\rho(t)=-i[H(t),\rho(t)]+\tilde{\mathcal{L}}_{t}(\rho(t))+\mathcal{L}_{t}(\rho(t)) \tag{12}\]

If the decoherence rates \(\Gamma_{k}(t)\) of the noise bath dissipator \(\tilde{\mathcal{L}}_{t}\) are all positive definite, the full master equation (12) can be unravelled in quantum jump trajectories. The jumps of the system are then caused by both the engineered reservoir and the noise source; the top curve in Fig. 2 illustrates such a system jump trajectory. We assume, however, that we are only able to continuously measure the engineered reservoir. Therefore we only obtain the measurement record of the reservoir, the bottom curve in Fig. 2.
With this measurement record we obtain the set of jump times and jump operators \(\{t_{j},L_{k_{j}}\}_{j}\) for the system state \(\psi(t)\) caused by the reservoir. With this set, we construct the martingale as in the last section (8) and reweigh the trajectories by it. Our error mitigation strategy is then as follows:

* Bring the system in contact with an additional reservoir, as illustrated in Fig. 1, which leads to an extra dissipator \(\mathcal{L}_{t}\) in the master equation with rates and jump operators \(\{\gamma_{k}(t),L_{k}\}_{k}\). The resulting evolution is given by equation (12).
* By continuously monitoring the additional reservoir, construct a measurement record \(\{t_{j},L_{k_{j}}\}\) with jumps caused by Lindblad operators \(L_{k_{j}}\) at times \(t_{j}\).
* Weigh the trajectories by the appropriate martingale function (8), such that the averaged state \(\rho^{*}(t)=\mathbb{E}(\mu(t)\psi(t)\psi^{\dagger}(t))\) solves the purely unitary evolution
\[\frac{d}{dt}\rho^{*}(t)=-i[H(t),\rho^{*}(t)].\] (13)

The quantum error mitigation cost, as defined in [33], is given in terms of the martingale by

\[C(t)=\mathbb{E}(|\mu(t)|)\leq\exp\left(\int_{0}^{t}ds\left[\min_{k}(-\Gamma_{k}(s))-|\min_{k}(-\Gamma_{k}(s))|\:\right]\right).\]

The bound for the cost bears similarity to the expression for the cost obtained in [18]. In fact, by making the same assumption on the Lindblad operators (i.e. \(L_{k}^{\dagger}L_{k}=\mathbb{I},\:\forall k\)) as [18], we recover the same expression for the cost, see Appendix B. Note that the above bound equals 1 when all decoherence rates \(\Gamma_{k}(t)\) are negative definite functions. In this case \(-\tilde{\mathcal{L}}_{t}\) is of the Lindblad form and thus the noise can be cancelled without the need to implement a quasi-probability distribution. In the above presentation we have made the assumption that adding the reservoir, a second bath, results in an extra dissipative term in the master equation. This is justified when the system and both baths are initially in a product state and the coupling to the baths is sufficiently weak such that the weak coupling limit can be applied. For an overview of the weak coupling limit see e.g. [39; 40], and for a discussion on the applicability of master equations in quantum computing models see [41].

### Example: Anisotropic Heisenberg Model

This model is described by the Hamiltonian

\[H=J\sum_{\langle ij\rangle}[(1+\gamma)\sigma_{x}^{(i)}\sigma_{x}^{(j)}+(1-\gamma)\sigma_{y}^{(i)}\sigma_{y}^{(j)}+\sigma_{z}^{(i)}\sigma_{z}^{(j)}]-\gamma h\sum_{i=1}^{4}\sigma_{y}^{(i)}, \tag{14}\]

where the sum over \(\langle ij\rangle\) denotes a nearest-neighbour interaction between the 4 spins ordered on a \(2\times 2\) lattice and \(\sigma_{x,y,z}^{(i)}\) are the Pauli operators acting on the \(i\)-th site. The qubits all experience local relaxation and dephasing noise, respectively described by the dissipators \(\mathcal{L}_{R}\) and \(\mathcal{L}_{D}\), thus \(\tilde{\mathcal{L}}=\mathcal{L}_{R}+\mathcal{L}_{D}\). The dissipators are of the form (3), where \(\mathcal{L}_{R}\) is characterised by the set of rates and Lindblad operators \(\{\Gamma_{R},\sigma_{-}^{(i)}\}_{i=1}^{4}\) and \(\mathcal{L}_{D}\) by \(\{\Gamma_{D},\sigma_{z}^{(i)}\}_{i=1}^{4}\). For our model we take \(\Gamma_{R}=\Gamma_{D}=0.001J\).
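Before turning to the numerical results, a toy single-qubit rendering of the reweighting of eqs. (8) and (10) is sketched below. It is not the 4-qubit model above: we take a single qubit with pure dephasing, \(H=0\), and constant rates, so that jumps arrive as a Poisson process and the reweighted coherence can be checked against the exact value; all parameter values are our own illustrative assumptions.

```python
# Illustrative influence-martingale reweighting (eqs. (8), (10)):
# dephasing trajectories with L = sigma_z and positive rate gamma are
# reweighed to realise the NEGATIVE rate -Gamma, with gamma + Gamma = m (eq. (7)).
import numpy as np

rng = np.random.default_rng(1)
Gamma, gamma = 0.05, 0.15          # target (negative) and simulated rates
m = gamma + Gamma                  # eq. (7), constant in this toy example
T, n_traj = 2.0, 200_000

coh = 0.0
for _ in range(n_traj):
    # since ||sigma_z psi|| = 1, jump counts are Poisson with mean gamma*T
    n_jumps = rng.poisson(gamma * T)
    sigma_01 = 0.5 * (-1.0) ** n_jumps                  # coherence of |+><+| flips at each jump
    mu = np.exp(m * T) * (-Gamma / gamma) ** n_jumps    # eq. (8)
    coh += mu * sigma_01
coh /= n_traj

# reweighted average vs exact coherence 0.5 * exp(+2*Gamma*T) of the rate -Gamma
print(coh, 0.5 * np.exp(2 * Gamma * T))
```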
Fig. 3 illustrates the performance of our proposed error mitigation scheme by plotting \(|1-\mathcal{F}|\), where \(\mathcal{F}\) is the fidelity

\[\mathcal{F}(\rho,\sigma)=\left(\mathrm{tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^{2}.\]

The (smooth) light red line shows the change in fidelity between the target state with unitary evolution governed by \(H\) (14) and the disturbed evolution with the extra dissipator \(\tilde{\mathcal{L}}=\mathcal{L}_{R}+\mathcal{L}_{D}\). The (smooth) dark red line shows the fidelity between the target state and the state with both the dissipator \(\tilde{\mathcal{L}}\) due to the noise source and the dissipator \(\mathcal{L}\) due to the additional reservoir which will be used for error mitigation. The (noisy) blue lines show the fidelity with the martingale-based error mitigation, which realises \(-\tilde{\mathcal{L}}\) by averaging over \(10^{4}\), \(10^{5}\) and \(10^{6}\) generated trajectories. Let \(\mathcal{F}_{E}\) be the fidelity of the target unitary evolution with the error-mitigated evolution and \(\mathcal{F}_{\tilde{\mathcal{L}}}\) with the unitary evolution disturbed by \(\tilde{\mathcal{L}}\). The difference \(\log_{10}(|1-\mathcal{F}_{E}|)-\log_{10}(|1-\mathcal{F}_{\tilde{\mathcal{L}}}|)\) averaged over the time interval in Fig. 3 is \(-1.5\), \(-1.8\) and \(-2.6\) for \(10^{4}\), \(10^{5}\) and \(10^{6}\) trajectories, respectively. We consider two types of errors in the quantum error mitigation:

a. **Errors in correctly identifying the Lindblad operators \(L_{k}\) and the decoherence rates \(\Gamma_{k}\)**: We model the errors on the Lindblad operators as \(L_{k}+\eta_{k}\,J_{k}\), where \(\eta_{k}\in[0,\mathcal{E}_{L}]\) and \(J_{k}\) is a matrix with uniform random entries between \(0\) and \(1\). \(J_{k}\) only acts on the site on which \(L_{k}\) acts, e.g. for \(\sigma_{x}^{(i)}\) the noise operator only acts on the \(i\)-th site. The errors on the rates are modelled as \(\Gamma_{k}+\delta_{k}\Gamma_{k}\), where \(\delta_{k}\in[0,\mathcal{E}_{R}]\).

b. **Errors in correctly identifying the jump times of quantum trajectories \(\{t_{j},L_{k_{j}}\}_{j}\)**: The errors on the jump times are modelled by drawing a random number \(\epsilon_{j}\in[-\mathcal{E}_{T}/2,\mathcal{E}_{T}/2]\) and shifting the jump timings to \(\{t_{j}+\epsilon_{j}\Gamma^{-1},L_{k_{j}}\}_{j}\).

Figure 1: Our proposed simulation scheme. The system is in contact with a noise source which adds the dissipator \(\tilde{\mathcal{L}}\) to its evolution. The system is brought in contact with an additional engineered reservoir which leads to the additional dissipator \(\mathcal{L}\). The engineered reservoir is constantly observed such that trajectories for the system state \(\sigma_{k}\) can be generated. By post-processing with the influence martingale \(\mu(t)\) (8) the state \(\rho^{*}\) can be constructed which solves the free evolution (13).

Figure 2: Illustration of a quantum jump trajectory of a quantum system (top) and the measurement record in the engineered reservoir (bottom).

Figure 3: Fidelity \(F\) of the pure unitary 4 qubit evolution governed by (14) with the disturbed and error mitigated evolutions. The pure evolution is disturbed by the dissipator \(\tilde{\mathcal{L}}=\mathcal{L}_{R}+\mathcal{L}_{D}\); the extra dissipator \(\mathcal{L}\) introduced for the error mitigation by itself worsens the outcome. The blue curves show the fidelity after error mitigation for \(10^{4}\), \(10^{5}\) and \(10^{6}\) trajectories.
Fig. 4 shows the improvement of the fidelity between the free evolution and the evolution disturbed by the noise source, \(\mathcal{F}_{\tilde{\mathcal{L}}}\), and the fidelity with error mitigation with errors, \(\mathcal{F}_{E}\), as described in point a. above, averaged over 10 realisations of the errors. Even for errors of the order of the norm of the Lindblad operators and the jump rates, the error mitigation still improves the fidelity. In Fig. 5 we show the fidelity for the errors defined in point b. as a function of the size of the errors, again averaged over 10 realisations of the errors.

## IV Error mitigation with simulated quantum trajectories

In the last section, we showed that performing post-processing on the quantum trajectories of a specially engineered reservoir can be used as a quantum error mitigation technique to cancel out the influence of a noise source. Constructing such a reservoir in full generality can be challenging. Luckily, we do not require an actual reservoir, but just that the system undergoes the appropriate quantum jump trajectories. In this section, we show that these trajectories can be simulated using the scheme developed by Lloyd and Viola in [31] by letting the system interact with one ancillary qubit and measuring that qubit. Using this scheme for error mitigation relies on the assumption that the interaction with the ancilla and the measurement happen on a much faster time scale than the interaction with the reservoir. The error mitigation scheme shown in Fig. 7 then consists of splitting the total interaction time with the bath up into intervals of size \(\Delta t\). Before each of these intervals, an interaction with the ancilla (\(U_{LV}\)) takes place, which negates the environmental influence of the coming step.

Figure 4: Improvement of the fidelity at the final time \(J\,t=50\) with quantum error mitigation including errors. \(\mathcal{F}_{\tilde{\mathcal{L}}}\) denotes the fidelity of the state freely evolving with the Hamiltonian \(H\) (14) with the state disturbed by \(\tilde{\mathcal{L}}=\mathcal{L}_{R}+\mathcal{L}_{D}\), and \(\mathcal{F}_{E}\) with the error-mitigated state with errors. \(\mathcal{E}_{L}\) gives the strength of the error on the Lindblad operators and \(\mathcal{E}_{R}\) the strength of the errors on the rates in \(\mathcal{L}\). Error mitigation is done with \(10^{4}\) trajectories and the values of the plotted function are averaged over 10 realisations of the noise.

Figure 5: Final state fidelity as a function of the error on jump detection times. The three (overlapping) full lines show the performance of the error mitigation for the state at time \(Jt=50\) for \(10^{4}\), \(10^{5}\) and \(10^{6}\) trajectories. The result is the average over 10 realisations of the noise. The (full) light red line shows the value of the final state fidelity under the noise described by the dissipator \(\tilde{\mathcal{L}}\).

### Simulating Quantum Trajectories for Lindblad master equations

In a physical system, Lindblad dissipators \(\mathcal{L}_{t}\) (3) can be realised with the simulation scheme for Lindblad dissipators proposed in [31]. The simulation scheme essentially realises the stochastic quantum jump trajectories which we discussed in Section III.1. These trajectories are generated by repeatedly letting the system interact with an ancilla qubit and measuring that ancilla. The Lindblad evolution generated by \(\mathcal{L}_{t}\) is then obtained by averaging over the generated trajectories.
Therefore, the influence martingale (8) can be used to weigh the trajectories just as in the previous section, such that time-local master equations with non-positive-definite decoherence rates can be simulated. We present here a simplified version of the scheme of [31] in the case of one Lindblad operator \(L\) with \(L^{\dagger}L=\mathbb{I}\) and the polar decomposition \(L=U\,X\), where \(U\) is a unitary and \(X\) a positive operator. To simulate a generator with Lindblad operator \(L\) and jump rate \(\gamma\) for time steps \(\Delta t=t_{C}/\alpha\), the system is coupled with an ancillary qubit through the Hamiltonian

\[H_{LV}=\sqrt{\frac{\gamma}{\alpha t_{C}}}X\otimes\sigma_{x} \tag{15}\]

for a time \(t_{C}\). Let the initial condition be \(\rho(0)=\rho_{S}\otimes|0\rangle\langle 0|\), where \(\rho_{S}\) is the system initial state and \(\sigma_{z}|0\rangle=-|0\rangle\). A straightforward computation gives an explicit expression for the unitary time-evolution operator of the composite system

\[e^{-iH_{LV}t_{C}}=\cos\left(\sqrt{\frac{\gamma}{\alpha t_{C}}}t_{C}X\right)\otimes\mathbb{I}-i\sin\left(\sqrt{\frac{\gamma}{\alpha t_{C}}}t_{C}X\right)\otimes\sigma_{x} \tag{16}\]

Let \(\rho(t_{C})\) be the composite state after an interaction time \(t_{C}\):

\[\rho(t_{C})=e^{-iH_{LV}t_{C}}\rho(0)e^{iH_{LV}t_{C}}. \tag{17}\]

After this interaction, the ancillary qubit is measured in the \(|0\rangle\), \(|1\rangle\) basis, with

\[\sigma_{z}|0\rangle=-|0\rangle\quad\text{and}\quad\sigma_{z}|1\rangle=|1\rangle.\]

We assume that \(\sqrt{\frac{\gamma}{\alpha t_{C}}}t_{C}\) is small enough that we can expand all expressions below up to second order in \(\sqrt{\frac{\gamma}{\alpha t_{C}}}t_{C}\). The state \(|0\rangle\) is measured with probability

\[p_{0}=1-\gamma\Delta t\,\text{tr}(L^{\dagger}L\rho_{S}) \tag{18}\]

and the system is in the state

\[\frac{\text{tr}_{2}(\rho(t_{C})\,\mathbb{I}\otimes|0\rangle\langle 0|)}{p_{0}}=\frac{\rho_{S}-\frac{\gamma\Delta t}{2}\{L^{\dagger}L,\rho_{S}\}}{1-\gamma\Delta t\,\text{tr}(L^{\dagger}L\rho_{S})}, \tag{19}\]

where \(\text{tr}_{2}\) is the partial trace over the qubit Hilbert space and we used that \(L=U\,X\), with \(U\) a unitary operator and thus \(X^{2}=L^{\dagger}L\). After measuring 1, which occurs with probability

\[p_{1}=\gamma\Delta t\,\text{tr}(X^{\dagger}X\rho_{S}),\]

the unitary \(U\) is applied to the system, such that it is in the state

\[U\frac{\text{tr}_{2}(\rho(t_{C})\,\mathbb{I}\otimes|1\rangle\langle 1|)}{p_{1}}U^{\dagger}=\frac{L\rho_{S}L^{\dagger}}{\text{tr}(L^{\dagger}L\rho_{S})} \tag{20}\]

After averaging over both measurement outcomes the system is in the state

\[\rho_{S}(\Delta t)=p_{0}\frac{\text{tr}_{2}(\rho(t_{C})\,\mathbb{I}\otimes|0\rangle\langle 0|)}{p_{0}}+p_{1}\,U\frac{\text{tr}_{2}(\rho(t_{C})\,\mathbb{I}\otimes|1\rangle\langle 1|)}{p_{1}}U^{\dagger} \tag{21}\]

and we find that

\[\frac{\rho_{S}(\Delta t)-\rho_{S}}{\Delta t}=\gamma\left(L\rho_{S}L^{\dagger}-\frac{1}{2}\{L^{\dagger}L,\rho_{S}\}\right) \tag{22}\]

We have thus simulated one time step \(\Delta t\) of the evolution of a master equation with Lindblad operator \(L\) and rate \(\gamma\). Let us take a closer look at the states after measurement outcomes 0 (19) and 1 (20). For small time steps \(\Delta t\), the probability to measure 0 is of order \(O(1-\Delta t)\) and the system state undergoes a small change of order \(O(\Delta t)\).
For small \(\Delta t\), mostly outcome 0 is measured and the state evolves continuously (19); on the rare events that 1 is measured, the system undergoes a sudden change, a quantum jump. A quantum trajectory is thus generated by \(N\) consecutive interactions with the ancillary qubit and is fully characterised by the set of jump times \(\{t_{j}=n_{j}\Delta t,L\}\), where \(n_{j}\in\mathbb{N},\,\forall j\). By averaging over all trajectories, the system evolution of a master equation with Lindblad operator \(L\) and rate \(\gamma\) is simulated for a time \(t=N\Delta t\). In the limit \(\Delta t\downarrow 0\) these trajectories are exactly the quantum jump trajectories discussed in Section III.1 of a dissipator with Lindblad operator \(L\) and rate \(\gamma\). The generalisation to \(K\) Lindblad operators and rates \(\{L_{k},\gamma_{k}\}_{k=1}^{K}\) is given in the supplementary material. This can either be done by performing \(K\) consecutive qubit interactions, each time with a different operator \(X_{k}\) and coupling strength in \(H_{LV}\) (15), and measurements. On the other hand, we can resort to a probabilistic approach and for each time step draw a random Lindblad operator \(L_{k}\) with probability \(1/K\), and choose \(X_{k}\) and the required coupling strength in the Hamiltonian (15) to implement one step with \(L_{k}\) and \(\gamma_{k}\). As in the single Lindblad operator case, the trajectory generated by \(N\) interactions with the ancilla is characterised by the set of jump times and Lindblad operators \(\{t_{j}=n_{j}\Delta t,L_{k_{j}}\}\), and in the limit \(\Delta t\downarrow 0\) they recover the quantum jump trajectories unravelling a dissipator with \(\{L_{k},\gamma_{k}\}_{k=1}^{K}\). A Hamiltonian term \(H(t)\) in the master equation can be simulated as well. This can be done by making use of Trotter's formula. After the \(k\)-th time step of size \(\Delta t\) is simulated, the unitary operator \(\exp(-iH(k\Delta t)\Delta t)\) is applied to the state of the system. In the limit \(\Delta t\downarrow 0\) the system state averaged over all trajectories solves the master equation with the simulated dissipator and the Hamiltonian \(H(t)\).

### Simulating Quantum Trajectories for general master equations

Let us now perform additional post-processing after the measurement on the ancilla qubit to implement the martingale pseudo-measure (8). Depending on the measurement outcome, we multiply by

\[\mu=\begin{cases}1+m\Delta t&\text{0 measurement}\\ \frac{\gamma-m}{\gamma}&\text{1 measurement}\end{cases} \tag{23}\]

After averaging over the measurement results we find

\[\bar{\rho}_{S}(\Delta t)=p_{0}(1+m\Delta t)\frac{\operatorname{tr}_{2}(\rho(t_{C})\,\mathbb{I}\otimes|0\rangle\langle 0|)}{p_{0}}+p_{1}\frac{\gamma-m}{\gamma}\,U\frac{\operatorname{tr}_{2}(\rho(t_{C})\,\mathbb{I}\otimes|1\rangle\langle 1|)}{p_{1}}U^{\dagger} \tag{24}\]

Up to second order in \(\sqrt{\frac{\gamma}{\alpha t_{C}}}t_{C}\), the above equation leads to

\[\frac{\bar{\rho}_{S}(\Delta t)-\rho_{S}}{\Delta t}=-\Gamma\left(L\rho_{S}L^{\dagger}-\frac{1}{2}\{L^{\dagger}L,\rho_{S}\}\right) \tag{25}\]

which simulates one time step of a master equation with a dissipator with Lindblad operator \(L\) and decoherence rate \(-\Gamma=\gamma-m\), which is not necessarily positive definite. After \(N\) interactions with the ancilla, a quantum trajectory \(\{t_{j}=n_{j}\Delta t,L\}\) is generated.
The total multiplicative factor equals

\[\mu(N\Delta t)=\prod_{n,\,n\leq N}\left(1+m\Delta t\right)\prod_{j,\,n_{j}\leq N}\left(\frac{-\Gamma}{\gamma}\right), \tag{26}\]

which in the limit \(\Delta t\downarrow 0\) recovers the influence martingale pseudo-measure (8) in the case of one Lindblad operator. For \(K\) Lindblad operators with rates \(\gamma_{k}\) and decoherence rates \(\Gamma_{k}\), the global multiplicative factor equals

\[\mu(t)=\prod_{n,\,n\Delta t\leq t}\left(1+m\Delta t\right)\prod_{j,\,n_{j}\Delta t\leq t}\left(\frac{-\Gamma_{k_{j}}}{\gamma_{k_{j}}}\right), \tag{27}\]

which, again, recovers the martingale (8) in the limit \(\Delta t\downarrow 0\). Fig. 6 shows the simulation of the time evolution of a 2-level atom in a photonic bandgap [42]. The evolution of the 2-level system is governed by the master equation

\[\frac{d}{dt}\rho(t)=i\frac{S(t)}{2}[\sigma_{+}\sigma_{-},\rho(t)]+\Gamma(t)\left(\sigma_{-}\rho(t)\sigma_{+}-\frac{1}{2}\{\sigma_{+}\sigma_{-},\rho(t)\}\right) \tag{28}\]

where

\[S(t)=-2\operatorname{Im}\frac{\frac{d}{dt}c(t)}{c(t)},\quad\Gamma(t)=-2\operatorname{Re}\frac{\frac{d}{dt}c(t)}{c(t)} \tag{29}\]

and \(c(t)\) is given by eq. (2.21) in [42] with \(\beta=-\delta\).

### Error mitigation

We illustrate how the simulation of quantum trajectories using an ancilla qubit can be used to implement the error mitigation scheme developed in Sec. III.2. Figure 7 shows how we use the extension of [31] to cancel the dissipator \(\tilde{\mathcal{L}}\) with Lindblad operators and jump rates \(\{L_{k},\Gamma_{k}(t)\}_{k}\) of the unwanted noise source. We assume that the time \(t_{LV}\) to implement one time step of the trajectory simulation scheme satisfies \(t_{LV}\ll|\tilde{\mathcal{L}}|^{-1}\), such that the step can be implemented without disturbance from the environment. The time \(t_{LV}\) consists of the time to implement the unitary evolution of system and ancilla (16), to perform the measurement, and to implement the eventual unitary on the system depending on the measurement outcome. We then divide the system evolution up into time intervals of length \(\Delta t\). At the beginning of each time interval, we implement one step of the quantum trajectory simulation scheme introduced in the last section. Then, for a simulated quantum trajectory with jumps \(\{t_{j}=n_{j}\Delta t,L_{k_{j}}\}_{j}\), we construct the measure (27). When averaging the trajectories weighted by the martingale function defined above, the evolution with dissipator \(-\tilde{\mathcal{L}}\) is simulated. Thus, in the limit \(\Delta t\downarrow 0\), the original dissipator due to the noise \(\tilde{\mathcal{L}}\) is cancelled.

Figure 6: Computation of the solution of the 2-level system master equation (28) using the simulation of quantum trajectories developed in Sec. IV.2. The initial 2-level system state equals \(\rho(t_{i})=(\mathbb{I}+\sigma_{z})/2\). The trace of the state is shown by the crosses, and the diamonds and circles show expectations normalised by the trace. The full lines show the numerical integration of (28).

Figure 7: Error mitigation scheme with simulated trajectories. The evolution is divided into steps of size \(\Delta t\). Before each step, the error mitigation is applied to cancel out the ensuing effect of the noise bath.
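A toy numerical rendering of one simulated time step with the martingale post-processing of eqs. (18)-(26) is sketched below for a single Lindblad operator with \(L^{\dagger}L=\mathbb{I}\); the choice \(L=\sigma_{z}\) (so \(U=\sigma_{z}\), \(X=\mathbb{I}\)) and all parameter values are our own illustrative assumptions, not parameters from the paper.

```python
# Illustrative single-step update of the trajectory simulation scheme with
# martingale factors (eqs. (18)-(26)), for one Lindblad operator with
# L^dagger L = I so that the no-jump state is unchanged (eq. (19)).
import numpy as np

rng = np.random.default_rng(2)
dt, gamma, m = 0.01, 0.2, 0.5       # step, simulated rate, martingale rate
Gamma = m - gamma                   # realised negative rate is -Gamma (eq. (7))
L = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z; here U = L, X = I

def lv_step(psi, mu):
    """One ancilla interaction + measurement; returns updated (psi, mu)."""
    p1 = gamma * dt * np.linalg.norm(L @ psi) ** 2   # jump probability, eq. (18)
    if rng.random() < p1:
        psi = L @ psi / np.linalg.norm(L @ psi)      # jump, eq. (20)
        mu *= (gamma - m) / gamma                    # factor -Gamma/gamma, eq. (23)
    else:
        mu *= 1 + m * dt                             # no-jump factor, eq. (23)
    return psi, mu

psi, mu = np.array([1, 1], dtype=complex) / np.sqrt(2), 1.0
for _ in range(100):
    psi, mu = lv_step(psi, mu)
# averaging mu * |psi><psi| over many such runs realises the rate -Gamma
```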
## Appendix A Unravelling Time-Local Master Equations

### Unravelling an equation of the Lindblad form

Lindblad dynamics for a state operator \(\rho(t)\) are generated by the differential equation

\[\frac{d}{dt}\rho(t)=-i[H(t),\rho(t)]+\sum_{k}\gamma_{k}(t)\left(L_{k}\rho(t)L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho(t)\}\right) \tag{A1}\]

where the jump rates \(\gamma_{k}(t)\geq 0\). The dynamics of the Lindblad equation can also be obtained by the so-called unravelling in quantum trajectories. These quantum trajectories are generated by stochastic differential equations containing either Wiener noise or random jumps governed by Poisson processes. Here we are concerned with the latter.

Figure 8: Fidelity \(F\) of the pure unitary 4 qubit evolution governed by (14) with the disturbed and error mitigated evolutions. \(10^{4}\) trajectories were generated and the total time interval is divided into different numbers of time steps, i.e. different \(\Delta t\) are chosen.

Consider the state vector \(\psi(t)\) which solves the stochastic Schrödinger equation

\[d\psi(t)=-iH(t)\psi(t)\,dt-\frac{1}{2}\sum_{k}\gamma_{k}(t)\left(L_{k}^{\dagger}L_{k}-\|L_{k}\psi(t)\|^{2}\right)\psi(t)\,dt+\sum_{k}\left(\frac{L_{k}\psi(t)}{\|L_{k}\psi(t)\|}-\psi(t)\right)dN_{k} \tag{A2}\]

where \(d\psi(t)=\psi(t+dt)-\psi(t)\) and the \(dN_{k}\) are increments of counting processes \(N_{k}\). The increments equal \(0\) when no jumps happen and \(1\) when a jump happens. The rates of the counting processes, conditioned on the system state, are

\[\mathbb{E}(dN_{k}|\psi(t))=\gamma_{k}(t)\|L_{k}\psi(t)\|^{2}dt. \tag{A3}\]

The solution \(\rho(t)\) of the Lindblad master equation (A1) is then obtained by the average

\[\rho(t)=\mathbb{E}(\psi(t)\psi^{\dagger}(t)). \tag{A4}\]

Equivalently to the stochastic Schrödinger equation for the state vector, Lindblad dynamics can be unravelled by a stochastic master equation for a state operator \(\sigma(t)\):

\[d\sigma(t)=-i[H(t),\sigma(t)]\,dt-\frac{1}{2}\sum_{k}\gamma_{k}(t)\left(\{L_{k}^{\dagger}L_{k},\sigma(t)\}-2\operatorname{tr}(L_{k}^{\dagger}L_{k}\sigma(t))\sigma(t)\right)dt+\sum_{k}\left(\frac{L_{k}\sigma(t)L_{k}^{\dagger}}{\operatorname{tr}(L_{k}^{\dagger}L_{k}\sigma(t))}-\sigma(t)\right)dN_{k}. \tag{A5}\]

It is indeed straightforward to check that setting \(\sigma(t)=\psi(t)\psi^{\dagger}(t)\) solves the above master equation.

### Unravelling time-local master equations with the influence martingale

Let us now introduce the martingale stochastic process \(\mu(t)\), whose evolution is enslaved to the stochastic state vector evolution (A2):

\[\begin{cases}d\mu(t)&=m(t)\mu(t)dt+\mu(t)\sum_{k}\left(\frac{\gamma_{k}(t)-m(t)}{\gamma_{k}(t)}-1\right)dN_{k}(t)\\ \mu(0)&=1.\end{cases} \tag{A6}\]

We then define the state

\[\rho^{\prime}(t)=\mathbb{E}(\mu(t)\psi(t)\psi^{\dagger}(t))\]

which, using the rules of stochastic calculus [43], can be proven to solve the master equation [28; 29]

\[\frac{d}{dt}\rho^{\prime}(t)=-i[H(t),\rho^{\prime}(t)]-\sum_{k}\Gamma_{k}(t)\left(L_{k}\rho^{\prime}(t)L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho^{\prime}(t)\}\right), \tag{A7}\]

where

\[\gamma_{k}(t)+\Gamma_{k}(t)=m(t). \tag{A8}\]

Note that since \(m(t)\) is an arbitrary scalar function, the decoherence rates \(\Gamma_{k}(t)\) are not necessarily positive definite.

## Appendix B Quantum Error Mitigation

### Error Mitigation Scheme

We consider a system undergoing a unitary evolution governed by a Hamiltonian \(H(t)\).
## Appendix B Quantum Error Mitigation ### Error Mitigation Scheme We consider a system undergoing a unitary evolution governed by a Hamiltonian \(H(t)\). The system's unitary evolution is disturbed by a noise source which leads to a dissipator \(\tilde{\mathcal{L}}_{t}\) in the system evolution \[\frac{d}{dt}\rho(t)=-i[H(t),\rho(t)]+\tilde{\mathcal{L}}_{t}(\rho(t)) \tag{10}\] with \[\tilde{\mathcal{L}}_{t}(\rho)=\sum_{k}\Gamma_{k}(t)\left(L_{k}\rho L_{k}^{ \dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho\}\right). \tag{11}\] We will cancel out the influence of the noise source by bringing the system in contact with a specifically engineered reservoir which leads to an additional dissipator \(\mathcal{L}_{t}\) in the evolution of the system state operator \[\frac{d}{dt}\rho(t)=-i[H(t),\rho(t)]+\tilde{\mathcal{L}}_{t}(\rho(t))+ \mathcal{L}_{t}(\rho(t)) \tag{12}\] with \[\mathcal{L}_{t}(\rho)=\sum_{k}\gamma_{k}(t)\left(L_{k}\rho L_{k}^{\dagger}- \frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho\}\right) \tag{13}\] and positive jump rates \[\gamma_{k}(t)+\Gamma_{k}(t)=m(t). \tag{14}\] If the additional reservoir is continuously observed, the system evolution becomes a stochastic master equation \[\frac{d}{dt}\sigma(t)= -i[H(t),\sigma(t)]+\tilde{\mathcal{L}}_{t}(\sigma(t))\] \[-\frac{1}{2}\sum_{k}\gamma_{k}(t)\left(\{L_{k}^{\dagger}L_{k}, \sigma(t)\}-\mathrm{tr}(L_{k}^{\dagger}L_{k}\sigma(t))\sigma(t)\right)\] \[+\sum_{k}\left(L_{k}\sigma(t)L_{k}^{\dagger}-\sigma(t)\right)dN_{k}. \tag{15}\] Note that \(\mathbb{E}(\sigma(t))\) solves (12). Let \(\mu(t)\) be the influence martingale evolving according to (8) and define the state \[\rho^{*}(t)=\mathbb{E}(\mu(t)\sigma(t)). \tag{16}\] Again, using the rules of stochastic calculus and using the relation (14), it is possible to show that \[\frac{d}{dt}\rho^{*}(t) =-i[H(t),\rho^{*}(t)]+\tilde{\mathcal{L}}_{t}(\rho^{*}(t))-\tilde{\mathcal{L}}_{t}(\rho^{*}(t)) \tag{17}\] \[=-i[H(t),\rho^{*}(t)]. \tag{18}\] ### Bound on the Cost The cost for the quantum error mitigation which we propose, relying on the influence martingale \(\mu(t)\), is given by \[C(t)=E(|\mu_{t}|). \tag{19}\] Using the result in [29] and the Cauchy-Schwarz inequality \(E(|\mu(t)|)\leq\sqrt{E(\mu(t)^{2})}\) (where we used that \(E(1)=1\)), we find that for \(m(t)=2\min_{k}(\Gamma_{k}(t),0)\) \[E(|\mu_{t}|)\leq\exp\left(\int_{0}^{t}ds\,[1-\mathrm{sign}(\min_{k}(\Gamma_{k}(s ),0))]\,|\min_{k}(\Gamma_{k}(s),0)|\right). \tag{20}\] Alternatively, assuming that \(L_{k}^{\dagger}L_{k}=\mathbb{I}\), \(\forall k\), similar to [29], we recover the expression for the cost of [18] \[E(|\mu_{t}|)\leq\exp\left(\sum_{k}\int_{0}^{t}ds\,[1-\mathrm{sign}(\Gamma_{k}(s ))]\,|\Gamma_{k}(s)|\right). \tag{21}\] ## Appendix C Extension of Lloyd Viola ### \(N\) positive channels Let us now consider a master equation with \(N\) Lindblad operators, i.e. with a dissipator of the form (3) with operators and rates \(\{L_{k},\gamma_{k}\}_{k=1}^{N}\). For \(N\) positive channels, the scheme developed in [31] requires at most \(N\) consecutive measurements. Let the Lindblad operators have the polar decomposition \(L_{k}=U_{k}A_{k}\), where \(U_{k}\) is a unitary and \(A_{k}=|L_{k}|=\sqrt{L_{k}^{\dagger}L_{k}}\) a positive operator. Before outlining the measurement scheme, it is convenient to introduce two results. **Proposition 1**.: _A couple of positive Kraus operators \(M_{1,2}\) that satisfies the completeness relation \(M_{1}^{\dagger}M_{1}+M_{2}^{\dagger}M_{2}=\mathbb{I}\) can always be written in terms of an operator \(X\)_ \[M_{1}=\cos(X),\qquad M_{2}=\sin(X). \tag{10}\]
Proof.: From the completeness relation \[M_{1}^{\dagger}M_{1}+M_{2}^{\dagger}M_{2}=\mathbb{I} \tag{11}\] we find that \(M_{1,2}\) are simultaneously diagonalisable. Indeed, let \(U\) be the unitary that diagonalises \(M_{1}\), written as \(D_{1}\). Acting on both sides of the completeness relation with \(U\) on the left and \(U^{\dagger}\) on the right gives \[D_{1}^{\dagger}D_{1}+UM_{2}^{\dagger}M_{2}U^{\dagger}=\mathbb{I}, \tag{12}\] so that \(UM_{2}^{\dagger}M_{2}U^{\dagger}\) is diagonal and therefore \(D_{2}=\sqrt{UM_{2}^{\dagger}M_{2}U^{\dagger}}\) is also diagonal. Now it should be clear that \(M_{1,2}\) can indeed be written as (10). **Lemma 1**.: _Let \(M_{1,2}\) be positive operators. If \(\psi\) is in the kernel of \(\sqrt{M_{1}^{2}+M_{2}^{2}}\) then it is also in the kernel of \(M_{1}\) and \(M_{2}\)._ Let us now define the operators \(B_{j}\) for \(j=0\) to \(N-1\), where \(B_{0}=\mathbb{I}-\frac{t_{C}^{2}}{2}\sum_{k}\varepsilon_{k}^{2}A_{k}^{\dagger}A_{k}\) and \[B_{j} =t_{C}\sqrt{\sum_{k=j}^{N}\varepsilon_{k}^{2}A_{k}^{\dagger}A_{k} }\quad\text{for}\quad j\geq 1, \tag{13}\] \[\varepsilon_{k} =\frac{\sqrt{\gamma_{k}\Delta t}}{t_{C}}. \tag{14}\] It should be clear from their definition that the \(B_{j}\) are positive operators. Note that this does not impose any constraints on the Lindblad rates \(\gamma_{k}\), the interaction time \(t_{C}\) or the simulated time-step length \(\Delta t\). We then implement up to \(N\) consecutive quantum measurements, as outlined in Section IV.2: * Measurement 0: Is performed with operators \(B_{0}\), \(B_{1}\) which satisfy the completeness relation \(B_{0}^{\dagger}B_{0}+B_{1}^{\dagger}B_{1}=\mathbb{I}\) (neglecting terms of order \(O(\Delta t^{4})\)). By Proposition 1 we know that we can implement this measurement just as outlined above. If the measurement outcome is 0 we are done; if the outcome is 1 we implement another measurement. * Measurement \(j>0\): We perform a measurement with \(G_{0}=\varepsilon_{j}t_{C}A_{j}B_{j}^{-1}\) and \(G_{1}=B_{j+1}B_{j}^{-1}\). Note that by Lemma 1 taking the inverse makes sense, since we have measured \(B_{j}\) in the last measurement and therefore the current state of the system cannot be orthogonal to \(B_{j}\). Furthermore, Lemma 1 states that if \(\psi\) is in the kernel of \(B_{j}\) then it is also in the kernel of \(B_{j+1}\) and \(A_{j}\). The measurement operators satisfy the completeness relation \[G_{0}^{\dagger}G_{0}+G_{1}^{\dagger}G_{1}=\mathbb{J}_{j}\] (15) where \(\mathbb{J}_{j}\psi=0\) when \(\psi\) is in the kernel of \(B_{j}\) and \(\mathbb{J}_{j}\psi=\psi\) otherwise (so Proposition 1 can still be used). By using the polar decomposition \(G_{0,1}=W_{0,1}\left|G_{0,1}\right|\) we can perform the measurement as outlined before. If the measurement outcome is 0 we apply the unitary \(U_{j}\) to the system; otherwise we perform measurement \(j+1\). * Measurement \(N-1\): The last measurement is performed with the operators \(G_{0}=\varepsilon_{N-1}t_{C}A_{N-1}B_{N-2}^{-1}\) and \(G_{1}=\varepsilon_{N}t_{C}A_{N}B_{N-2}^{-1}\). This can again be performed using the polar decomposition. For measurement outcome \(0\) we perform the unitary \(U_{N-1}\) and otherwise \(U_{N}\). To find the operator \(X\) and corresponding coupling rate \(\delta\) in the coupling Hamiltonian between system and ancilla \[H_{LV}=\delta t_{C}X\otimes\sigma_{x}, \tag{100}\] let \(V\) be the unitary that diagonalises \(|G_{0}|\), \(D=V|G_{0}|V^{\dagger}\), with \(D\) diagonal.
Then we impose \[\cos(\delta t_{C}V^{\dagger}XV)\stackrel{{!}}{{=}}D\,. \tag{101}\] Because the right-hand side is diagonal, we can apply the inverse cosine to every single element to find the unit-trace operator \(V^{\dagger}XV\) and, along with it, the factor \(\delta t_{C}=\text{tr}(\arccos(D))\). Applying again the unitary \(V\) we obtain the operator \(X\). Overall the presented procedure implements the target evolution \[\rho_{S} \to B_{0}\rho_{S}B_{0}+\sum_{k=1}^{N}\varepsilon_{k}^{2}t_{C}^{2}U_{ k}A_{k}\rho_{S}A_{k}U_{k}^{\dagger}=\rho_{S}+\sum_{k=1}^{N}\gamma_{k}\Delta t \left(L_{k}\rho_{S}L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho_{S} \}\right) \tag{102}\] \[\Rightarrow \frac{\rho_{S}(\Delta t)-\rho_{S}}{\Delta t}=\sum_{k=1}^{N} \gamma_{k}\left(L_{k}\rho_{S}L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho_{S}\}\right) \tag{103}\] ### Probabilistic Approach As an alternative to the procedure outlined in the previous section, a master equation comprising \(N\) Lindblad operators can be implemented probabilistically using the scheme developed in [31]. To this end, we first of all recall the implementation of a single Lindblad operator \(L=UX\), with \(U\) unitary and \(X\) positive, as it is presented in Section IV.2. We couple the system to an ancilla and evolve the system according to \[H_{LV}=\sqrt{\frac{\gamma}{\alpha t_{C}}}X\otimes\sigma_{x}\,, \tag{104}\] where \(\gamma\) is the corresponding rate, \(t_{C}\) is the coupling time and \(\alpha=t_{C}/\Delta t\), where \(\Delta t\) is the timestep we want to simulate. Subsequently we perform a measurement in the \(|0\rangle\), \(|1\rangle\) basis and apply \(U\) if the result is \(1\). This way we can implement dynamics according to \[\frac{\rho_{S}(\Delta t)-\rho_{S}}{\Delta t}=\gamma\left(L\rho_{S}L^{\dagger} -\frac{1}{2}\{L^{\dagger}L,\rho_{S}\}\right)\,. \tag{105}\] This procedure can straightforwardly be generalised to multiple Lindblad operators. Let us now consider \(N\) Lindblad operators \(L_{k}=U_{k}X_{k}\), \(U_{k}\) unitary and \(X_{k}\) positive, with corresponding rates \(\gamma_{k}\geq 0\) for \(k=1\ldots N\). We define the Hamiltonians and rates \[H_{LV,k} =\sqrt{\frac{\tilde{\gamma}_{k}}{\alpha t_{C}}}X_{k}\otimes\sigma_{x} \tag{106}\] \[\tilde{\gamma}_{k} =N\,\gamma_{k} \tag{107}\] for \(k=1\ldots N\). Now we randomly choose a Hamiltonian and rate \(H_{LV,k},\tilde{\gamma}_{k}\) from this set (meaning each pair has a probability \(p_{k}=1/N\) to be selected). This Hamiltonian we apply according to the aforementioned procedure to (on average) generate the dynamics \[\frac{\rho_{S}(\Delta t)-\rho_{S}}{\Delta t}=\tilde{\gamma}_{k}\left(L_{k}\rho_{S} L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho_{S}\}\right)\,. \tag{108}\] Iterating this for all timesteps and many realisations yields the target evolution \[\frac{\rho_{S}(\Delta t)-\rho_{S}}{\Delta t} =\sum_{k=1}^{N}p_{k}\tilde{\gamma}_{k}\left(L_{k}\rho_{S}L_{k}^{\dagger}- \frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho_{S}\}\right) \tag{109}\] \[=\sum_{k=1}^{N}\gamma_{k}\left(L_{k}\rho_{S}L_{k}^{\dagger}-\frac{1}{2} \{L_{k}^{\dagger}L_{k},\rho_{S}\}\right)\,. \tag{110}\] ### Influence Martingale Method In case the Lindblad operators \(L_{j}\) themselves satisfy a completeness relation \[\sum_{j}L_{j}^{\dagger}L_{j}=\mathbb{I} \tag{101}\] we can directly modify the method outlined in Subsection C.1.
The extension relies on two observations: * If \(\sum_{j}L_{j}^{\dagger}L_{j}=\mathbb{I}\), then for \(\|\psi\|=1\) we have \[\sum_{j}\|L_{j}\psi\|^{2}=1.\] (102) * For any set of rate functions \(\Gamma_{j}(t)\) that are not positive definite, we can find a set of positive definite functions \(\gamma_{j}(t)\) and a positive function \(C(t)\) such that \[\Gamma_{j}(t)=C(t)-\gamma_{j}(t).\] (103) With these two observations we can realise a master equation of the form \[\frac{d}{dt}\rho(t)=-i[H,\rho(t)]-\sum_{k}\Gamma_{k}(t)\left(L_{k}\rho(t)L_{k} ^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho(t)\}\right) \tag{104}\] by choosing a set of functions \(\gamma_{k}(t)\) and \(C(t)\) as outlined above and then performing the method outlined in Section C.1 to realise the master equation \[\frac{d}{dt}\rho(t)=-i[H,\rho(t)]+\sum_{k}\gamma_{k}(t)\left(L_{k}\rho(t)L_{k} ^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho(t)\}\right). \tag{105}\] However, we modify the protocol slightly by reweighting the measurement outcomes. Measurement \(0\) we reweight by \[1+C(t)\gamma^{2}t^{2}=1+C(t)\gamma^{2}t^{2}\sum_{k}L_{k}^{\dagger}L_{k} \tag{106}\] and the other measurements by \[\frac{-\Gamma_{j}(t)}{\gamma_{j}(t)}, \tag{107}\] which indeed gives equation (104). _Remark:_ even if (101) is not satisfied, we can still make the construction work. We know there exists a positive number \(a>0\) such that \[\sum_{j}L_{j}^{\dagger}L_{j}<a\mathbb{I}, \tag{108}\] which means there exists an operator \(P\) such that \[\sum_{j}L_{j}^{\dagger}L_{j}+P^{\dagger}P=\mathbb{I}, \tag{109}\] where we have rescaled all the \(L_{j}\) by \(1/\sqrt{a}\). We can then realise the desired master equation by choosing the rate for \(P\) to be \(\gamma(t)=C(t)\). Note that this means that whenever we measure \(P\), the trajectory is assigned weight \(0\).
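As a concrete check of Proposition 1 and of the inverse-cosine extraction of \(X\) used in the measurement scheme above, the following sketch builds a random pair of positive, mutually commuting Kraus operators satisfying the completeness relation and recovers \(X\) by diagonalising \(M_{1}\); the example matrices are hypothetical.

```python
import numpy as np

# Minimal sketch (hypothetical matrices): verify Proposition 1 and the
# inverse-cosine extraction of X, as done for |G_0| in the text.
rng = np.random.default_rng(3)

A = rng.normal(size=(3, 3))
w, V = np.linalg.eigh(A + A.T)                # random symmetric spectrum
d = 0.1 + 0.8 * (w - w.min()) / (w.max() - w.min())   # eigenvalues in (0, 1)
M1 = V @ np.diag(d) @ V.T
M2 = V @ np.diag(np.sqrt(1.0 - d**2)) @ V.T   # completeness: M1^2 + M2^2 = I

# extraction: diagonalise M1 and take the elementwise inverse cosine
w1, V1 = np.linalg.eigh(M1)
X = V1 @ np.diag(np.arccos(w1)) @ V1.T

cosX = V1 @ np.diag(np.cos(np.arccos(w1))) @ V1.T     # = cos(X)
sinX = V1 @ np.diag(np.sin(np.arccos(w1))) @ V1.T     # = sin(X)
print(np.allclose(cosX, M1), np.allclose(sinX, M2))   # True True
```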
2309.11643
Bottomonium Dissociation in a Rotating Plasma
Heavy vector mesons provide important information about the quark gluon plasma (QGP) formed in heavy ion collisions. This happens because the fraction of quarkonium states that are produced depends on the properties of the medium. The intensity of the dissociation process in a plasma is affected by the temperature, the chemical potential and the presence of magnetic fields. These effects have been studied by many authors in recent years. Another important factor that can affect the dissociation of heavy mesons, and that still lacks a better understanding, is the rotation of the plasma. Non-central collisions form a plasma with angular momentum. Here we use a holographic model to investigate the thermal spectrum of bottomonium quasi-states in a rotating medium in order to describe how a non-vanishing angular velocity affects the dissociation process.
Nelson R. F. Braga, Yan F. Ferreira
2023-09-20T21:15:11Z
http://arxiv.org/abs/2309.11643v1
# Bottomonium Dissociation in a Rotating Plasma ###### Abstract Heavy vector mesons provide important information about the quark gluon plasma (QGP) formed in heavy ion collisions. This happens because the fraction of quarkonium states that are produced depends on the properties of the medium. The intensity of the dissociation process in a plasma is affected by the temperature, the chemical potential and the presence of magnetic fields. These effects have been studied by many authors in recent years. Another important factor that can affect the dissociation of heavy mesons, and that still lacks a better understanding, is the rotation of the plasma. Non-central collisions form a plasma with angular momentum. Here we use a holographic model to investigate the thermal spectrum of bottomonium quasi-states in a rotating medium in order to describe how a non-vanishing angular velocity affects the dissociation process. ## I Introduction Heavy ion collisions, produced in particle accelerators, lead to the formation of a new state of matter, a plasma where quarks and gluons are deconfined. This so-called QGP behaves like a perfect fluid and lives for a very short time [1; 2; 3; 4]. The study of this peculiar state of matter is based on the analysis of the particles that are observed after the hadronization process occurs and the plasma disappears. For this to be possible, it is necessary to understand how the properties of the QGP, like temperature (\(T\)) and density (\(\mu\)), affect the spectra that reach the detectors. In particular, quarkonium states, like bottomonium, are very interesting since they survive the deconfinement process that occurs when the QGP is formed. They undergo a partial dissociation, with an intensity that depends on the characteristics of the medium, like \(T\) and \(\mu\). So, it is important to find out how the properties of the plasma affect the dissociation. Bottomonium quasi-states in a thermal medium can be described using holographic models [5; 6; 7; 8; 9; 10; 11]; see also [12; 13; 14]. In particular, the improved holographic model proposed in [8], which will be considered here, involves three energy parameters: one representing the heavy quark mass, another associated with the intensity of the strong interaction (string tension) and another with the non-hadronic decay of quarkonium. This model provides good estimates for masses and decay constants. Besides temperature, density and magnetic fields, another, less studied, property that affects the thermal behaviour is the rotation of the QGP, which occurs in non-central collisions. For previous works about rotation effects in the QGP, see for example [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. In particular, a holographic description of the QGP in rotation can be found in Refs. [22; 28]. A rotating plasma with uniform rotational speed is described holographically in these works by a rotating black hole with cylindrical symmetry. Rotation is obtained by a coordinate transformation and the resulting holographic model predicts that plasma rotation decreases the critical temperature of the confinement/deconfinement transition [28]. For a very recent study of charmonium in a rotating plasma see [33]. The purpose of this work is to study how rotation of the plasma affects the thermal spectrum of bottomonium quasi-states. In other words, we want to understand the effect of rotation on the dissociation process of \(b\bar{b}\). We will follow two complementary approaches.
One is to calculate the thermal spectral functions and the other is to find the quasinormal modes associated with bottomonium in rotation. The organization is the following. In section II we present a holographic model for bottomonium in a rotating plasma. In III we work out the equations of motion for the fields that describe the quasi-states. In IV we discuss the solutions, taking into account the incoming wave boundary conditions on the black hole horizon. In section V we calculate the spectral functions for bottomonium and in VI we present the complex frequencies of the quasinormal modes. Finally, section VII contains our conclusions and discussions about the results obtained. ## II Holographic model for quarkonium in the plasma Vector mesons are represented holographically by a vector field \(V_{m}=(V_{t},V_{1},V_{2},V_{3},V_{z})\), that lives, in the case of a non-rotating plasma, in a five-dimensional anti-de Sitter (AdS\({}_{5}\)) black hole space with metric \[\mathrm{d}s^{2}=\frac{R^{2}}{z^{2}}\Big{(}-f(z)\mathrm{d}t^{2}+(\mathrm{d}x^{ 1})^{2}+(\mathrm{d}x^{2})^{2}+(\mathrm{d}x^{3})^{2}+\frac{1}{f(z)}\mathrm{d}z ^{2}\Big{)}, \tag{1}\] with \[f(z)=1-\frac{z^{4}}{z_{h}^{4}}\,. \tag{2}\] The constant \(R\) is the AdS radius and the Hawking temperature of the black hole, given by \[T=\frac{1}{\pi z_{h}}, \tag{3}\] is identified with the temperature of the plasma. The action integral for the field has the form \[I=\int\mathrm{d}^{4}x\int_{0}^{z_{h}}\mathrm{d}z\,\sqrt{-g}\,\mathcal{L}, \tag{4}\] with the Lagrangian density \[\mathcal{L}=-\frac{1}{4g_{5}^{2}}\mathrm{e}^{-\phi(z)}g^{mp}g^{nq}F_{mn}F_{pq}, \tag{5}\] where \(F_{mn}=\nabla_{\!m}V_{n}-\nabla_{\!n}V_{m}=\partial_{m}V_{n}-\partial_{n}V_{m}\), with \(\nabla_{\!m}\) being the covariant derivative. The dilaton-like background field \(\phi(z)\) is [7; 8] \[\phi(z)=\kappa^{2}z^{2}+Mz+\tanh\!\Big{(}\frac{1}{Mz}-\frac{\kappa}{\sqrt{ \Gamma}}\Big{)}. \tag{6}\] The back-reaction of \(\phi(z)\) on the metric is not taken into account. This modified dilaton is introduced in order to obtain an approximation for the values of masses and decay constants of heavy vector mesons. The three parameters of the model are fixed at zero temperature to give the best values for these quantities, especially the decay constants, compared with the experimental values for bottomonium found in the Particle Data Group tables [34]: \(\kappa_{b}=2.45\,\mathrm{GeV}\), \(\sqrt{\Gamma_{b}}=1.55\,\mathrm{GeV}\) and \(M_{b}=6.2\,\mathrm{GeV}\). They give the results presented in table 1. In order to have some characterization of the quality of the fit, one can define the root mean square percentage error (RMSPE) as \[\mathrm{RMSPE}=100\%\times\sqrt{\frac{1}{N-N_{p}}\sum_{i=1}^{N}\Big{(}\frac{ y_{i}-\hat{y}_{i}}{\hat{y}_{i}}\Big{)}^{2}}, \tag{7}\] where \(N=8\) is the number of experimental points (4 masses and 4 decay constants), \(N_{p}=3\) is the number of parameters of the model, the \(y_{i}\)'s are the values of masses and decay constants predicted by the model and the \(\hat{y}_{i}\)'s are the experimental values of masses and decay constants. With this definition, we have \(\text{RMSPE}=14.8\%\) for bottomonium.
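For concreteness, Eq. (7) can be evaluated directly from the entries of table 1; with the rounded values quoted there the result is about 14.9%, which matches the quoted 14.8% up to the rounding of the tabulated values.

```python
import numpy as np

# Eq. (7) evaluated with the entries of table 1: 4 masses and 4 decay
# constants (N = 8 points) and N_p = 3 model parameters.
y_model = np.array([6905.0, 8871.0, 10442.0, 11772.0,   # masses (MeV)
                    719.0, 521.0, 427.0, 375.0])        # decay consts (MeV)
y_exp = np.array([9460.30, 10023.26, 10355.2, 10579.4,
                  715.0, 497.4, 430.1, 341.0])

N, Np = len(y_exp), 3
rmspe = 100.0 * np.sqrt(np.sum(((y_model - y_exp) / y_exp) ** 2) / (N - Np))
print(f"RMSPE = {rmspe:.1f}%")   # ~14.9% with the rounded tabulated values
```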
Now, in order to analyse the case of a rotating plasma, with homogeneous angular velocity, we consider an \(AdS_{5}\) space with cylindrical symmetry by writing the metric as \[\text{d}s^{2}=\frac{R^{2}}{z^{2}}\Big{(}-f(z)\text{d}t^{2}+\ell^{2}\text{d} \varphi^{2}+(\text{d}x^{1})^{2}+(\text{d}x^{2})^{2}+\frac{1}{f(z)}\text{d}z^{2 }\Big{)}, \tag{8}\] where \(R\) is again the AdS radius and \(\ell\) is the hypercylinder radius. As in refs. [22; 23; 28], we introduce rotation via the Lorentz-like coordinate transformation \[t \rightarrow\gamma(t+\Omega\ell^{2}\varphi) \tag{9}\] \[\varphi \rightarrow\gamma(\Omega t+\varphi),\] with \[\gamma=\frac{1}{\sqrt{1-\Omega^{2}\ell^{2}}}, \tag{10}\] where \(\Omega\) is the angular velocity of the rotation. With this transformation, the metric (8) becomes \[\begin{split}\text{d}s^{2}=\frac{R^{2}}{z^{2}}\Big{[}-\gamma^{2} \big{(}f(z)-\Omega^{2}\ell^{2}\big{)}\text{d}t^{2}+2\gamma^{2}\big{(}1-f(z) \big{)}\Omega\ell^{2}\text{d}t\text{d}\varphi&+\gamma^{2}\big{(} 1-\Omega^{2}\ell^{2}f(z)\big{)}\ell^{2}\text{d}\varphi^{2}\\ &+(\text{d}x^{1})^{2}+(\text{d}x^{2})^{2}+\frac{1}{f(z)}\text{d} z^{2}\Big{]}.\end{split} \tag{11}\] The temperature of the rotating AdS black hole is [22; 23; 28] \[T=\frac{1}{\pi z_{h}}\sqrt{1-\Omega^{2}\ell^{2}}. \tag{12}\] Note that we recover (8) by doing \(\Omega\to 0\) in (11). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{Bottomonium Masses and Decay Constants} \\ \hline State & Experimental Masses (MeV) & Masses from the model (MeV) & Experimental Decay Constants (MeV) & Decay Constants from the model (MeV) \\ \hline \(1S\) & \(9460.30\pm 0.26\) & 6905 & \(715.0\pm\ 4.8\) & 719 \\ \hline \(2S\) & \(10023.26\pm 0.31\) & 8871 & \(497.4\pm\ 4.5\) & 521 \\ \hline \(3S\) & \(10355.2\ \pm 0.5\) & 10442 & \(430.1\pm\ 3.9\) & 427 \\ \hline \(4S\) & \(10579.4\ \pm 1.2\) & 11772 & \(341\ \pm 18\) & 375 \\ \hline \end{tabular} \end{table} Table 1: Comparison of bottomonium masses and decay constants obtained experimentally [34] and from the tangent model. We assume that the rotating black hole metric of Eq. (11) is dual to a cylindrical slice of rotating plasma. This interpretation can be justified by analysing the angular momentum \(J\). For the metric (11), \(J\) can be calculated [28] with the result \[J=-\frac{\partial\Phi}{\partial\Omega}=\frac{2L^{3}}{\kappa^{2}}\frac{\Omega}{z _{h}^{4}(1-\Omega^{2}l^{2})}\;, \tag{13}\] while for the metric (8) one has \(J=0\). Thus, the coordinate transformation is adding angular momentum to the system and therefore representing a plasma in rotation. ## III Equations of motion By extremizing the action (4), we find the equations of motion \[\partial_{n}(\sqrt{-g}\mathrm{e}^{-\phi}F^{mn})=0. \tag{14}\] We now choose a Fourier component of the field and, for simplicity, consider zero momentum (meson at rest), \(V_{m}(t,\mathbf{x},z)=v_{m}(\omega,z)\mathrm{e}^{-\mathrm{i}\omega t}\). We also choose the gauge \(V_{z}=0\).
The equations of motion (14) become \[\frac{1-\Omega^{2}\ell^{2}f}{1-\Omega^{2}\ell^{2}}\frac{\omega^{2}}{f^{2}}v_{ i}+\Big{(}\frac{f^{\prime}}{f}-\frac{1}{z}-\phi^{\prime}\Big{)}v_{i}^{\prime}+v_{ i}^{\prime\prime}=0\qquad(i=1,2), \tag{15}\] \[-\Big{[}\;\frac{1}{1-\Omega^{2}\ell^{2}f^{-1}}\frac{f^{\prime}}{f}+\frac{1-f} {f-\Omega^{2}\ell^{2}}\Big{(}\frac{1}{z}+\phi^{\prime}\Big{)}\;\Big{]}\Omega \ell^{2}v_{t}^{\prime}+\frac{1-f}{f-\Omega^{2}\ell^{2}}\Omega\ell^{2}v_{t}^{ \prime\prime}\] \[+\frac{1-\Omega^{2}\ell^{2}}{1-\Omega^{2}\ell^{2}f^{-1}}\frac{\omega^{2}}{f^{ 2}}v_{\varphi}+\Big{[}\;\frac{1}{1-\Omega^{2}\ell^{2}f^{-1}}\frac{f^{\prime}} {f}-\frac{1}{z}-\phi^{\prime}\;\Big{]}v_{\varphi}^{\prime}+v_{\varphi}^{ \prime\prime}=0, \tag{16}\] \[\Big{[}\;\frac{1}{f^{-1}-\Omega^{2}\ell^{2}}\frac{f^{\prime}}{f}+ \frac{1-f}{1-\Omega^{2}\ell^{2}f}\Big{(}\frac{1}{z}+\phi^{\prime}\Big{)}\; \Big{]}\Omega v_{\varphi}^{\prime}-\frac{1-f}{1-\Omega^{2}\ell^{2}f}\Omega v_ {\varphi}^{\prime\prime}\] \[-\Big{[}\;\frac{1}{(\Omega^{2}\ell^{2}f)^{-1}-1}\frac{f^{\prime}}{f}+\frac{1 }{z}+\phi^{\prime}\;\Big{]}v_{t}^{\prime}+v_{t}^{\prime\prime}=0 \tag{17}\] and \[v_{t}^{\prime}-\frac{1-f}{1-\Omega^{2}\ell^{2}f}\Omega v_{\varphi}^{\prime}=0, \tag{18}\] where the prime stands for the derivative with respect to \(z\), \(f^{-1}=1/f\), and, for simplicity, we omit the dependence on \(z\) of \(f\) and \(\phi\) and the dependence on \((\omega,z)\) of the fields \(v_{\mu}\). These equations are not all independent: if we substitute (18) into (17) we obtain an identity, and substituting the same equation into (16), we obtain an equation for \(v_{\varphi}\) only. With this, the system of equations of motion simplifies to \[\frac{1-\Omega^{2}\ell^{2}f}{1-\Omega^{2}\ell^{2}}\frac{\omega^{2}}{f ^{2}}v_{i}+\Big{(}\frac{f^{\prime}}{f}-\frac{1}{z}-\phi^{\prime}\Big{)}v^{\prime} _{i}+v^{\prime\prime}_{i}=0\qquad(i=1,2), \tag{19}\] \[\frac{1-\Omega^{2}\ell^{2}f}{1-\Omega^{2}\ell^{2}}\frac{\omega^{2} }{f^{2}}v_{\varphi}+\Big{(}\frac{1}{1-\Omega^{2}\ell^{2}f}\frac{f^{\prime}}{f} -\frac{1}{z}-\phi^{\prime}\Big{)}v^{\prime}_{\varphi}+v^{\prime\prime}_{\varphi }=0,\] (20) \[v^{\prime}_{t}-\frac{1-f}{1-\Omega^{2}\ell^{2}f}\Omega v^{\prime }_{\varphi}=0. \tag{21}\] Note that when \(\Omega\ell=0\), equations (19) and (20) are the same and the two states are degenerate. Rotation breaks this degeneracy. From this point on, we will divide our analysis into two cases, according to the three possible polarizations. The first case is for polarizations in directions \(x^{1}\) and \(x^{2}\), for which \((v_{m})=(0,v,0,0,0)\) or \((v_{m})=(0,0,v,0,0)\). The second case is for the polarization in direction \(\varphi\), for which \((v_{m})=(0,0,0,v,0)\). ## IV Solving the equations of motion ### Near the horizon behavior of the solution By approximating the function \(f\) by the first term of its power series at \(z=z_{h}\), we write \[f(z)\simeq f^{\prime}(z_{h})(z-z_{h})\qquad(\text{for }z\simeq z_{h}) \tag{22}\] and we see that, in this region, equations (19) and (20) both take the form \[\frac{\gamma^{2}\omega^{2}}{f^{\prime}(z_{h})^{2}(z-z_{h})^{2}}\,v_{\text{hor} }(z)+\frac{1}{z-z_{h}}\,v^{\prime}_{\text{hor}}(z)+\,v^{\prime\prime}_{\text{ hor}}(z)=0, \tag{23}\] which, in terms of the temperature, can be written as \[\frac{\omega^{2}}{(4\pi T)^{2}}\,v_{\text{hor}}(z)+(z-z_{h})\,v^{\prime}_{\text {hor}}(z)+(z-z_{h})^{2}\,v^{\prime\prime}_{\text{hor}}(z)=0. \tag{24}\]
The two solutions of this equation are \[\Big{(}1-\frac{z}{z_{h}}\Big{)}^{\!-\!\text{i}\omega/4\pi T}\qquad\qquad\text {and}\qquad\qquad\Big{(}1-\frac{z}{z_{h}}\Big{)}^{\!+\!\text{i}\omega/4\pi T}. \tag{25}\] The solution with the minus sign in the exponent corresponds to an infalling wave at the horizon, while the other, with positive sign, corresponds to an outgoing wave. This becomes clear if one changes to the Regge-Wheeler tortoise coordinate, as explained in refs. [35; 36]. The black hole allows only infalling waves at the horizon. Therefore, the field that solves the complete equations of motion has to obey the condition \[v(z)\simeq A\Big{(}1-\frac{z}{z_{h}}\Big{)}^{-\mathrm{i}\omega/4\pi T}\qquad( \text{for }z\simeq z_{h}), \tag{26}\] where \(A\) is a normalization constant. The norm of the field will have no importance for us, so we can set \(A=1\). In order to solve the complete equations of motion (19) and (20) numerically, we translate the infalling wave condition at the horizon into two boundary conditions to be imposed at a point \(z=z_{0}\) close to the horizon: \[v(z_{0})=v_{\text{hor},\,p}(z_{0})\qquad\text{and}\qquad v^{\prime}(z_{0})=v^{ \prime}_{\text{hor},\,p}(z_{0}), \tag{27}\] with \[v_{\text{hor},\,p}(z)=\Big{(}1-\frac{z}{z_{h}}\Big{)}^{-\mathrm{i}\omega/4\pi T }\sum_{n=0}^{p}a_{n}\Big{(}1-\frac{z}{z_{h}}\Big{)}^{\!\!n}. \tag{28}\] The function \(v_{\text{hor},\,p}(z)\) is just the infalling wave expression (26) (with \(A\) set to 1) times a polynomial correction of order \(p\) introduced for purposes of the numerical calculations. The coefficient \(a_{0}\) is, of course, 1 and the other coefficients \(a_{n}\) are determined by imposing the equation of motion (19) or (20) with \(v=v_{\text{hor},\,p}\) to be valid up to the order \(p\). The coefficients \(a_{n}\) are, therefore, the same for polarization in directions \(x^{1}\) and \(x^{2}\) but are different from the ones for polarization in the direction \(\varphi\). The point \(z_{0}\) is a chosen value of \(z\), close to \(z_{h}\), where the approximation \(v(z)\simeq v_{\text{hor},\,p}(z)\) is valid. Using equation (26), the leading term of the field, for \(z\) close to \(z_{h}\), can be written as: \[\cos\!\Big{[}\,\frac{\omega}{4\pi T}\ln\!\Big{(}1-\frac{z}{z_{h}}\Big{)}\, \Big{]}-\mathrm{i}\sin\!\Big{[}\,\frac{\omega}{4\pi T}\ln\!\Big{(}1-\frac{z}{ z_{h}}\Big{)}\,\Big{]}. \tag{29}\] In the region of values of \(z\) close to the horizon, very small changes in \(z\) produce significant changes in \(v(z)\), a problem for the numerical calculation. For this reason, the point \(z_{0}\) cannot be chosen too close to \(z_{h}\). This is the reason why we introduce the polynomial correction from (28) in the infalling condition. In this work, the value \(z_{0}=0.9z_{h}\) with 20 coefficients \(a_{n}\) was sufficient. Figure 1 shows the solution of the equation of motion for the non-rotating plasma at a specific temperature and for some representative value of \(\omega\), as well as two approximations for this solution near the horizon. From this figure, one can see that if we had chosen \(z_{0}=0.99z_{h}\), for example, instead of \(z_{0}=0.9z_{h}\), we would be in the unstable region and any small numerical error would be significantly propagated. Also, if we had used 10 coefficients, for example, instead of 20, we wouldn't have a good approximation for the field at \(z_{0}=0.9z_{h}\). For more discussion on this method, see, for example, [37].
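The following is a minimal numerical sketch of this procedure for the non-rotating case (\(\Omega\ell=0\)), integrating equation (19) from \(z_{0}=0.9z_{h}\) towards the boundary with SciPy. For brevity it seeds the integration with the leading infalling term (26) only, rather than the 20-coefficient corrected seed (28) used in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch for the non-rotating case (Omega*l = 0).  Units: GeV.
kappa, M, sqrtGamma = 2.45, 6.2, 1.55        # bottomonium model parameters
T = 0.150                                    # temperature
zh = 1.0 / (np.pi * T)
omega = 10.0                                 # representative frequency

f = lambda z: 1.0 - (z / zh) ** 4
fp = lambda z: -4.0 * z**3 / zh**4
# phi'(z) for phi(z) = kappa^2 z^2 + M z + tanh(1/(M z) - kappa/sqrt(Gamma))
phip = lambda z: (2.0 * kappa**2 * z + M
                  - (1.0 - np.tanh(1.0/(M*z) - kappa/sqrtGamma)**2) / (M * z**2))

def rhs(z, y):
    # real/imaginary split of eq. (19): v'' = -(w^2/f^2) v - (f'/f - 1/z - phi') v'
    v, dv = y[0] + 1j*y[1], y[2] + 1j*y[3]
    ddv = -(omega**2 / f(z)**2) * v - (fp(z)/f(z) - 1.0/z - phip(z)) * dv
    return [dv.real, dv.imag, ddv.real, ddv.imag]

z0 = 0.9 * zh
v0 = (1.0 - z0/zh) ** (-1j * omega / (4.0*np.pi*T))      # leading term of (26)
dv0 = (1j * omega / (4.0*np.pi*T)) * v0 / (zh - z0)      # its z-derivative
y0 = [v0.real, v0.imag, dv0.real, dv0.imag]
sol = solve_ivp(rhs, [z0, 1e-4], y0, rtol=1e-8, atol=1e-10)
print("v(omega, z -> 0) ~", sol.y[0, -1] + 1j * sol.y[1, -1])
```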
This method produces a numerical solution of equations (19) or (20) with the infalling wave condition at \(z_{h}\) for any value of \(\omega\). We will use these solutions to calculate spectral functions and quasinormal modes in the following sections. ## V Spectral functions Spectral functions are defined in terms of the retarded Green's functions as \[\varrho_{\mu\nu}(\omega)=-2\,\mathrm{Im}\,G_{\mu\nu}^{R}(\omega). \tag{30}\] Figure 1: Real (left) and Imaginary (right) parts of the field \(v(\omega,z)\) (blue) and its approximations near the horizon \(v_{\mathrm{hor},\,p}(\omega,z)\) with 20 (orange) and 10 (green) coefficients for the representative value of \(\omega=10\,\mathrm{GeV}\) for the non-rotating plasma at temperature \(T=150\,\mathrm{MeV}\). In the first line we plot the field in all its domain. In the second line we zoom in on the region from \(z=0.7z_{h}\) to \(z=z_{h}\). The vertical lines highlight the values \(z=0.9z_{h}\) and \(z=0.99z_{h}\). Spectral functions provide an important way of analysing the dissociation of quarkonia in a thermal medium. At zero temperature, the spectral function of a quarkonium, considering just one-particle states, is a set of delta peaks at the values of the holographic masses of table 1. At finite temperature, these peaks acquire a finite height and a non-zero width. As the temperature increases, the height of each peak decreases and its width increases. This broadening effect of the peaks indicates dissociation in the medium. In this section we calculate the spectral function for bottomonium in a rotating plasma at three fixed temperatures in order to analyse the effect of the rotational speed on the dissociation process. ### Retarded Green's Function In the four-dimensional vector gauge theory we define a retarded Green's function of the currents \(J_{\nu}\), which represent the heavy vector mesons, as \[G^{R}_{\mu\nu}=-{\rm i}\int{\rm d}^{4}x{\rm e}^{-{\rm i}p\cdot x}\Theta(t) \langle[J_{\mu}(x),J_{\nu}(0)]\rangle. \tag{31}\] The Son-Starinets prescription [38] provides a way of extracting the retarded Green's function from the on-shell action of the dual vector fields in AdS space \[I_{\rm on\;shell} = -\frac{1}{4g_{5}^{2}}\int{\rm d}^{4}x\int_{0}^{z_{h}}{\rm d}z\, \sqrt{-g}\,{\rm e}^{-\phi(z)}F_{mn}F^{mn} \tag{32}\] \[= -\frac{1}{2g_{5}^{2}}\int{\rm d}^{4}x\int_{0}^{z_{h}}{\rm d}z\, \partial_{m}(\sqrt{-g}\,{\rm e}^{-\phi(z)}V_{n}F^{mn}),\] where we have used the equations of motion, (14), to go from the first line to the second.
In the gauge \(V_{z}=0\), we have \(V_{n}F^{mn}=V_{\nu}F^{m\nu}\) and, therefore, \[I_{\rm on\;shell}=-\frac{1}{2g_{5}^{2}}\biggl{[}\int{\rm d}x^{1}{\rm d}x^{2}{ \rm d}\varphi\,{\rm d}z\,\sqrt{-g}\,{\rm e}^{-\phi(z)}V_{\nu}F^{t\nu}\Big{|}_{ t=-\infty}^{t=+\infty}+\ldots+\int{\rm d}t\,{\rm d}x^{1}{\rm d}x^{2}{\rm d} \varphi\,\sqrt{-g}\,{\rm e}^{-\phi(z)}V_{\nu}F^{z\nu}\Big{|}_{z=0}^{z=z_{h}} \biggr{]}. \tag{33}\] The boundary terms in the directions \(t\), \(x^{1}\), \(x^{2}\) and \(\varphi\) do not contribute for fields that vanish at infinity, so only the term at the \(z\) boundaries survives: \[I_{\rm on\;shell}=-\frac{1}{2g_{5}^{2}}\int{\rm d}t\,{\rm d}x^{1}{\rm d}x^{2}{ \rm d}\varphi\,\sqrt{-g}\,{\rm e}^{-\phi(z)}V_{\nu}F^{z\nu}\Big{|}_{z=0}^{z=z_ {h}}. \tag{34}\] In momentum space and considering the meson at rest, we find \[I_{\text{on shell}}=-\frac{1}{2g_{5}^{2}}\int\mathrm{d}\omega\,\sqrt{-g}\, \mathrm{e}^{-\phi(z)}g^{zz}g^{\mu\nu}v_{\mu}(-\omega,z)v^{\prime}_{\nu}(\omega,z)\Big{|}_{z=0}^{z=z_{h}}. \tag{35}\] Using the equation of motion (18) to substitute \(v^{\prime}_{t}\) in terms of \(v^{\prime}_{\varphi}\), one eliminates \(v_{t}\), ending up with \[\begin{split}I_{\text{on shell}}=-\frac{1}{2g_{5}^{2}}\int \mathrm{d}\omega\,\sqrt{-g}\,\mathrm{e}^{-\phi(z)}g^{zz}\Big{\{}\sum_{j=1,2}g^ {jj}&v_{j}(-\omega,z)v^{\prime}_{j}(\omega,z)\\ &+\Big{[}\,\frac{(g^{t\varphi})^{2}}{-g^{tt}}+g^{\varphi\varphi} \,\Big{]}v_{\varphi}(-\omega,z)v^{\prime}_{\varphi}(\omega,z)\Big{\}}\bigg{|}_ {z=0}^{z=z_{h}}.\end{split} \tag{36}\] Now we separate the value of the field at the boundary \(z=0\) by defining the bulk-to-boundary propagator \(\mathcal{E}_{\mu}(\omega,z)\) such that \[v_{\mu}(\omega,z)=\mathcal{E}_{\mu}(\omega,z)v^{0}_{\mu}(\omega)\qquad\qquad \text{(no summation, $\mu=1,2,\varphi$)}, \tag{37}\] with \(v^{0}_{\mu}(\omega)=v_{\mu}(\omega,0)\). This implies the boundary condition \(\mathcal{E}_{\mu}(\omega,0)=1\). Using the definition (37) in equation (36), the on-shell action becomes \[\begin{split}I_{\text{on shell}}=-\frac{1}{2g_{5}^{2}}\int \mathrm{d}\omega\,\sqrt{-g}\,\mathrm{e}^{-\phi(z)}g^{zz}\Big{\{}& \sum_{j=1,2}g^{jj}\mathcal{E}_{j}(-\omega,z)v^{0}_{j}(-\omega) \mathcal{E}^{\prime}_{j}(\omega,z)v^{0}_{j}(\omega)\\ &+\Big{[}\,\frac{(g^{t\varphi})^{2}}{-g^{tt}}+g^{\varphi\varphi} \,\Big{]}\mathcal{E}_{\varphi}(-\omega,z)v^{0}_{\varphi}(-\omega)\mathcal{E}^ {\prime}_{\varphi}(\omega,z)v^{0}_{\varphi}(\omega)\,\Big{\}}\bigg{|}_{z=0}^{ z=z_{h}}.\end{split} \tag{38}\] Then, applying the Son-Starinets prescription, we determine the retarded Green's functions \[G^{R}_{jj}(\omega)=-\frac{\ell R}{g_{5}^{2}}\mathrm{e}^{-\phi(0)}\lim_{z\to 0 }\frac{1}{z}\mathcal{E}^{\prime}_{j}(\omega,z)\qquad\text{(no summation, $j=1,2$)} \tag{39}\] and \[G^{R}_{\varphi\varphi}(\omega)=-\frac{R}{\ell g_{5}^{2}}\mathrm{e}^{-\phi(0)} \lim_{z\to 0}\frac{1}{z}\mathcal{E}^{\prime}_{\varphi}(\omega,z). \tag{40}\] The other \(G^{R}_{\mu\nu}\) vanish. In vacuum (\(T=0\)), the Green's function is just \[\Pi(p^{2})=\sum_{n=1}^{\infty}\frac{f_{n}^{2}}{-p^{2}-m_{n}^{2}+\mathrm{i} \varepsilon}, \tag{41}\] where \(m_{n}\) and \(f_{n}\) are the mass and decay constant of the radial states of excitation level \(n\) of bottomonium.
The imaginary part of this Green's function and, hence, the spectral function at zero temperature is proportional to \[\sum_{n=1}^{\infty}f_{n}^{2}\,\delta(p^{2}+m_{n}^{2}), \tag{42}\] a set of delta peaks, each one located at the mass \(m_{n}\) of a state. When the meson is inside a thermal medium, at a non-zero temperature, the change in the spectral function is a broadening of the peaks. These peaks acquire a finite height and a non-zero width. This broadening effect rises with the temperature and with the excitation level \(n\) and is interpreted as dissociation in the thermal medium. Variations of the spectral function of quarkonia in a thermal medium without rotation can be found in [8; 9; 10; 11]. The same holographic model considered here was used in these references. ### Numerical Results of Spectral Functions Figure 2 shows how bottomonium's spectral function changes with the rotation speed \(\Omega\ell\) at temperature \(T=150\,\text{MeV}\). Figures 3 and 4 do the same for temperatures fixed at \(T=200\,\text{MeV}\) and \(T=250\,\text{MeV}\), respectively. In these charts we multiplied the spectral functions by the inverse of the constants that appear in eqs. (39) and (40) in order to represent functions with the same dimension, which can be compared. From these figures one can see that rotation increases the dissociation effect and also that fields with polarization \(v_{1}\) and \(v_{2}\) dissociate slightly faster than the ones with polarization \(v_{\varphi}\). ## VI Quasinormal modes In vacuum, the equations of motion simplify to \[\omega^{2}v+\left(-\ \frac{1}{z}-\phi^{\prime}\right)v^{\prime}+v^{\prime\prime}=0. \tag{43}\] In this case, there is no black hole and, therefore, no infalling wave condition. We determine the normal modes by solving these equations and requiring the field to satisfy the normalization condition \[\int_{0}^{\infty}\frac{R}{z}\mathrm{e}^{-\phi(z)}\,|v(z)|^{2}\,\mathrm{d}z=1. \tag{44}\] It is possible to translate this normalization condition into the Dirichlet condition \[v(\omega,z=0)=0. \tag{45}\] The equations of motion with this condition are solvable only for a discrete set of real values \(\omega_{n}\). These values are the masses of the quarkonium states in vacuum. They are shown in the third column of table 1. At finite temperature, instead of normal modes, we have quasinormal modes. They are the solutions of the equations of motion (19) and (20) that satisfy 1. the infalling wave condition at the horizon, 2. the Dirichlet condition (45). The values \(\omega_{n}\) that satisfy both of these conditions are called quasinormal frequencies, and the fields \(v_{\mu}(\omega_{n},z)\) are called quasinormal modes and represent the meson quasi-states. Just as the value at \(T=0\) of the normal frequency \(\omega_{n}\) is interpreted as the mass of the particle in its state \(n\), the real part of the quasinormal frequency \(\mathrm{Re}(\omega_{n})\), at finite temperature, is interpreted as the thermal mass of the quasiparticle. The imaginary part \(\mathrm{Im}(\omega_{n})\) is related to its degree of dissociation. The larger the absolute value of the imaginary part, the stronger the dissociation. It is interesting to note that the real and imaginary parts of the \(n\)-th quasinormal mode are related to the position and width of the \(n\)-th peak in the spectral function. Therefore, we can interpret a growth in the imaginary part of the quasinormal frequency as an increase in the dissociation effect.
Indeed, at \(T=0\) the width of the spectral function peaks is zero, as is the imaginary part of the frequency \(\omega_{n}\). Also, the limit for \(T\to 0\) of \(\mathrm{Re}(\omega_{n})\) is the mass \(m_{n}\). This discussion for the non-rotating plasma with temperature only is already present in the literature. One can find an application of the tangent model for this case in Refs. [8; 9; 10; 11]. The results of quasinormal frequencies for polarizations \(x^{j}\) and \(\varphi\) as functions of the rotation speed \(\Omega\ell\), for temperature fixed at \(T=200\,\mathrm{MeV}\), are shown in figure 5. From this figure, one sees that the dissociation degree, measured by \(-\mathrm{Im}(\omega)\), rises with the rotation speed \(\Omega\ell\). ## VII Conclusions We analysed in this work how rotation of a quark gluon plasma affects the dissociation of heavy vector mesons that are inside the medium. The motivation for such a study is the fact that non-central heavy ion collisions lead to the formation of a QGP with high angular momentum. So, a description of quarkonium inside the plasma should take rotation into account. We considered, as an initial study, the case of a cylindrical shell of plasma in rotation about the symmetry axis. The real case of the QGP should involve a volume rather than a cylindrical surface, and also possible interactions between different layers of the plasma, which would have different rotational speeds. However, the simple case considered here already provides important, non-trivial information. It is clear from the results obtained here that rotation enhances the dissociation process for heavy vector mesons inside a plasma. It was also found that this effect, caused by rotation, is more intense for heavy vector mesons that have polarization perpendicular to the rotation axis. **Acknowledgments:** N.R.F.B. is partially supported by CNPq -- Conselho Nacional de Desenvolvimento Cientifico e Tecnologico grant 307641/2015-5 and by FAPERJ -- Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro. The authors also received support from Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior -- Brasil (CAPES), Finance Code 001.
2309.04528
Fortifying gravitational-wave tests of general relativity against astrophysical assumptions
Most tests of general relativity with gravitational-wave observations rely on inferring the degree to which a signal deviates from general relativity in conjunction with the astrophysical parameters of its source, such as the component masses and spins of a compact binary. Due to features of the signal, measurements of these deviations are often highly correlated with the properties of astrophysical sources. As a consequence, prior assumptions about astrophysical parameters will generally affect the inferred magnitude of the deviations. Incorporating information about the underlying astrophysical population is necessary to avoid biases in the inference of deviations from general relativity. Current tests assume that the astrophysical population follows an unrealistic fiducial prior chosen to ease sampling of the posterior -- for example, a prior flat in component masses -- which is inconsistent with both astrophysical expectations and the distribution inferred from observations. We propose a framework for fortifying tests of general relativity by simultaneously inferring the astrophysical population using a catalog of detections. Although this method applies broadly, we demonstrate it concretely on massive graviton constraints and parameterized tests of deviations to the post-Newtonian phase coefficients. Using observations from LIGO-Virgo-KAGRA's third observing run, we show that concurrent inference of the astrophysical distribution strengthens constraints and improves overall consistency with general relativity. We provide updated constraints on deviations from the theory, finding that, upon modeling the astrophysical population, the 90\%-credible upper limit on the mass of the graviton improves by $25\%$ to $m_g \leq 9.6 \times 10^{-24}\, \mathrm{eV}/c^2$ and the inferred population-level post-Newtonian deviations move ${\sim} 0.4 \sigma$ closer to zero.
Ethan Payne, Maximiliano Isi, Katerina Chatziioannou, Will M. Farr
2023-09-08T18:00:03Z
http://arxiv.org/abs/2309.04528v2
# Fortifying gravitational-wave tests of general relativity against astrophysical assumptions ###### Abstract Most tests of general relativity with gravitational-wave observations rely on inferring the degree to which a signal deviates from general relativity in conjunction with the astrophysical parameters of its source, such as the component masses and spins of a compact binary. Due to features of the signal, measurements of these deviations are often highly correlated with the properties of astrophysical sources. As a consequence, prior assumptions about astrophysical parameters will generally affect the inferred magnitude of the deviations. Incorporating information about the underlying astrophysical population is necessary to avoid biases in the inference of deviations from general relativity. Current tests assume that the astrophysical population follows an unrealistic fiducial prior chosen to ease sampling of the posterior--for example, a prior flat in component masses--which is inconsistent with both astrophysical expectations and the distribution inferred from observations. We propose a framework for fortifying tests of general relativity by simultaneously inferring the astrophysical population using a catalog of detections. Although this method applies broadly, we demonstrate it concretely on massive graviton constraints and parameterized tests of deviations to the post-Newtonian phase coefficients. Using observations from LIGO-Virgo-KAGRA's third observing run, we show that concurrent inference of the astrophysical distribution strengthens constraints and improves overall consistency with general relativity. We provide updated constraints on deviations from the theory, finding that, upon modeling the astrophysical population, the 90%-credible upper limit on the mass of the graviton improves by 25% to \(m_{g}\leq 9.6\times 10^{-24}\,\mathrm{eV}/c^{2}\) and the inferred population-level post-Newtonian deviations move \(\sim\)\(0.4\sigma\) closer to zero. ## I Motivation Gravitational-wave observations from compact binary mergers have provided a unique laboratory to test Einstein's theory of gravity in the strong-field regime [1, 2, 3, 4, 5, 6, 7]. These individual detections by the Advanced LIGO [8] and Advanced Virgo [9] detectors allow for various tests--such as inspiral-merger-ringdown consistency [10, 11], parameterized inspiral deviations [12, 13, 14], gravitational-wave dispersion [15, 16], birefringence [17, 18] and nontensorial polarizations [19, 20, 21, 22, 23], among many more; see Ref. [7] for recent results--to both target specific properties of general relativity (GR) as well as broadly explore its consistency with observations. Beyond analyzing events individually, the ensemble of detections can be analyzed collectively to study the possibility of deviations from GR at the population level [6, 24, 25, 7]. Hierarchical population tests rely on inferring the distribution of deviation parameters across all events and confirming that it is consistent with a globally vanishing deviation [24, 26, 27]. In this study we explore the systematic impact of astrophysical population assumptions on these studies, show that they already come into play for current catalogs due to the increasing number of detections, and offer a solution under the framework of hierarchical population modeling.
In inferences about deviations from GR, there are strong likelihood-level correlations between the deviation parameters and the astrophysical parameters of the source, such as the masses and spins of compact binaries [1, 28, 29]. Therefore, any inference of deviations from GR in signals from black hole coalescences will be affected by assumptions about the distribution of binary black-hole masses and spins in the Universe--otherwise known as the astrophysical population distribution [30]. This is true at both the individual-event and catalog levels, regardless of the specific assumptions made in combining deviation parameters across events, whether the analysis is hierarchical or not. Even when astrophysical parameters do not explicitly appear in the catalog-level test of GR, assumptions about these parameters are implicitly encoded in the individual-event deviation posteriors through the prior. As the catalog of gravitational-wave observations grows and the precision of the measurements improves, these systematic effects become more important. In the presence of correlations between deviation and astrophysical parameters, we must simultaneously model the astrophysical population distribution in conjunction with testing GR. By not explicitly doing so, as has been the case in previous tests of GR [1, 2, 3, 4, 5, 6, 7, 24], the astrophysical population is typically implicitly assumed to be uniform in detector-frame masses and uniform in spin magnitude. This fiducial sampling prior is adopted to ensure broad coverage of the sampled parameter-space, and not to represent a realistic astrophysical population. In reality, the primary black-hole mass population more closely follows a decreasing power-law with an excess of sources at \(\sim\)35 \(M_{\odot}\), and preferentially supports low spins [30; 31]. This mismatch can lead to biased inference regarding deviations from GR. Simultaneously modeling the astrophysical and deviation distributions will not eliminate the influence of the former on the latter, but it will ensure that this interplay is informed by the data and not arbitrarily prescribed by analysis settings. While this insight applies to all tests of GR, for concreteness we devote our attention to constraints on the mass of the graviton [15; 16] and deviations in parameterized post-Newtonian (PN) coefficients [32; 33; 34; 12; 35]. A massive graviton would affect the propagation of a gravitational wave over cosmological distances; this leads to a frequency-dependent dephasing of the gravitational wave which is related to the mass of the graviton, \(m_{g}\), and the propagated distance. The PN formalism describes the Fourier-domain phase of an inspiral signal under the stationary phase approximation through an expansion in the orbital velocity of the binary system; each \(k/2\) PN expansion order can then be modified by a deviation parameter, \(\delta\varphi_{k}\), which vanishes in GR. See App. A for further details about both calculations. We focus on these tests as they target the signal inspiral phase, which also primarily informs astrophysical parameters such as masses and spins; we leave other tests [10; 11; 15; 16; 19; 20; 21; 22; 23; 5] to future work. As motivation, Fig. 1 shows how inference on the 0PN coefficient of a real event (GW191216_213338) depends on astrophysical assumptions. This figure compares measurements with (blue) and without (red) a simultaneous measurement of the population of black hole masses and spins (see Sec. II).
The observed binary black-hole population shows a preference for systems with comparable masses; as a consequence of the strong correlation between the 0PN deviation coefficient and the mass ratio of GW191216_213338, this preference then "pulls" the system towards more equal masses and a more negative deviation coefficient. This is a direct manifestation of the fact that tests of GR are contingent on our astrophysical assumptions. Higher PN orders are expected to display similar correlations as in Fig. 1 with these and other parameters. For example, spins are known to be correlated with the coupling constant of dynamical Chern-Simons gravity which modifies the phase at the 2PN order [36; 37; 38; 39]. The remainder of the manuscript focuses on combining information from many observations to simultaneously infer the astrophysical population while testing GR; it is structured as follows. We first introduce our hierarchical analysis framework, as well as astrophysical and GR deviation models, in Sec. II. We then demonstrate the impact of incorporating astrophysical information by constraining the graviton mass and inferring the PN deviation properties with an ensemble of gravitational-wave observations in Sec. III. We analyze events from LIGO-Virgo-KAGRA (LVK)'s third observing run with individual-event results from Ref. [7] (the posterior samples are available in Ref. [45]) -- a subset of the events in GWTC-3 [46]. The simultaneous modeling of the astrophysical population while testing GR tightens the graviton mass upper limit by 25%, and improves consistency with GR on the PN coefficients by \(\sim\)0.4\(\sigma\), when using a modified SEOBNRv4 waveform [41; 42; 43; 44; 14]. Finally, we conclude in Sec. IV, where we summarize the case for jointly modeling the astrophysical population when testing GR in order to avoid biases and hidden assumptions, and comment on how the same is true for gravitational-wave studies of cosmology or nuclear matter. ## II Population analyses In this section, we introduce the fundamentals of inferring a population distribution from individual observations and discuss the population models we employ. We also outline the implementation and importance of observational selection effects in accounting for the events used within the analysis. Figure 1: Posterior distributions for the 0PN deviation coefficient \(\delta\varphi_{0}\), detector-frame chirp mass \(\mathcal{M}(1+z)\), and symmetric mass ratio \(\eta\) for the gravitational-wave event GW191216_213338 [6; 40], as inferred by a modified SEOBNRv4 waveform [41; 42; 43; 44; 14]. Posteriors are conditioned on two different astrophysical assumptions: the broad prior used during parameter estimation (red), and the astrophysical population inferred by the data using the model in Sec. II.2 (blue). The black dashed curves show the expected correlation (App. B). Due to the correlations between astrophysical and deviation parameters, different astrophysical populations lead to different posteriors for \(\delta\varphi_{0}\). ### Preliminaries We infer the astrophysical population distribution and deviations from GR (see Refs. [47; 48; 49] for a discussion of hierarchical inference in the context of gravitational-wave astronomy). This framework has already been extensively applied to tests of GR and astrophysical population inference separately [6; 7; 24; 25; 30; 31; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66].
Here we focus on combining both methods to jointly infer the astrophysical population while testing GR. Our approach is based on a _population likelihood_, \(p(\{d\}|\Lambda)\), for the ensemble of observations, \(\{d\}\), given population hyperparameters, \(\Lambda=\{\Lambda_{\rm astro},\,\Lambda_{\rm nGR}\}\). We separate the hyperparameters into the parameters describing the astrophysical population distribution, \(\Lambda_{\rm astro}\), and parameters describing the deviation to GR, \(\Lambda_{\rm nGR}\). The hyperparameters encode the shape of the population distribution, \(\pi(\theta|\Lambda)\), where \(\theta\) are parameters of a single event; we describe our population models in the following subsections. This hierarchical approach allows us to test GR while concurrently inferring the astrophysical population from the data. Given the likelihoods of individual events, \(p(d_{i}|\theta_{i})\), the population likelihood is \[p(\{d\}|\Lambda)=\frac{1}{\xi(\Lambda)^{N}}\prod_{i=1}^{N}\int\mathrm{d}\theta _{i}\,p(d_{i}|\theta_{i})\,\pi(\theta_{i}|\Lambda)\,, \tag{1}\] where \(d_{i}\) and \(\theta_{i}\) are respectively the data and parameters for the \(i\)th event, and \(\{d\}\) is the collection of data for the ensemble of \(N\) observations1. We address the technical aspects of the likelihood calculation in App. C. Footnote 1: Equation (1) assumes a prior on the rate of observations as \(\pi(R)\propto 1/R\), which was analytically marginalized [62]. In Eq. (1), \(\xi(\Lambda)\) is the detectable fraction of observations given a set of population hyperparameters and accounts for selection biases [47]. It is defined as \[\xi(\Lambda)=\int\mathrm{d}\theta\,p_{\rm det}(\theta)\,\pi(\theta|\Lambda)\,. \tag{2}\] Here \(p_{\rm det}(\theta)\) is the probability of detecting a binary black-hole system with parameters \(\theta\). The selection factor in Eq. (2) accounts for both the intrinsic selection bias of a gravitational-wave detector (e.g., heavier binaries are more detectable) and the selection thresholds used when deciding which gravitational-wave events to analyze. The detectable fraction can also be framed as a "normalizing factor," which relaxes the need for normalizable population distributions (so long as the integrals in Eqs. 1 and 2 are finite) [67]. This correction will become important in Sec. II.3 when discussing the selection criteria for events to be included in the analysis. In theory, the selection factor should account for the effect of both astrophysical and deviation parameters. However, we ignore the latter here; its effect is the subject of ongoing research [68]. For the former, we compute the detectable fraction, \(\xi(\Lambda)\), from a set of recovered injections, \[\xi(\Lambda)=\frac{1}{N_{\rm inj}}\sum_{j=1}^{N_{\rm rec}}\frac{\pi(\theta_{j} |\Lambda)}{\pi_{\rm draw}(\theta_{j})}\,, \tag{3}\] where \(N_{\rm inj}\) is the number of injected signals, \(N_{\rm rec}\) is the number of recovered signals, and \(\pi_{\rm draw}(\theta_{j})\) is the distribution from which the injected signals were drawn (for more details see Refs. [30; 31; 47; 48; 49; 50]). The subset of injected signals that are recovered is determined by the particular thresholds used to determine which gravitational-wave observations to use within the hierarchical analysis. To avoid biases, the criteria on the threshold for the detectable fraction calculation must match that of the observed signals. We address the specifics of the relevant criteria for our analysis in Sec. II.3.
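As an illustration of Eq. (3), the following sketch estimates \(\xi(\Lambda)\) by reweighting found injections; the draw distribution, detection criterion and population model are toy choices for a single parameter (the primary mass), not the analysis settings used in this work.

```python
import numpy as np

# Toy illustration of eq. (3) (hypothetical settings): injections drawn from
# p_draw(m1) proportional to m1^-2 on [5, 100] M_sun, "found" if m1 > 20,
# reweighted to a normalised population power law of slope alpha on [5, inf).
rng = np.random.default_rng(42)
n_inj, m_min, m_max = 100_000, 5.0, 100.0

u = rng.uniform(size=n_inj)
m1 = m_min / (1.0 - u * (1.0 - m_min / m_max))        # inverse-CDF draws
p_draw = (m_min / (1.0 - m_min / m_max)) * m1**-2     # normalised draw density
found = m1 > 20.0                                     # toy detection criterion

def xi_of_lambda(alpha):
    """Eq. (3): (1/N_inj) * sum over found injections of pi/pi_draw."""
    pop = (alpha - 1.0) * m_min**(alpha - 1.0) * m1[found]**(-alpha)
    return np.sum(pop / p_draw[found]) / n_inj

# analytic value over the injected range: (5/20)^2 - (5/100)^2 = 0.060
print("xi(alpha=3) ~", xi_of_lambda(3.0))
```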
Finally, Eq. (1) explicitly shows the need for jointly modeling the astrophysical population when testing GR. While the astrophysical population may be separable from the deviation distribution so that \(\pi(\theta|\Lambda)=\pi(\theta_{\rm astro}|\Lambda_{\rm astro})\,\pi(\theta_{\rm nGR}|\Lambda_{\rm nGR})\), this factorization cannot be undertaken for individual event likelihoods, as the deviations are often correlated with astrophysics (see Fig. 1), i.e. \(p(d_{i}|\theta_{i})\neq p(d_{i}|\theta_{i,\rm nGR})\,p(d_{i}|\theta_{i,\rm astro})\). Therefore, the integrals of Eq. (1) do not separate and tests of GR cannot be undertaken in isolation from the astrophysics. From the hyperposterior distribution on the population parameters, we can construct the individual event population-informed posteriors following Refs. [69; 70; 71] (and references therein). Such distributions represent our best inference about the properties of a given event in the context of the entire catalog of observed signals. These calculations are subtle, as they avoid "double-counting" the gravitational-wave events that are also used to infer the population distribution. ### Population models In this subsection, we outline the population models for both the GR deviations and the astrophysical population. While many astrophysical population models have been proposed [30; 31; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66] as a product of the increasing number of observations [30; 46], in this work we restrict ourselves to standard parameterized models motivated by previous analyses. #### ii.2.1 GR deviation population models There are two typical approaches to combining posteriors on GR deviation parameters obtained from different gravitational-wave observations, each stemming from different assumptions behind the deviations (see, e.g., discussions in [6; 7]). The first, more general approach is to assume that the population describing deviations from GR is, to the lowest order, a Gaussian distribution with a mean, \(\mu\), and standard deviation, \(\sigma\) [24; 26]. In the limit that all observations are consistent with GR, \((\mu,\sigma)\to(0,0)\) and the inferred distribution approaches a Dirac delta function at the origin. Since a Gaussian distribution encapsulates the lowest order moments of more complicated distributions, given enough events any deviation from a delta function at the origin will be identified as a violation of GR, even if the exact shape of the deviation distribution is not captured by a Gaussian [24; 27]. This approach is now routinely applied to post-Newtonian deviation tests, inspiral-merger-ringdown consistency tests and ringdown analyses [6; 7; 24], but it can be naturally extended to any analysis that recovers GR in the limit of some vanishing parameter. This method provides a null test in cases where the exact nature of the deviation is unknown. The second approach assumes all observations share the same value of the deviation parameter [5; 12; 13; 14; 44; 72; 73; 74; 75; 76]. This is the limiting case of the aforementioned Gaussian model when \(\sigma\to 0\). This model (in the absence of astrophysical information) is equivalent to simply multiplying the marginal likelihoods of the deviation parameter obtained from the individual events.
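The two approaches can be contrasted in a short sketch (our naming; `per_event_dphi_samples` is an assumed list of posterior-sample arrays for a single deviation parameter, obtained under a flat prior):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# KDEs standing in for the per-event marginal likelihoods of one deviation
# parameter (posterior samples under a flat prior are assumed given)
kdes = [gaussian_kde(s) for s in per_event_dphi_samples]

def log_like_gaussian_population(mu, sigma, n_grid=401):
    """First approach: integrate each event against N[mu, sigma^2]."""
    x = np.linspace(mu - 6.0 * sigma, mu + 6.0 * sigma, n_grid)
    pop = norm.pdf(x, mu, sigma)
    return sum(np.log(np.trapz(kde(x) * pop, x)) for kde in kdes)

def log_like_shared_value(dphi):
    """Second approach (the sigma -> 0 limit): one shared value, so the
    combined likelihood is the product of the marginal likelihoods."""
    return sum(np.log(kde(dphi)[0]) for kde in kdes)
```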
The assumption of a shared parameter is only suitable in the context of specific theories or models, in which case the expected degree of deviation for each event can be predicted exactly as a function of system-specific parameters (e.g., BH masses and spins) and universal, theory-specific parameters (e.g., coupling constants), the second of which can be measured jointly from a catalog of detections by multiplying likelihoods. In practice, the lack of complete waveform models beyond GR means that this approach has so far only been well-suited for measurements such as the mass of the graviton, and features of the propagation of gravitational waves whose observational signatures are independent of specific source properties by construction [5; 6; 7]. #### ii.2.2 Astrophysical population models Following Refs. [30; 31; 50], we model the primary black-hole mass (\(m_{1}\)) distribution as a power law whose slope is given by an index \(\alpha\), with a sharp cut-off governed by the minimum mass, \(m_{\rm min}\), and a higher-mass Gaussian peak, \[\pi(m_{1}|\Lambda)=(1-f_{\rm peak})\,\mathcal{P}[\alpha,m_{\rm min}](m_{1})+f_{\rm peak}\,\mathcal{N}[\mu_{\rm peak},\sigma_{\rm peak}^{2}](m_{1})\,. \tag{4}\] Here, \(f_{\rm peak}\) is the fraction of binaries in the Gaussian peak, the power law is given by \[\mathcal{P}[\alpha,m_{\rm min}](m_{1})\propto\begin{cases}m_{1}^{-\alpha},&m_{1}\geq m_{\rm min}\\ 0,&m_{1}<m_{\rm min}\,,\end{cases} \tag{5}\] and \(\mathcal{N}[\mu,\sigma^{2}](x)\) is the probability density function for a Gaussian with mean \(\mu\) and variance \(\sigma^{2}\). We fix \(m_{\rm min}=5\,M_{\odot}\) for simplicity. Unlike other studies [30; 31; 53], we do not attempt to resolve additional structure in the Gaussian peak, as higher-mass features become unresolvable for the light binary systems that provide constraints on PN coefficients (see Sec. II.3). We parameterize the distribution of mass ratios, \(q\equiv m_{2}/m_{1}\), as a conditional power law, with index \(\beta\), and a sharp cut-off imposed by \(m_{\rm min}\), such that \[\pi(q|m_{1};\Lambda)\propto\begin{cases}q^{\beta},&1\geq q\geq m_{\rm min}/m_{1}\\ 0,&q<m_{\rm min}/m_{1}\,.\end{cases} \tag{6}\] Here \(\beta\) can take any value without leading to a singularity due to the lower bound on the mass ratio. We adopt a truncated Gaussian population model for the component spins with a mean, \(\mu_{\chi}\), and standard deviation, \(\sigma_{\chi}\), bounded between zero and one, assuming both spins are drawn independently from the same population distribution. This differs from the standard Beta distribution utilized in many recent analyses [30; 31; 50; 54; 77], as it allows for non-zero support at the edges of the spin-magnitude domain [78]. Furthermore, adopting a Gaussian model allows for efficient computation of the population likelihood via analytic integration (see App. C). For individual-event analyses where the spins are assumed to be aligned with the orbital angular momentum (as is the case for posteriors using a modified SEOBNRv4 waveform [14; 41; 42; 43; 44]), this model treats the measured spin along the orbital angular momentum as the total spin magnitude.
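As a concrete illustration of Eqs. (4)-(6), consider the following sketch (our naming; we assume an upper mass bound \(m_{\rm max}\) for the power-law normalization, with \(\alpha\neq 1\) and \(\beta\neq-1\)):

```python
import numpy as np
from scipy.stats import norm

def pi_m1(m1, alpha, m_min, f_peak, mu_peak, sigma_peak, m_max=100.0):
    """Primary-mass model of Eq. (4): truncated power law plus peak."""
    pl_norm = (m_max**(1.0 - alpha) - m_min**(1.0 - alpha)) / (1.0 - alpha)
    power_law = np.where(m1 >= m_min, m1**(-alpha) / pl_norm, 0.0)
    return (1.0 - f_peak) * power_law + f_peak * norm.pdf(m1, mu_peak, sigma_peak)

def pi_q(q, m1, beta, m_min=5.0):
    """Conditional mass-ratio model of Eq. (6), normalized on [m_min/m1, 1]."""
    q_min = m_min / m1
    q_norm = (1.0 - q_min**(beta + 1.0)) / (beta + 1.0)
    return np.where((q >= q_min) & (q <= 1.0), q**beta / q_norm, 0.0)
```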
For analyses where the individual event inferences also possess information about the spin-precession degrees of freedom, we adopt a model for the spin tilts, \(\cos\theta_{1/2}\), whereby the population is parameterized as a mixture of isotropically distributed and preferentially aligned spins [54], \[\pi(\cos\theta_{1},\cos\theta_{2}|\Lambda)=\frac{f_{\rm iso}}{4}+(1-f_{\rm iso})\,\mathcal{N}[1,\sigma_{\theta}^{2}](\cos\theta_{1})\,\mathcal{N}[1,\sigma_{\theta}^{2}](\cos\theta_{2})\,, \tag{7}\] where \(f_{\rm iso}\) is the mixing fraction, and \(\sigma_{\theta}\) is the standard deviation of the preferentially aligned Gaussian component. This model is only relevant for analyses with precessing spins. In this manuscript, this includes the massive graviton constraints (Sec. III.1) and PN deviation tests with the IMRPhenomPv2 [35; 34; 79] waveform (App. D). Finally, we also adopt a power-law model for the merger-rate density as a function of redshift [62], \[\pi(z|\Lambda)\propto\frac{1}{1+z}\frac{\mathrm{d}V_{c}}{\mathrm{d}z}(1+z)^{\lambda}\,, \tag{8}\] where \(\mathrm{d}V_{c}/\mathrm{d}z\) denotes the evolution of the comoving volume with redshift, and \(\lambda\) is the power-law index. When \(\lambda=0\), the binary black-hole population is uniformly distributed within the source-frame comoving volume. ### Selection criteria and observations We limit ourselves to binary black-hole observations made during LIGO-Virgo-KAGRA's third observing run [46] with false-alarm rates of less than \(10^{-3}\) per year. This mirrors the selection criteria chosen for the tests of GR within Refs. [5, 6, 7], and therefore we need not reanalyze any individual gravitational-wave observations [45, 80]. The events that pass these criteria are listed in Table 4 of Ref. [6] and Table 5 of Ref. [7]. In our analyses, we exclude GW190814 [81], as it is an outlier from the binary black-hole population [31], and GW200115_042309, since it is a black hole-neutron star merger [82]. We then use all events except GW200316_215756 when inferring the mass of the graviton, mirroring the analysis in Ref. [7]. When constraining the PN deviation coefficients, we include the additional criterion that signal-to-noise ratios (SNRs) during the binaries' inspiral must be greater than 6, again mirroring previous analyses [6, 7]. Footnote 2: For comparison, the population analyses presented in Ref. [30] used a false-alarm-rate threshold of 1 per year. A more stringent false-alarm-rate threshold is often adopted when testing GR to avoid contamination from false detections. Footnote 3: GW200316_215756 was excluded from propagation tests within Ref. [7] due to poor sampling convergence.
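Returning to the redshift model of Eq. (8), it can be evaluated with standard tools; a minimal sketch follows (our naming; `Planck15` is a stand-in for the cosmology adopted in the paper, and the redshift range and \(\lambda\) value are illustrative):

```python
import numpy as np
from astropy.cosmology import Planck15  # stand-in cosmology

def pi_z_unnormalized(z, lam):
    """Merger-rate redshift model of Eq. (8), up to normalization."""
    # astropy returns dVc/dz per steradian; multiply by 4*pi for the full sky
    dVc_dz = 4.0 * np.pi * Planck15.differential_comoving_volume(z).value
    return dVc_dz * (1.0 + z)**(lam - 1.0)

# Normalize numerically over the redshift range of interest (assumed here)
z_grid = np.linspace(1e-4, 2.0, 500)
p = pi_z_unnormalized(z_grid, lam=2.7)
p /= np.trapz(p, z_grid)
```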
We use posteriors for the graviton's mass inferred using a modified IMRPhenomPv2 [35, 34, 79] waveform, whereas we use both modified SEOBNRv4 [14, 41, 42, 43, 44] (Sec. III.2) and modified IMRPhenomPv2 (App. D) waveforms when constraining the PN deviation coefficients. Evaluating the detectable fraction, Eq. (3), consistently with the inspiral-SNR selection criterion requires the inspiral SNR of each recovered injection; as this quantity is not recorded directly, we approximate it from the observed events by modeling the ratio between the inspiral and the total SNR as a linear function of detector-frame total mass (Fig. 2), by inferring the slope and offset of the line, as well as the uncertainty on the data points. We assume identical uncertainties on all SNR ratios, and marginalize over this parameter to fit the line. We validate this approximation by computing the detection probability \(p_{\text{det}}(\theta)\) with different draws of the linear fit.
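A rough sketch of such a fit follows (our simplification: the common scatter is fit jointly by maximum likelihood rather than marginalized over, and `total_mass` and `snr_ratio` are assumed inputs):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, total_mass, snr_ratio):
    """Line with slope a and offset b, plus one shared scatter s
    on all inspiral-to-total SNR ratios."""
    a, b, log_s = params
    model = a * total_mass + b
    return -np.sum(norm.logpdf(snr_ratio, loc=model, scale=np.exp(log_s)))

# total_mass: detector-frame total masses of the observed events;
# snr_ratio: their inspiral/total SNR ratios (both assumed given)
result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.5, -2.0]),
                  args=(total_mass, snr_ratio))
slope, offset, log_scatter = result.x
```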
We find that different realizations of the approximation do not change the detection probability, and so we consider this approximation to be sufficiently accurate for our purposes. Future injection campaigns may also opt to compute the inspiral SNR directly. ## III Results In this section we simultaneously infer the astrophysical population while testing GR and quantify the impact of fixing the population distribution to the sampling prior. Throughout, we use the nomenclature "fixed" and "inferred" to refer to whether the analysis uses the fixed sampling prior or infers the distribution from data, respectively. We implement the analyses using NumPyro [84, 85] and JAX [86], leveraging AstroPy [87, 88, 89] and SciPy [90] for additional calculations, and matplotlib [91], ArviZ [92] and corner [93] for plotting purposes. The code for the hierarchical tests is available in Ref. [94]. ### Massive graviton constraints We begin by demonstrating that astrophysical assumptions are crucial even in the simplest scenarios, where a global deviation parameter is shared across events. This is the case for the mass of the graviton, \(m_{g}\) [15, 16] (see App. A.1), for which we produce an updated upper limit by simultaneously inferring the astrophysical distribution. We combine results from individual-event likelihoods under the assumption of a shared deviation parameter as described in Sec. II.2. In practice, we compute this as the limit of a vanishing standard deviation of the hierarchical analysis described in Sec. II. For technical reasons, we assume a uniform prior distribution on \(\log_{10}(m_{g})\) when combining observations, which differs from Refs. [5, 6, 7], which applied a uniform prior on \(m_{g}\) itself; this is to avoid poor convergence when reweighting between individual-event posterior distributions. In the end, we reweight the shared graviton mass inference to a uniform prior to report upper limits on \(m_{g}\). We compare this to results obtained assuming the sampling prior for the astrophysical parameters. The one-dimensional marginal distributions of the shared mass of the graviton are shown in Fig. 3. The inclusion of astrophysical information changes the inferred distribution of the graviton's mass, increasing support for \(m_{g}=0\). When using the sampling prior for the astrophysical population (and thereby assuming the incorrect distribution), the graviton's mass is constrained to be \(m_{g}\leq 1.3\times 10^{-23}\,\text{eV}/c^{2}\) at the 90% level; however, upon inferring the astrophysical population the graviton's mass becomes more constrained, with \(m_{g}\leq 9.6\times 10^{-24}\,\text{eV}/c^{2}\) at the 90% credible level. Under the expectation that GR is correct and \(m_{g}=0\), a tighter constraint is generically expected, as we have included the correct information regarding the astrophysical population. This highlights the effect of unreasonable astrophysical assumptions, which are inconsistent with the observed population, on tests of GR. Footnote 5: This constraint differs from the 90% upper limit of \(1.27\times 10^{-23}\)\(\text{eV}/c^{2}\) calculated in Ref. [7], which is determined by additionally incorporating observations from the first and second LIGO-Virgo-KAGRA observing periods [5, 95]. We do not include these observations due to the ambiguity in the detector network sensitivity during these periods.
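The reweighting step can be sketched as follows (our naming; the samples are assumed to have been drawn under the log-uniform prior):

```python
import numpy as np

def upper_limit_uniform_prior(log10_mg_samples, q=0.90):
    """Reweight samples drawn under a log10-uniform prior on m_g to a
    uniform prior on m_g, then return the q-credible upper limit."""
    mg = 10.0 ** np.asarray(log10_mg_samples)
    # uniform(m_g) / log-uniform(m_g) is proportional to m_g
    weights = mg / mg.sum()
    order = np.argsort(mg)
    cdf = np.cumsum(weights[order])
    return mg[order][np.searchsorted(cdf, q)]
```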
Figure 2: Ratio between the network _maximum a posteriori_ gravitational-wave inspiral and the total SNRs as a function of detector-frame total mass, \(M(1+z)\equiv(m_{1}+m_{2})(1+z)\), for all gravitational-wave observations in the LIGO-Virgo-KAGRA third observing run [6, 7, 46, 83] with a false-alarm rate less than \(10^{-3}/\text{yr}\). The solid blue line is the median best-fit line to the observations, with the band representing the 90%-credible uncertainty. While computing this fit, we also estimate the uncertainty in the individual data points. We use this fit to compute the inspiral SNR for the injections used to estimate the detection probability, \(p_{\text{det}}(\theta)\), as described in Sec. II.3. ### Hierarchical post-Newtonian deviation constraints from SEOBNRv4 We repeat the population analysis, this time measuring the hierarchical PN deviation distribution with a mean, \(\mu_{\rm PN}\), and standard deviation, \(\sigma_{\rm PN}\), for all PN orders. This corresponds to ten separate analyses where only one PN deviation coefficient is allowed to vary. To compare with the default approach (which implicitly assumes flat-in-detector-frame-mass, uniform-mass-ratio, uniform-aligned-spin-magnitude, and uniform-in-comoving-volume redshift distributions), we also fit the GR deviation in isolation under the assumption of the (astrophysically unrealistic) sampling prior [6, 7]. Figure 4 shows the two-dimensional posterior distribution of the deviation hyperparameters for \(-1\) through to \(3.5\) PN orders. The standard results implicitly using the sampling prior are shown in yellow, while the results from the simultaneous modeling of the astrophysical and deviation populations are shown in dark blue. When concurrently modeling the astrophysical distribution, in all PN deviation parameters the inferred mean resides closer to zero, i.e., the expected value from GR, while there is no clear trend in \(\sigma_{\rm PN}\). Overall, \((\mu_{\rm PN},\sigma_{\rm PN})=(0,0)\) is retained with greater significance for almost all PN orders. We quantify this improvement by comparing the two-dimensional credible level at which the expected GR value, \((\mu_{\rm PN},\sigma_{\rm PN})=(0,0)\), resides in Fig. 5. A lower value for the credible region implies that the value of hyperparameters expected from GR resides closer to the bulk of the distribution. In all but one PN order, jointly inferring the astrophysical and PN deviation distributions moves the inferred distribution to be more consistent with GR. For the \(0.5\)PN deviation coefficient, \(\delta\varphi_{1}\), there is little change in the credible level at which GR is recovered. Generally, inference of the astrophysical population allows our inferences of GR deviations to be more consistent with GR, with an average improvement of \(0.4\sigma\). Footnote 6: This “displacement” is the quantile, \(\mathcal{Q}_{\rm GR}\), reported in Refs. [6, 7] as (displacement)\({}^{2}=-2\ln(1-\mathcal{Q}_{\rm GR})\,\sigma^{2}\). The quantile is computed by integrating over all regions of the hyperposterior distribution which are at a higher probability than \((\mu_{\rm PN},\sigma_{\rm PN})=(0,0)\). We report values in terms of the standard deviation in two dimensions: \(1\sigma\) and \(2\sigma\) correspond to \(\sim\)\(39.3\%\) and \(\sim\)\(86.5\%\) credibility, respectively. To shed further light on the interaction between the GR and astrophysics parameters, we focus on two specific deviation parameters.
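The displacement statistic of footnote 6 can be estimated directly from hyperposterior samples; a rough sketch (our naming; boundary effects at \(\sigma_{\rm PN}=0\) are ignored):

```python
import numpy as np
from scipy.stats import gaussian_kde

def gr_displacement(mu_samples, sigma_samples):
    """Credible level of (mu_PN, sigma_PN) = (0, 0), converted to a
    two-dimensional sigma displacement as in footnote 6."""
    pts = np.vstack([mu_samples, sigma_samples])
    kde = gaussian_kde(pts)
    dens = kde(pts)                     # density at each posterior sample
    dens_gr = kde(np.zeros((2, 1)))[0]  # density at the GR point
    q_gr = np.mean(dens > dens_gr)      # mass at higher density than GR
    return np.sqrt(-2.0 * np.log(1.0 - q_gr))
```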
In particular, we draw attention to the 3PN coefficient (which shows the largest tightening of the supported hyperparameter space in Fig. 4) and the 0PN coefficient (where the PN deviation is most inconsistent with GR in Fig. 5). #### iii.2.1 Example: 3PN deviation coefficient, \(\delta\varphi_{6}\) To understand the origin of the improved measurement for \(\delta\varphi_{6}\) when modeling astrophysics in Fig. 4, we show an expanded corner plot in Fig. 6 with an additional subset of the hyperparameter posterior distributions. The top left corner reproduces the corresponding panel in Fig. 4, wherein the yellow posterior distribution is obtained under the assumption of the astrophysical population given by the sampling priors, while the dark blue is obtained by simultaneously inferring the astrophysical-population and the GR deviation parameters. Additionally, we use the same set of individual-event posterior samples to _separately_ infer the astrophysical population independently of the PN deviation parameters, which amounts to assuming a uniform distribution of deviations across events (solid green). This differs from standard astrophysical population inference, which assumes that GR is correct _a priori_ and thus starts from individual-event posteriors conditioned on \(\delta\varphi=0\) [30, 31, 50]. Finally, we also compute the astrophysical population under the assumption that GR is correct, \((\mu_{\rm PN},\sigma_{\rm PN})=(0,0)\) (dashed green). The result assuming GR is correct is computed by fixing \((\mu_{\rm PN},\sigma_{\rm PN})\rightarrow(0,0)\) to ensure equivalent samples are used between analyses, and is consistent with the usual population inference modulo model choices at the individual-event and population levels [30, 31, 50]. Figure 3: Marginal one-dimensional posterior distributions for the mass of a massive graviton. In practice, we compute the shared value of the graviton mass by assuming a shared deviation parameter \(\log_{10}(m_{g}c^{2}/\text{eV})\) and then reweighting to a uniform graviton mass prior. The dashed lines correspond to the 90% upper limits from the two analyses. We compare the result when astrophysical information is not included, equivalent to multiplying individual event likelihood functions (yellow), to also modeling the astrophysical population (dark blue). The result shifts towards smaller values of \(m_{g}\) if simultaneously modelling the astrophysical population and the graviton’s mass. From the two-dimensional marginal distributions, the most apparent feature is that inferring the astrophysical population under the assumption of a broad uniform GR deviation population (shown in solid green) leads to inferences consistent with broad spin populations (large \(\sigma_{\chi_{z}}\)) and populations favoring uneven mass ratios (\(\beta<0\)). This can be straightforwardly explained by the presence of correlated structure between \(\delta\varphi_{6}\), mass ratio, and the component spins at the individual-event level. To demonstrate this, Fig. 7 shows four different posteriors for GW191216_213338 under different priors. The four distributions shown are the posterior obtained with the sampling priors (red), the one informed by the GR deviation population only analysis (yellow), the one informed by the astrophysical population only analysis (green), and the one informed by the jointly-inferred GR deviation and astrophysical populations (blue). The posteriors which involve information from inferred populations are computed following Ref.
[69], and do not double-count the data from GW191216_213338, as discussed in Sec. II.1. Under the sampling astrophysical prior, posteriors exhibit a low-\(q\), high-\(\chi_{1}\) mode. Since the inferred astrophysical population is inconsistent with low mass ratios and high spin magnitudes, the astrophysical-population-informed posteriors have reduced support for unequal masses (compare the red contour to the green one). Additionally incorporating the GR deviation information (blue), the population-informed posterior further reduces support for high-spinning systems. The similarity of the results under the sampling prior (red) with those in which only the GR deviation population is inferred (yellow) suggests that inferring small GR deviations is on its own not enough to significantly affect the inference of the astrophysical parameters in this case. The tightening of the \(\sigma_{\chi_{z}}\) hyperposterior distribution (i.e., inferring a narrower spin population) when jointly inferring the GR deviation and astrophysical populations is precisely what we observe at the population level in Fig. 6 comparing the dark blue and green contours. Figure 4: Two-dimensional marginal posterior distributions for the hyperparameters of the Gaussian PN deviation distribution informed by the 20 events in the third LIGO-Virgo-KAGRA observing run passing the selection criteria, analysed with a modified SEOBNRv4 [41; 42; 43; 44; 14] waveform. The contours indicate the 50% and 90% credible regions. Each panel corresponds to a separate analysis where the coefficient varied was at a different PN order. The analysis was undertaken with an implicitly assumed, astrophysically-unrealistic population (yellow), and a model which simultaneously infers the astrophysical population model (dark blue). Modelling both the astrophysical population and the PN deviation population systematically shifts the inferred mean, \(\mu_{\rm PN}\), closer to zero. Figure 5: Displacement of the deviation parameter distribution from GR for each PN deviation coefficient. The displacement corresponds to the credible levels at which the hyperparameter values corresponding to GR, \((\mu_{\rm PN},\sigma_{\rm PN})=(0,0)\), reside for two different models as shown in Fig. 4. This quantity is indicative of the relative position of the posterior to the GR value. Incorporating the astrophysical population as well as the hierarchical model for the PN deviation leads to an inferred result more consistent with GR for most cases. Figure 6: Marginal one- and two-dimensional posterior distributions for the \(\delta\varphi_{6}\) PN deviation and a subset of astrophysical population hyperparameters. Contours correspond to the 50% and 90% credible regions. Results from four analyses are shown -- population inference using the PN deviation population only with the "default" sampling prior astrophysical population (yellow), astrophysical population only (green), astrophysical population under the assumption that GR is correct (dashed green), and the joint analysis inferring the post-Newtonian deviation and astrophysical populations simultaneously (dark blue). No strong correlations exist between either the mean or standard deviation of the deviation Gaussian and astrophysical population parameters. The starkest difference is that inferring the population when the PN deviation population is ignored leads to broad spin magnitude populations.
Additionally, when enforcing that \(\delta\varphi_{6}=0\) for all events (dashed green), we no longer recover support for broad spin populations. Interestingly, the astrophysical population inferred jointly with the GR deviation population is very similar to the result obtained when fixing \(\delta\varphi_{6}=0\). This illustrates that, if we allow the model to infer that the scale of GR deviations is small, we will recover similar inferences overall as if we had fixed \(\delta\varphi=0\) _a priori_: we are learning _both_ that spins are small _and_ that any GR deviation must be small at this PN order. Conversely, an assumption of a broad GR deviation population leads to unrealistic astrophysical populations that account for the far-fetched astrophysical systems such analyses allow. We can also use this example to understand why inferring the deviation population in the absence of astrophysical modelling leads to a different deviation population with a larger inferred mean. Figure 7 shows that \(q\) and \(\delta\varphi_{6}\) are correlated at the individual-event level, and therefore a broader \(q\) distribution will lend more support to the higher values of \(\delta\varphi_{6}\). This correlation then systematically pulls the mean of the PN deviation distribution to higher values. #### iii.2.2 Example: 0PN deviation coefficient, \(\delta\varphi_{0}\) We now turn to \(\delta\varphi_{0}\), for which the standard analysis with a fixed astrophysical prior finds the least consistency with GR, at the \(2.2\sigma\) credible level (yellow circle for \(\delta\varphi_{0}\) in Fig. 5), driven by a displacement away from \(\mu_{\rm PN}=0\) (Fig. 4). Since this parameter is strongly correlated with the chirp mass and mass ratio (Fig. 1), we expect improvements when jointly modeling the astrophysical and deviation distributions; indeed that is the case, with GR recovered at the \(1.6\sigma\) level (blue circle in Fig. 5). This analysis infers a \(\sigma_{\rm PN}\) distribution that peaks slightly away from zero. We can understand this behavior with Fig. 8, where we plot a subset of the two-dimensional marginal population posterior distributions in the same color scheme as Fig. 6. The structure of the PN deviation distribution is directly correlated with the mass ratio power-law index, \(\beta\): steeper power-laws correspond to more variance in the GR deviation (larger \(\beta\), larger \(\sigma_{\rm PN}\)). This is also manifested in the fact that when the PN deviation is assumed to be uniformly distributed (solid green), the astrophysical inference prefers steeper mass ratio power-laws (larger \(\beta\)), and that the analysis with deviations fixed to zero (dashed green) leads to a shallower slope (\(\beta\lesssim 6\)). There is also a correlation between \(\sigma_{\rm PN}\) and the width of the spin distribution, \(\sigma_{\chi_{z}}\), by which a narrower spin distribution demands a greater spread in deviation parameters within the population. Such correlations highlight precisely why we need to account for the astrophysical population when testing GR. If we assume a particular, fixed model for the astrophysical population, these hyperparameter correlations will not be captured in the marginal posterior for the GR deviation population. The analysis assuming the sampling prior for the astrophysical population (yellow) infers a value of \(\sigma_{\rm PN}\) which peaks at zero. Among other hyperparameters, the sampling prior corresponds to a uniform (\(\beta=0\)) mass-ratio distribution.
Fixing the astrophysical population in such a way will lead to the hyperparameter posterior peaking at \(\sigma_{\rm PN}=0\), as seen in Fig. 8. Figure 7: One- and two-dimensional posterior distributions for the 3PN deviation parameter, the mass ratio, and the primary black-hole spin for GW191216_213338 under four different assumptions: broad sampling priors (red), informed by the GR deviation population analysis (yellow), informed by the astrophysical population (green), informed by the joint inference of PN deviation and astrophysical populations (dark blue). Contours indicate the 90% credible region. Evidence for both a low mass ratio and larger primary spins is strongly contingent upon the astrophysical assumptions. Broad priors such as those used while sampling the posterior distribution have significant support for lower mass ratios. Inclusion of information from both the deviation population and the astrophysics leads to an inferred result with both low primary spin and high mass ratio. Figure 8: Similar to Fig. 6, one- and two-dimensional posterior distributions for the \(\delta\varphi_{0}\) deviation and a subset of astrophysical population hyperparameters. A strong correlation is found between the width of the inferred post-Newtonian deviation population and the index of the mass ratio power-law when jointly inferring the deviation and astrophysical population models. There is also a less pronounced correlation between the deviation and spin population standard deviations. In the absence of modelling the astrophysical population, the inferred PN population is pulled to a higher mean with a reduced width. ## IV Conclusions In this study, we have shown the importance of modeling the astrophysical population when testing GR with gravitational waves. Current tests do not explicitly model the astrophysical population, and therefore implicitly treat the prior used for sampling the posterior distribution as the assumed astrophysical population. Due to the presence of correlations between many GR deviations and astrophysical parameters, inappropriate astrophysical population choices will bias the test of GR. Like other sources of systematics, including waveform modeling [96; 97; 98; 99], the severity of this bias increases with the number of detections. We have shown that the effect of this bias is already being felt in the present catalog. This issue can only be fully addressed by simultaneously modelling the astrophysical population in addition to the GR deviations. We demonstrate the effect of inappropriate astrophysical models using constraints on the graviton's mass and tests of PN deviations as concrete examples. We show that jointly modeling the astrophysical population distribution while testing GR leads to results more consistent with GR. Furthermore, for some deviations at various PN orders there are correlations between hyperparameters governing the astrophysical and deviation populations. The impact of the astrophysical distribution is not just important for these parameters and these hierarchical models: any test of GR should accurately account for the astrophysical population. In fact, this problem is not unique to tests of GR -- attempts to infer cosmological properties [100] or the equation of state of dense nuclear matter [101] are also impacted by these same considerations. We can generically understand the impact of folding in the astrophysical population as follows.
The standard sampling prior is chosen to broadly cover the parameter range of interest, and not to accurately represent the true astrophysical population. The actual population distribution will then typically provide support on a more narrow region of parameter space than the sampling prior. As a result, population-informed posteriors will not only avoid systematic biases but will also provide more stringent constraints on GR due to the additional information from the associated narrower population. This posterior shrinkage is illustrated in Fig. 9, which shows the 0PN deviation parameter and detector frame chirp mass for the 20 events considered in our study (Table 1). The three sets of distributions correspond to the posteriors under different priors: fixed sampling priors (light red), fixed astrophysical prior and an inferred PN deviation population (yellow), as well as the case where both PN-deviation and astrophysics distributions are inferred (blue). As more information about the GR deviation distribution is included, the inferred posterior of 0PN deviation parameter and the detector-frame chirp mass is more constrained. The posteriors are then constrained further still as additional information regarding the astrophysical population is included. There are a number of directions in which to extend our work. The first would be to account for selection effects on the hyperparameters of the GR deviation distribution; this is to be addressed in upcoming work [68]. Additionally, here we have assumed a strongly parameterized model for the astrophysical population. As the number of events used with these tests increases, and subtle features in the astrophysical population reveal themselves, we will likely need more flexible models [63; 64; 65; 66] to further avoid biases from misspecified population models [102; 103; 104]. Furthermore, in the case of PN coefficients, one would ideally constrain all orders simultaneously, in addition to the astrophysical parameters [105; 1]. Concurrently modeling the astrophysical population when testing GR is inevitable. Models that do not include a parameterized astrophysical population are implicitly assuming the sampling prior as the fixed population model. Such an assumption may induce systematic biases, cause false detections of GR violations, or incorrectly claim a stronger confirmation of GR than is warranted by the data. Moreover, even when accounting for the astrophysical population, correlations between GR deviation and astrophysical hyperparameters suggest that a true deviation could be absorbed into an unphysical inferred astrophysical population, a case that can only be noticed in studying the hyperposterior relating astrophysical to deviation parameters. Hierarchically modeling the astrophysical population while testing GR provides the solution to the implicit bias of assuming a fixed astrophysical population, and allows us to explore correlations between astrophysical parameters and deviations from GR, with fewer hidden assumptions. ## V Acknowledgements We thank Jacob Golomb and Alan Weinstein for insightful discussions, and Carl-Johan Haster for useful comments on the manuscript. Computing resources were provided by the Flatiron Institute. The Flatiron Institute is funded by the Simons Foundation. EP was supported by NSF Grant PHY-1764464. KC was supported by NSF Grant PHY-2110111. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. 
This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center ([https://www.gw-openscience.org](https://www.gw-openscience.org)), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This manuscript carries LIGO Document Number #PP2300292. Figure 9: Marginal two-dimensional posterior distributions for the 0PN deviation coefficient and the detector-frame chirp mass for the events analyzed under the broad prior assumptions (light red), informed by the PN deviation population only (yellow), and informed by the jointly inferred deviation and astrophysical populations (dark blue). Contours indicate the 90% credible regions. This result demonstrates that as additional information is incorporated into the population distribution, more stringent constraints on the deviation parameters are placed on an individual event level. In the case demonstrated here, this pulls the inferred value towards \(\delta\varphi=0\) for all events. ## Appendix A Formulation of parameterized tests of general relativity In this appendix we outline the calculations required to constrain the graviton's mass (App. A.1) and infer the PN deviation parameters (App. A.2). ### Massive graviton measurements The impact of a massive graviton on the propagation of gravitational waves has been studied in Refs. [15; 16] and references therein. A graviton with mass \(m_{g}\) modifies the dispersion relation of the gravitational wave. In a cosmological background, \(g_{\mu\nu}\), \[g_{\mu\nu}p^{\mu}p^{\nu}=-m_{g}^{2}\,, \tag{10}\] where \(p^{\mu}\) is the 4-momentum of the graviton. This leads to a dephasing of the gravitational wave, \(\delta\Phi(f)\), that scales with the distance over which the signal propagates, \[\delta\Phi(f)=-\frac{\pi(1+z)D_{L}^{2}m_{g}^{2}c^{3}}{D_{0}h^{2}}f^{-1}\,, \tag{11}\] where \(D_{L}\) is the luminosity distance, \(h\) is Planck's constant, and \[D_{0}=\frac{c(1+z)}{H_{0}}\int_{0}^{z}\mathrm{d}z^{\prime}\,\frac{(1+z^{\prime})^{-2}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+\Omega_{\Lambda}}}\,. \tag{12}\] Here, \(H_{0}=67.9\,\mathrm{km\ s^{-1}\,Mpc^{-1}}\) is the Hubble constant, and \(\Omega_{m}=0.3065\) and \(\Omega_{\Lambda}=0.6935\) are the matter and dark energy density parameters, respectively, adopting the values used in previous analyses [7; 46; 106]. ### Post-Newtonian deviation tests Current parameterized PN tests are constructed by single-parameter modifications to the post-Newtonian description of the inspiral gravitational-wave phase in the frequency domain. This is given by [34; 107] \[\Phi(f)=2\pi ft_{c}-\phi_{c}-\frac{\pi}{4}+\frac{3}{128}\sum_{k=0}^{7}\frac{1}{\eta^{k/5}}\Big{(}\varphi_{k}+\varphi_{k,l}\ln\tilde{f}\Big{)}\tilde{f}^{(k-5)/3}\,. \tag{13}\]
Here, \(\Phi(f)\) is the frequency-domain gravitational-wave phase under the stationary-phase approximation, \(\tilde{f}=\pi G\mathcal{M}(1+z)f/c^{3}\), where \(\mathcal{M}(1+z)\) is the redshifted chirp mass, \(\mathcal{M}=(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}\) is the source-frame chirp mass, \(\eta=m_{1}m_{2}/M^{2}\) is the symmetric mass ratio, and \(t_{c}\) and \(\phi_{c}\) are the coalescence time and phase of the binary; finally, \(k\) indexes the \(k/2\) PN order, and \(\varphi_{k}\) and \(\varphi_{k,l}\) are the PN coefficients. The logarithmic coefficients, \(\varphi_{k,l}\), only enter at 2.5 and 3.5 PN orders and otherwise vanish [108; 109]. In GR, the coefficients are functions of the intrinsic parameters of the binary, their masses and spins. From this prescription, modifications to GR are incorporated by modifying [12; 13; 32] \[\varphi_{k}\to(1+\delta\varphi_{k})\,\varphi_{k}\,, \tag{14}\] except for \(k\)'s for which \(\varphi_{k}=0\) in GR (\(k=-2,1\)); in these cases, the modification is \(\varphi_{k}\to\delta\varphi_{k}\), and \(\delta\varphi_{k}\) is an absolute deviation [110]. In practice, modifications to the IMRPhenomPv2 [12; 13; 34; 35; 73; 76; 79] and SEOBNRv4 [14; 41; 42; 43; 44] waveforms are computed differently, and the latter is then transformed to the former. For the modified SEOBNRv4 waveform, the deviation is applied as above [14], while IMRPhenomPv2 is modified by applying the deviation only to the nonspinning portion of the PN coefficient [12; 13]. We translate all inferred deviation parameters to the IMRPhenomPv2 deviation parameter \(\delta\varphi_{k}^{\rm IMR}\) for consistency, \[\delta\varphi_{k}^{\rm IMR}=\delta\varphi_{k}\frac{\varphi_{k}}{\varphi_{k}^{\rm NS}}\,, \tag{15}\] where \(\varphi_{k}^{\rm NS}\) is the nonspinning value of the PN coefficient -- calculated by setting the spins to zero for a particular set of compact binary masses. Additionally, care needs to be taken when translating to a uniform prior on \(\delta\varphi_{k}^{\rm IMR}\), as the appropriate Jacobian, \[\frac{\mathrm{d}\delta\varphi_{k}^{\rm IMR}}{\mathrm{d}\delta\varphi_{k}}=\frac{\varphi_{k}}{\varphi_{k}^{\rm NS}}\,, \tag{16}\] is necessary. If the original prior is uniform on \(\delta\varphi_{k}\), then the \(\delta\varphi_{k}^{\rm IMR}\) samples must be weighted by the Jacobian to be effectively translated to a uniform prior. ## Appendix B Computing expected parameter correlations Correlations between GR deviation and astrophysical parameters can be analytically approximated by identifying regions of the parameter space that lead to a similar frequency evolution [28] and signal duration. The dominant correlation is the one between the detector-frame chirp mass, \(\mathcal{M}(1+z)\), and the symmetric mass ratio, \(\eta\). The duration of a gravitational-wave signal is related to the detector-frame chirp mass and some fiducial cut-off frequency [111], \[T\propto\mathcal{M}^{5/3}(1+z)^{5/3}f_{\rm cut}^{-8/3}\,. \tag{17}\] If we relate the final frequency to the innermost stable orbit or any cut-off which scales inversely with the binary's total mass, then \(T\propto\eta^{-8/5}\mathcal{M}^{13/3}(1+z)^{13/3}\). A constant duration then implies \[\mathcal{M}(1+z)\propto\eta^{-24/65}\,. \tag{18}\] Here we have ignored both the contributions of a spin-induced "hang-up" effect [112] and GR deviations.
Correlations between astrophysical parameters and GR deviations can then be computed at lowest order [28] by enforcing that the second-order derivative of the phase evolution as a function of frequency be constant. As an example, for the correlation in Fig. 1, we compare the phase evolution when \(\delta\varphi_{0}=0\) and when varying \(\delta\varphi_{0}\) at the leading PN order, resulting in \[\mathcal{M}_{0}^{-5/3}(1+z_{0})^{-5/3}\sim(1+\delta\varphi)\mathcal{M}^{-5/3}(1+z)^{-5/3}\,. \tag{19}\] Here \(\mathcal{M}_{0}\) and \(z_{0}\) are the values of the chirp mass and redshift when there is no deviation. We find the 0PN deviation coefficient to only be directly correlated with the detector-frame chirp mass, \[\delta\varphi_{0}\sim\left(\frac{\mathcal{M}(1+z)}{\mathcal{M}_{0}(1+z_{0})}\right)^{5/3}-1\,. \tag{20}\] This calculation can be repeated for higher PN orders as well; however, care needs to be taken, as lower PN orders need to be retained when computing higher PN deviation coefficient correlations. ## Appendix C Population likelihood approximation In practice, we carry out single-event parameter estimation with a fiducial sampling prior, \(\pi(\theta)\), before the hierarchical population analysis. We therefore do not possess representations of the individual event likelihoods, \(p(d|\theta)\), but rather samples drawn from the fiducial posterior distribution \(p(\theta|d)\propto p(d|\theta)\,\pi(\theta)\). Therefore, it is common to instead reformulate the integral within Eq. (1) as an average over samples drawn from each event's posterior distribution [47; 48; 49], \[p(\{d\}|\Lambda)\propto\frac{1}{\xi(\Lambda)^{N}}\prod_{i=1}^{N}\frac{1}{M_{i}}\sum_{k=1}^{M_{i}}\frac{\pi(\theta_{i,k}|\Lambda)}{\pi(\theta_{i,k})}\,, \tag{21}\] where \(M_{i}\) is the number of posterior samples for the \(i\)th event. It is possible for this Monte Carlo integration to not converge--particularly if the population distribution \(\pi(\theta|\Lambda)\) is narrower than posterior distributions for individual events [48; 61; 67; 113; 114]. This is particularly important in our scenario, since the inferred population of deviations from GR is typically narrower than marginal measurements from many individual events. This leads to a dearth of samples within the inferred GR deviation population, which subsequently leads to unreliable Monte Carlo integration in Eq. (21). To address this issue, we use Gaussian kernel density estimates to represent the individual-event posteriors in a number of parameters, and simplify the calculation analytically by leveraging Gaussian population models. Dividing the parameters into the subset described by the Gaussian population distributions, \(\theta^{\rm G}\), and the non-Gaussian distributions, \(\theta^{\rm NG}\), we can analytically integrate over the former without resorting to Eq. (21). The Gaussian population parameters are the GR deviation parameter and the binary black-hole spin magnitudes, whereas the black-hole primary mass and mass ratio, redshift, and spin tilts (for the analysis in App. D) are included in the non-Gaussian set of parameters.
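A single factor of Eq. (21) can be evaluated in log space, together with an effective-sample-size diagnostic analogous to Eq. (23) below, to flag non-convergent terms (a sketch with our naming):

```python
import numpy as np
from scipy.special import logsumexp

def per_event_term(log_pop, log_prior):
    """One factor of Eq. (21): the average of pi(theta|Lambda)/pi(theta)
    over an event's posterior samples, plus an effective-sample-size
    diagnostic computed from the same importance weights."""
    log_w = log_pop - log_prior
    log_term = logsumexp(log_w) - np.log(len(log_w))
    n_eff = np.exp(2.0 * logsumexp(log_w) - logsumexp(2.0 * log_w))
    return log_term, n_eff
```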
For the kernel density estimation, we determine the corresponding covariance matrix for each individual event's distribution using Scott's rule [115], \[\Sigma_{BW,i}\approx\frac{\Sigma_{i}}{n_{\rm eff,\,i}^{2/(d+4)}}\,, \tag{22}\] where \(\Sigma_{i}\) is the weighted covariance matrix of the parameters being estimated, \(d\) is the number of Gaussian dimensions, and \(n_{\rm eff}\) is the effective number of samples [116; 117], \[n_{\rm eff,i}=\frac{\left(\sum_{k=1}^{M_{i}}w(\theta_{i,k}^{G})\right)^{2}}{\sum_{k=1}^{M_{i}}w(\theta_{i,k}^{G})^{2}}\,, \tag{23}\] with the weights, \(w(\theta_{i,k}^{G})=1/\pi(\theta_{i,k}^{G})\). Since the integrand in the \(\theta^{\rm G}\)-space is a product of Gaussian distributions, the resulting integral is also a Gaussian [118]. This leads to the straightforward expression for the likelihood function \[p(\{d\}|\Lambda)\propto\frac{1}{\xi(\Lambda)^{N}}\prod_{i=1}^{N}\frac{1}{M_{i}}\sum_{k=1}^{M_{i}}\frac{\pi(\theta_{i,k}^{\rm NG}|\Lambda)}{\pi(\theta_{i,k})}\times\mathcal{N}[\mu(\Lambda),\Sigma_{BW}+\Sigma(\Lambda)](\theta_{i,k}^{\rm G})\,, \tag{24}\] where \(\mu(\Lambda)=(\mu,\mu_{\chi},\mu_{\chi})\) and \(\Sigma(\Lambda)=\text{diag}(\sigma^{2},\sigma_{\chi}^{2},\sigma_{\chi}^{2})\), though more complicated structure can be imposed on the population model. Since this integral is computed analytically, we empirically find improved convergence. ## Appendix D Constraints from IMRPhenomPv2 While we have focused on results from SEOBNRv4 [41; 42; 43; 44; 14], these analyses do not include precessing degrees of freedom. However, evidence for precession has been found at the population level within gravitational-wave observations [30; 31]. Therefore, to explore if there are any major changes when incorporating precession effects, we use the 12 events from the first half of the third observing run analysed with IMRPhenomPv2 [12; 13; 34; 35; 73; 76; 79] which meet our selection criteria [6]. There are no equivalent results from the second half of the third observing run [7]. We show the summary of the marginal two-dimensional posterior distributions for the Gaussian population hyperparameters with and without the inclusion of astrophysical information in Fig. 10. Generally, these results are less constrained due to the smaller number of events, though we still witness a similar shift in the means of the Gaussian populations as in Fig. 4. We also summarize the quantiles at which the expectation from GR resides in Fig. 11. Generally, the IMRPhenomPv2 results are more consistent with GR than the equivalent SEOBNRv4 results presented in Sec. III.2. This could be a product of this waveform model incorporating precession, or simply that fewer events were analyzed, leading to a decrease in precision.
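As a closing illustration of the analytic marginalization in Eq. (24) of App. C (our naming; the bandwidth matrix is assumed to come from Eq. (22)):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_sector_logpdf(theta_g, mu_hyper, sigma_hyper, cov_bw):
    """Analytic piece of Eq. (24): a Gaussian KDE integrated against a
    Gaussian population is itself a Gaussian with the summed covariances.

    theta_g     : (M, d) Gaussian-sector samples for one event
    mu_hyper    : (d,) population means, e.g. (mu, mu_chi, mu_chi)
    sigma_hyper : (d,) population standard deviations
    cov_bw      : (d, d) KDE bandwidth covariance from Scott's rule
    """
    cov = cov_bw + np.diag(np.asarray(sigma_hyper) ** 2.0)
    return multivariate_normal.logpdf(theta_g, mean=mu_hyper, cov=cov)
```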
2309.03253
Is a recently discovered HI cloud near M94 a starless dark matter halo?
Observations with the Five-Hundred-Meter Aperture Spherical Telescope have revealed the presence of a marginally-resolved source of 21 cm emission from a location $\sim50'$ from the M94 galaxy, without a stellar counterpart down to the surface brightness limit of the DESI Imaging Legacy Survey ($\sim29.15$ mag arcsec$^{-2}$ in the $g$ band). The system (hereafter Cloud-9) has round column density isocontours and a line width consistent with thermal broadening from gas at $T\sim2\times10^4$ $K$. These properties are unlike those of previously detected dark HI clouds and similar to the expected properties of REionization-Limited-HI Cloud (RELHICs), namely, starless dark matter (DM) halos filled with gas in hydrostatic equilibrium and in thermal equilibrium with the cosmic ultraviolet background. At the distance of M94, $d\sim4.7$ Mpc, we find that Cloud-9 is consistent with being a RELHIC inhabiting a Navarro-Frenk-White (NFW) DM halo of mass, $M_{200}\sim5\times10^{9}$ $M_{\odot}$, and concentration, $c_{\rm NFW}\sim13$. Although the agreement between the model and observations is good, Cloud-9 appears to be slightly, but systematically, more extended than expected for $\Lambda$CDM RELHICs. This may imply either that Cloud-9 is much closer than implied by its recessional velocity, $v_{\rm CL9}\sim300$ km s$^{-1}$, or that its halo density profile is flatter than NFW, with a DM mass deficit greater than a factor of $10$ at radii $r\lesssim1$ kpc. Further observations may aid in constraining these scenarios better and help elucidate whether Cloud-9 is the first ever observed RELHIC, a cornerstone prediction of the $\Lambda$CDM model on the smallest scales.
Alejandro Benitez-Llambay, Julio F. Navarro
2023-09-06T18:00:00Z
http://arxiv.org/abs/2309.03253v1
# Is a recently discovered HI cloud near M94 a starless dark matter halo? ###### Abstract Observations with the Five-Hundred-Meter Aperture Spherical Telescope have revealed the presence of a marginally-resolved source of 21 cm emission from a location \(\sim 50^{\prime}\) from the M94 galaxy, without a stellar counterpart down to the surface brightness limit of the DESI Imaging Legacy Survey (\(\sim 29.15\) mag arcsec\({}^{-2}\) in the \(g\) band). The system (hereafter Cloud-9) has round column density isocontours and a line width consistent with thermal broadening from gas at \(T\sim 2\times 10^{4}\ K\). These properties are unlike those of previously detected dark HI clouds and similar to the expected properties of REionization-Limited-HI Clouds (RELHICs), namely, starless dark matter (DM) halos filled with gas in hydrostatic equilibrium and in thermal equilibrium with the cosmic ultraviolet background. At the distance of M94, \(d\sim 4.7\) Mpc, we find that Cloud-9 is consistent with being a RELHIC inhabiting a Navarro-Frenk-White (NFW) DM halo of mass, \(M_{200}\sim 5\times 10^{9}\ M_{\odot}\), and concentration, \(c_{\rm NFW}\sim 13\). Although the agreement between the model and observations is good, Cloud-9 appears to be slightly, but systematically, more extended than expected for \(\Lambda\)CDM RELHICs. This may imply either that Cloud-9 is much closer than implied by its recessional velocity, \(v_{\rm CL9}\sim 300\) km s\({}^{-1}\), or that its halo density profile is flatter than NFW, with a DM mass deficit greater than a factor of 10 at radii \(r\lesssim 1\) kpc. Further observations may aid in constraining these scenarios better and help elucidate whether Cloud-9 is the first ever observed RELHIC, a cornerstone prediction of the \(\Lambda\)CDM model on the smallest scales. Dark matter (353) -- Cosmology (343) -- Reionization (1383) Alejandro Benitez-Llambay, Julio F. Navarro ## 1 Introduction A distinctive prediction of the Lambda-Cold Dark Matter (\(\Lambda\)CDM) model of structure formation is the existence of a vast number of collapsed halos, whose density follows a universal profile (Navarro et al., 1996, hereafter NFW), and whose abundance at the low-mass end scales as a power law of the mass, \(\propto M^{-1.9}\) (e.g. Press & Schechter, 1974; Bond et al., 1991; Jenkins et al., 2001; Angulo et al., 2012; Wang et al., 2020). This result, combined with the relatively flat faint-end of the galaxy luminosity function, implies that a large population of low-mass dark matter (DM) halos must remain "dark" or starless until the present day (see, e.g., Ferrero et al., 2012, and references therein). The origin of these "dark" halos in \(\Lambda\)CDM is well motivated theoretically: galaxies can only form in the center of halos whose mass exceeds a redshift-dependent critical mass, \(M_{\rm crit}(z)\). This critical mass corresponds, before cosmic reionization, to the halo mass above which atomic cooling becomes efficient (e.g. Blumenthal et al., 1984; Bromm & Yoshida, 2011), and after reionization, to the halo mass above which the pressure of the photoheated gas cannot overcome the gravitational force of the halo (Benitez-Llambay & Frenk, 2020, hereafter BLF20). Analytical models (Ikeuchi, 1986; Rees, 1986; Benitez-Llambay & Frenk, 2020) and results from hydrodynamical simulations (e.g.
Hoeft et al., 2006; Okamoto et al., 2008; Benitez-Llambay et al., 2017), demonstrate that DM halos less massive than \(M_{\rm crit}\sim 7\times 10^{9}\ M_{\odot}\) today should contain gas in hydrostatic equilibrium with the gravitational potential of the halo and in thermal equilibrium with the external ultraviolet background radiation (UVB). Moreover, the models indicate that halos that never exceeded \(M_{\rm crit}(z)\) should remain devoid of stars to the present day. For the most massive "dark" halos, the high density and low temperature of their gas lead to the formation of neutral hydrogen (HI) in the center, making them detectable in 21 cm emission. This is why these systems were termed "Reionization-Limited-HI-Clouds" (RELHICs) by Benitez-Llambay et al. (2017) (hereafter, BL17), and are analogues of the mini-halos envisaged by Rees (1986) and Ikeuchi (1986) in the context of the early Ly\(\alpha\) forest models. The properties of RELHICs in \(\Lambda\)CDM were studied by BL17. These authors concluded that RELHICs should be nearly spherical extragalactic gas clouds in hydrostatic equilibrium with the underlying NFW halo. Their gas density profile is well specified because of the distinctive density-temperature relation that arises from the interplay between gas cooling and photoheating. As RELHICs are close to hydrostatic equilibrium, they lack significant velocity dispersion. Detecting RELHICs would represent a remarkable achievement mainly for two reasons. Above all, it would unequivocally confirm the presence of bound collapsed DM structures on mass scales below galaxies -- a pivotal prediction of the \(\Lambda\)CDM model. Secondly, it would pave the way towards a novel and independent probe of \(\Lambda\)CDM on small scales, where \(\Lambda\)CDM is still subject to heavy scrutiny (see, e.g., Bullock & Boylan-Kolchin, 2017, for a recent review of the small-scale challenges faced by \(\Lambda\)CDM). As discussed by BL17, the most promising RELHIC candidates to date have been some of the Ultra Compact High-Velocity Clouds (UCHVCs) identified in the ALFALFA catalog (Adams et al., 2013; Haynes et al., 2018). This catalog contains roughly 60 "dark" HI clouds whose sizes and fluxes are broadly consistent with RELHICs. Of these candidates, the systems that appear round in the sky display either a large broadening of their HI line compatible with non-zero velocity dispersion (or rotation) or negative recessional velocity, indicating they are likely nearby sources. On the other hand, HI clouds receding from us and having a small line width broadening display a highly irregular morphology. Thus, no observational analog entirely consistent with RELHICs has been positively identified to date. In this work, we focus on the discovery by Zhou et al. (2023) (hereafter Z23) of extended emission in an isolated field near M94. The system (termed Cloud-9; hereafter CL-9) was observed with the Five-Hundred-Meter Aperture Spherical Telescope (FAST), has no obvious luminous counterpart, and exhibits properties consistent with those expected for RELHICs. Using the models introduced by BL17 and BLF20, we address whether the Z23 observations are consistent with CL-9 being a \(\Lambda\)CDM RELHIC. We refer the interested reader to those papers for further details.
## 2 Method ### Observations Recently, Z23 reported the detection of CL-9, a relatively isolated HI cloud without a luminous counterpart brighter than the surface brightness limit of the DESI Legacy Imaging Survey (DESI LS), namely, 29.15, 28.73, and 27.52 mag arcsec\({}^{-2}\) for the \(g,r,z\) filters, respectively (Martinez-Delgado et al., 2023). The system is at a projected angular distance \(\sim 51.87^{\prime}\) from the center of M94, a galaxy located at a distance, \(d\sim 4.66\) Mpc (e.g. Lee et al., 2011). This value is broadly consistent with the distance obtained from its radial velocity, \(v_{\rm M94}\sim 287\) km s\({}^{-1}\), assuming it is receding from us on the Hubble flow. Other distance estimates for M94 also place the galaxy in the distance range, \(4\lesssim d\) / Mpc \(\lesssim 5\) (e.g., Karachentsev et al., 2004; Crook et al., 2007; Cappellari et al., 2011; Tully et al., 2016; Karachentsev et al., 2018). CL-9 has a recessional velocity similar to that of M94, \(v_{\rm CL9}\sim 300\) km s\({}^{-1}\) (Z23). This coincidence, together with the close angular separation, makes it likely that CL-9 is in the vicinity of M94. Assuming this is the case, the projected distance between CL-9 and M94 corresponds to a physical separation greater than \(\sim 70\) kpc and to a maximum stellar mass for CL-9, \(M_{\rm str}\lesssim 10^{5}\)\(M_{\odot}\), as reported by Z23.

Figure 1: CL-9's observed column density isocontours (taken from Zhou et al., 2023) superimposed on a DESI LS color image of the same field. The white circle indicates the FAST beam size. We approximate the observed isocontours with the dashed circles to construct the observed column density profile displayed in the bottom panel. The error bars indicate \(3\sigma\) uncertainties. The coordinates are relative to the origin, \((\alpha,\delta)=(12^{h}51^{m}52^{s},+40^{\circ}17^{\prime}29^{\prime\prime})\).

If CL-9 is not near M94, then its recessional velocity makes it unlikely that the system is closer than 3 Mpc from us. Indeed, no galaxies with a reliable distance estimate closer than 3 Mpc have recessional velocities that reach this value (see, e.g., Karachentsev and Kaisina, 2019). This lower bound on the distance is further supported by the distance estimate based on the velocity field reconstruction of the Local Volume using CosmicFlows-3 (Tully et al., 2016), which returns a distance in the range \(3\lesssim d/{\rm Mpc}\lesssim 4\). The lower/higher value is obtained when the reconstruction uses the Numerical Action Method (Shaya et al., 2017)/Wiener filter model (Graziani et al., 2019)1. However, it is not possible to exclude the possibility that CL-9 is farther than M94 using the system's recessional velocity alone. Footnote 1: We queried the distance using the CosmicFlows-3 calculator available at [http://edd.ifa.hawaii.edu](http://edd.ifa.hawaii.edu) (Kourkchi et al., 2020). CL-9 appears round in the sky and displays a narrow broadening of its emission line (\(W_{50}\sim 20\) km s\({}^{-1}\) at its peak column density), consistent with thermal broadening arising from gas at \(T\sim 2\times 10^{4}\)\(K\). These properties are consistent with those expected for RELHICs (BL17). We note, however, that the inferred shape of CL-9 may be affected by the large FAST beam, whose size is comparable to the spatial extent of the detection. CL-9 is unlikely to be a self-gravitating system.
For the system's sound-crossing time to equal the free-fall time within the observed extent (i.e., for the system to be in equilibrium given its linewidth and size at the distance of M94), the required HI mass is \(M_{\rm HI}\sim 4\times 10^{8}\)\(M_{\odot}\). This value is orders of magnitude higher than the HI mass derived from its total flux (\(M_{\rm HI}\sim 7\times 10^{5}\)\(M_{\odot}\)) (Z23), implying the presence of a large amount of gravitational mass other than neutral hydrogen. We show CL-9's observed column density isocontours, taken from the work of Z23, in the top panel of Fig. 1. Following Z23, we superimpose the contours over a DESI LS color image to emphasize that there is no obvious extended luminous counterpart within the surface brightness limit of the survey2. The outermost isocontour corresponds to a value equal to the \(3\sigma\) detection limit, \(N_{\rm HI}=6.7\times 10^{17}\) cm\({}^{-2}\), and the contour values increase in steps of \(6.7\times 10^{17}\) cm\({}^{-2}\), so that the maximum column density reached by the innermost contour is \(N_{\rm HI}=3.35\times 10^{18}\) cm\({}^{-2}\). Footnote 2: Note that the presence of a bright star near CL-9 may affect somewhat the exact surface brightness limit reached by the DESI LS image at this location. Because the system's isocontours are round, we approximate them by the circles depicted by the dashed lines and use the circles' radius to produce the column density profile shown by the red dots in the bottom panel of Fig. 1. Since the innermost isocontour is elliptical, we will not use it for our analysis. ### RELHICs #### 2.2.1 Intrinsic column density profile and mock observations We model RELHICs following BL17. This implies assuming that RELHICs are spherical gaseous systems in hydrostatic equilibrium with an NFW DM halo and in thermal equilibrium with a Haardt and Madau (2012) UVB. We note that the presence of a stellar counterpart would affect neither the structure nor the system's stability, provided the stars are negligible contributors to the gravitational potential. To solve the hydrostatic equilibrium equation, we use a boundary condition where the pressure at infinity equals the pressure of the intergalactic medium at the mean density of the Universe. With this condition, the model reproduces the detailed structure of stable gaseous halos in large high-resolution cosmological hydrodynamical simulations (BL17; BLF20). In this model, RELHICs are characterized by a distinctive maximum central density, which depends on the gas temperature, halo virial mass, \(M_{200}\), and concentration, \(c_{\rm NFW}\). To derive the HI density profile of RELHICs, we apply the Rahmati et al. (2013) results. Once the HI density profile is known, we calculate the intrinsic HI column density by projecting the HI density. To compare Z23 observations with a RELHIC model, we place RELHICs at the observed distance and convolve their intrinsic column density profile with a circular Gaussian beam with standard deviation, \(\sigma_{\rm beam}=1.23^{\prime}\), whose full width at half maximum matches that of the FAST beam, \(\sim 2.9^{\prime}\). Performing this convolution is crucial because we will compare models with observations on scales smaller than the FAST beam size. Moreover, at the M94 distance, the angular extent of the central HI core of RELHICs is comparable to the beam size.
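The mock-observation step can be illustrated with the short sketch below, which projects a spherically symmetric HI density profile into a column density map and convolves it with a circular Gaussian beam of \(\sigma_{\rm beam}=1.23^{\prime}\) (FWHM \(\sim 2.9^{\prime}\), FAST-like). The input profile `n_hi` is a placeholder, not the Rahmati et al. (2013) neutral-fraction model, and the distance and field of view are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

D_KPC        = 4.66e3     # assumed distance to the source [kpc] (M94-like)
SIGMA_BEAM   = 1.23       # beam standard deviation [arcmin], FWHM ~ 2.9'
KPC_PER_AMIN = D_KPC * np.pi / (180.0 * 60.0)   # physical scale of 1 arcmin

def n_hi(r_kpc):
    """Placeholder HI density profile [cm^-3]: a steeply truncated core."""
    return 1e-2 / (1.0 + (r_kpc / 0.5)**6)

def column_density_map(npix=101, fov_amin=12.0):
    """Project n_HI along the line of sight on a sky grid, then beam-convolve."""
    amin = np.linspace(-fov_amin / 2, fov_amin / 2, npix)
    X, Y = np.meshgrid(amin, amin)                  # sky coordinates [arcmin]
    R_sky = np.hypot(X, Y) * KPC_PER_AMIN           # projected radius [kpc]

    # Line-of-sight integration: N(R) = 2 * int_0^zmax n(sqrt(R^2 + z^2)) dz.
    z = np.linspace(0.0, 10.0, 400)                 # LOS coordinate [kpc]
    dz = z[1] - z[0]
    r3d = np.sqrt(R_sky[..., None]**2 + z**2)       # 3D radius per pixel and z
    N = 2.0 * n_hi(r3d).sum(axis=-1) * dz * 3.086e21   # kpc -> cm

    # Convolve with the circular Gaussian beam (sigma expressed in pixels).
    pix_per_amin = (npix - 1) / fov_amin
    N_obs = gaussian_filter(N, sigma=SIGMA_BEAM * pix_per_amin)
    return amin, N, N_obs

amin, N, N_obs = column_density_map()
print(f"peak N_HI: intrinsic {N.max():.2e} cm^-2 -> observed {N_obs.max():.2e} cm^-2")
```

Because the assumed core is smaller than the beam at this distance, the convolution suppresses the central column density appreciably, which is why the beam treatment matters when comparing models with the observed profile.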
## 3 Results ### CL-9 as a \(\Lambda\)CDM RELHIC We now address whether the observed CL-9's column density profile is consistent with the system being a RELHIC. To this end, we consider RELHICs embedded within an NFW DM halo, a profile characterized by the halo concentration and virial mass. These two parameters fully specify the RELHICs' total HI mass, central density, and characteristic size. Therefore, we construct a grid of models as a function of virial mass and concentration, imposing the Ludlow et al. (2016) mass-concentration relation. We place the models at a fiducial distance of M94, \(d=4.66\) Mpc, and convolve them with the FAST Gaussian beam. We then adopt the best model as the model that matches CL-9's column density at the location of the second innermost isocontour, i.e., the highest signal-to-noise isocontour with a reliable estimate of its distance from the center. We show the model results, compared with CL-9's observation, in the top panel of Fig. 2. The thin curves show examples of \(\Lambda\)CDM RELHICs within a very narrow range of halo mass centered at the mass of the best RELHIC model (thick line). Three outcomes of this exercise deserve particular attention. Firstly, it is remarkable that it is possible to match CL-9's isocontours by varying solely the halo mass of a RELHIC at the distance of M94 without further adjustments. Secondly, if CL-9 is indeed a RELHIC, its observed column density profile imposes a tight constraint on the mass of its DM halo, as small departures in halo mass relative to the best model produce models whose central column density quickly departs from observations. This implies that the derived properties are not very sensitive to the comparison between the model and observations at the adopted isocontour. Thirdly, the best RELHIC model contains an HI mass, \(M_{\rm HI}\sim 5.5\times 10^{5}\ M_{\odot}\), which is in excellent agreement with the value derived by Z23 for CL-93. In contrast, other models contain HI masses that significantly depart from the inferred value, as indicated by the labels in the top panel of Fig. 2. This demonstrates that CL-9 is fully consistent with a RELHIC even if it were treated as an unresolved source. Footnote 3: CL-9's integrated flux is \(S_{21}=0.14\pm 0.02\) Jy km s\({}^{-1}\) (Z23), implying an HI mass, \(M_{\rm HI}\sim(7.2\pm 1)\times 10^{5}\ M_{\odot}\), at the M94 distance. We thus conclude that if CL-9 is at the distance of M94, then its total HI mass and column density profile properties are fully consistent with a \(\Lambda\)CDM RELHIC of mass \(M_{200}\sim 5.04\times 10^{9}\ M_{\odot}\), and concentration, \(c_{\rm NFW}=12.94\). Although the best \(\Lambda\)CDM RELHIC shown by the thick solid line in Fig. 2 is consistent with observations, there are slight but systematic differences between the best-fit model and observations at larger radii. A priori, these could originate from: 1) CL-9 inhabiting a DM halo with lower-than-average concentration; 2) a wrong distance estimate to CL-9; and 3) departures of the structure of the DM halo compared to \(\Lambda\)CDM expectations. To address the first possibility, we constructed models with a lower-than-average concentration that match the second innermost CL-9's isocontour. The orange and green squares in the bottom panel of Fig. 2 show two extreme examples. Lowering the concentration increases the halo mass, but only mildly, demonstrating that the leading factor determining the central column density of RELHICs is the DM mass.
In addition, the "ob Figure 2: **Top panel**: CL-9’s observed column density profile (circles), including 3\(\sigma\) uncertainties, together with mock-observed RELHICs (lines) at the distance of M94 and convolved with the FAST Gaussian beam. RELHICs were chosen to bracket CL-9’s observations while following the \(\Lambda\)CDM mass-concentration relation. The thick line highlights the best-fit model (see text for discussion), and the labels indicate the total HI mass of the model immediately below**Middle panel**: intrinsic column density profiles of the models shown in the top panel. The vertical line indicates the radial extent of the FAST beam. **Bottom panel**: mass-concentration relation of the models, together with the Ludlow et al. (2016) mass-concentration relation (line) and scatter (shaded region). The orange and green squares indicate examples with lower-than-average concentrations that also match observations (see the lines of the same color in the middle and top panels). served" models are only marginally more extended than the fiducial best-fit \(\Lambda\)CDM RELHIC (see the dashed and dot-dashed lines in the top and middle panels), indicating that halo concentration cannot resolve the tension between modeling and observations if CL-9 is at the distance of M94. We explore the other two possible sources of discrepancies in the following sections. ### Is CL-9 a \(\Lambda\)CDM RELHIC closer than M94? If CL-9 is indeed a \(\Lambda\)CDM RELHIC, the systematic difference between the model and observations in the outer regions may simply reflect a wrong distance estimate for CL-9; RELHICs would appear more extended in projection and be more consistent with CL-9 if we place the system closer to the observer. To explore the possibility that CL-9 is much closer than M94, we varied the distance to CL-9 and found that bringing the system to a distance \(d=500\) kpc from us improves the quality of the fit while still adopting the mean \(\Lambda\)CDM mass-concentration relation. This is shown in the top panel of Fig. 3, which is analogous to Fig. 2 but assumes \(d=500\) kpc. Although this smaller distance improves the agreement between the model and observations, it is disfavored by CL-9's high recessional velocity, as discussed in Sec. 2.1. Placing CL-9 much farther would only increase its neutral hydrogen mass without improving the fit to observations. As shown by BL17, RELHICs should have an HI mass, \(M_{\rm HI}\lesssim 3\times 10^{6}\ M_{\odot}\), which places an upper limit to CL-9's distance, \(d_{\rm CL9}\lesssim 10\) Mpc, if the system is indeed a RELHIC. Finally, changes in the distance estimate of CL-9 only have a minor impact on the derived DM halo parameters. Although we have changed CL-9's distance by almost an order of magnitude between Fig. 2 and Fig. 3, the resulting DM mass of the best model remains similar between the two models. It is now: \(M_{200}=4.5\times 10^{9}\ M_{\odot}\) (\(c_{\rm NFW}=13.02\)). This is because RELHICs' neutral hydrogen density is extremely sensitive to halo mass, thus making the distance a secondary parameter in the explored range. This is not the case for the HI mass, which depends on distance. The inferred HI mass for CL-9 at this lower distance is \(\sim(7\pm 1)\times 10^{3}\ M_{\odot}\), which is similar to the total HI mass of the best-fit model. Thus, we conclude that in the unlikely scenario in which CL-9 was as close as 500 kpc from us, it would still be possible to find a \(\Lambda\)CDM RELHIC that matches its observed column density. 
In addition, the maximum allowed distance for CL-9 to contain an HI mass compatible with the system being a RELHIC is \(d\sim 10\) Mpc, a distance at which CL-9 would still be compatible with a \(\Lambda\)CDM RELHIC.

Figure 3: Identical to Fig. 2, but assuming CL-9 is at a distance, \(d=500\) kpc.

#### 3.2.1 Does CL-9 signal an inner DM deficit? An alternative interpretation of the systematic discrepancy between the modeling and observations in the outer regions is that it originates from a deficit of DM relative to a cuspy NFW in the inner regions. To explore this possibility, we now focus on a different model in which the gas in the halo is in hydrostatic equilibrium with a generalized NFW (gNFW) halo, whose logarithmic slope in the inner regions, \(\gamma\), is treated as a free parameter: \[\rho_{\rm dm}(r)=\rho_{s}\left(\frac{r}{r_{s}}\right)^{-\gamma}\left[1+\left(\frac{r}{r_{s}}\right)\right]^{\gamma-3} \tag{1}\] To impose a deficit of DM in the center relative to a cuspy NFW, we enforce a cored inner density profile by setting \(\gamma=0\). We consider a grid of RELHIC models, for which we vary both the halo mass and concentration independently of each other. We then fit the models, placed at the same distance of M94 and convolved with the FAST beam, to CL-9's second innermost isocontour. The result of this procedure is shown in Fig. 4. With these changes, a RELHIC inhabiting a "cored" DM halo of mass, \(M_{200}\sim 1.02\times 10^{10}\ M_{\odot}\), and concentration, \(c_{\rm gNFW}=r_{200}/r_{s}=5.15\), matches observations (see the orange line in the top panel of Fig. 4). Other models, found by varying both the halo mass and concentration until they match the central column density, fit the observed profile very poorly. Although the concentration of the best model is off the Ludlow et al. (2016) mass-concentration relation (shown in the bottom panel of Fig. 4), there is no reason why a cored profile should follow this relation. The derived halo parameters thus imply a significant DM mass deficit greater than a factor of 2 for radii, \(r\lesssim 6\) kpc, compared to \(\Lambda\)CDM expectations. This is shown in Fig. 5, in which we plot the mean DM density profile of the best \(\Lambda\)CDM RELHIC and the best gNFW RELHIC. These parameters are uncomfortably large compared with the values expected from self-interacting DM (SIDM) models and difficult to reconcile with \(\Lambda\)CDM without a bright stellar counterpart. For example, Elbert et al. (2015) find that the largest radius at which SIDM halos depart from \(\Lambda\)CDM is roughly a factor 3 smaller (\(\sim 2\) kpc). In addition, if CL-9 hosted a stellar counterpart, its low stellar mass would make it difficult for supernova-driven winds to perturb its inner DM at such large distances (e.g. Di Cintio et al., 2014; Tollet et al., 2016; Robles et al., 2017). Therefore, if CL-9 is confirmed to be a RELHIC and further observations confirm the extended mass deficit, we anticipate challenges reconciling this system not only with \(\Lambda\)CDM but also with SIDM models. ## 4 Summary and Conclusions In this work, we explored whether the recently discovered extended HI gas cloud, CL-9, is consistent with being a \(\Lambda\)CDM RELHIC. We find that CL-9's properties are consistent with the system being a RELHIC, as recently argued by Z23.
The match between the model column densities and observations, together with the large projected distance between CL-9 and M94, the round shape of CL-9's isocontours, the lack of a luminous counterpart, the small broadening of the emission line, and the total HI mass make CL-9 the first firm RELHIC candidate in the local Universe. The analysis of this system demonstrates the potential of these objects as cosmological probes. We also find that CL-9's observations are limited by the FAST beam, whose size is comparable to that of the expected HI core of the most massive RELHICs at the fiducial distance. However, given the high sensitivity of the column density (and total HI mass) to halo mass for RELHICs, we conclude with high confidence that the observed system must inhabit a DM halo with mass in the range \(4\times 10^{9}\lesssim M_{200}/M_{\odot}\lesssim 5\times 10^{9}\) if its DM content follows a cuspy NFW profile. This conclusion is based on matching CL-9's total HI mass (and central column density) and, therefore, is independent of whether the system is marginally resolved or unresolved. Taken at face value, the marginally resolved CL-9 column density profile is systematically more extended than expected for a \(\Lambda\)CDM RELHIC. If confirmed, this may suggest a slightly more massive halo, \(M_{200}\sim 10^{10}\)\(M_{\odot}\), but with a large inner core rather than a cusp. However, before drawing robust conclusions, it is crucial to observe CL-9 with higher spatial resolution.

Figure 4: Identical to Fig. 2, but assuming RELHICs are embedded within gNFW halos with \(\gamma=0\). See Eq. 1. The solid line in the bottom panel shows the family of models that provide a good fit to the second innermost isocontour.

We envision a series of observations that may help constrain CL-9's parameters and nature. Firstly, the high sensitivity and smaller beam make the MeerKAT radio telescope an obvious choice to constrain CL-9's column density profile better. However, the high declination of CL-9 (\(\delta\sim+40^{\circ}\)) places the system at the limit of what can be observed with MeerKAT. CL-9 is also within the reach of the Very Large Array (VLA), an instrument that would increase the spatial resolution of the observed profile. In addition, further observations with FAST may help decrease the beam's impact. Secondly, the derived halo mass for CL-9 makes the system an excellent candidate to look for the predicted RELHICs' ring-shaped H\(\alpha\) emission counterparts (Sykes et al., 2019). These observations could be performed with narrow-band H\(\alpha\) filters on the Dragonfly Telephoto Array (Abraham & van Dokkum, 2014) and would provide data to constrain further the system's DM content and the local intensity of the UVB. Thirdly, follow-up observations with the Hubble Space Telescope that go fainter than the limit of the DESI LS may help to elucidate whether CL-9 has a luminous counterpart. Finally, observations of bright background sources that intersect CL-9 could help characterize the system's metallicity, thus helping to constrain the likelihood of CL-9 hosting a stellar counterpart. There is a high probability that CL-9 contains a luminous galaxy in its center. At the inferred mass, we expect more than 90% of the halos to host galaxies (see, e.g. Sawala et al., 2016; Benitez-Llambay et al., 2017; Benitez-Llambay & Frenk, 2020). Detecting a stellar counterpart would help constrain and break the current degeneracies and assess the quality of our predictions.
Regardless of whether or not CL-9 has a stellar counterpart, its low stellar content, together with its HI reservoir, will still allow us to put joint constraints on its underlying DM distribution. Pursuing this path, although arduous, may be highly rewarding in the end. It will provide a unique opportunity to challenge \(\Lambda\)CDM and our fundamental understanding of how galaxies form at the smallest scales. We thank the anonymous referee for a constructive review that helped improve our presentation. ABL acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (GA 101026328).
2309.11657
Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions
We demonstrate the first algorithms for the problem of regression for generalized linear models (GLMs) in the presence of additive oblivious noise. We assume we have sample access to examples $(x, y)$ where $y$ is a noisy measurement of $g(w^* \cdot x)$. In particular, the noisy labels are of the form $y = g(w^* \cdot x) + \xi + \epsilon$, where $\xi$ is the oblivious noise drawn independently of $x$ and satisfies $\Pr[\xi = 0] \geq o(1)$, and $\epsilon \sim \mathcal N(0, \sigma^2)$. Our goal is to accurately recover a parameter vector $w$ such that the function $g(w \cdot x)$ has arbitrarily small error when compared to the true values $g(w^* \cdot x)$, rather than the noisy measurements $y$. We present an algorithm that tackles this problem in its most general distribution-independent setting, where the solution may not even be identifiable. Our algorithm returns an accurate estimate of the solution if it is identifiable, and otherwise returns a small list of candidates, one of which is close to the true solution. Furthermore, we provide a necessary and sufficient condition for identifiability, which holds in broad settings. Specifically, the problem is identifiable when the quantile at which $\xi + \epsilon = 0$ is known, or when the family of hypotheses does not contain candidates that are nearly equal to a translated $g(w^* \cdot x) + A$ for some real number $A$, while also having large error when compared to $g(w^* \cdot x)$. This is the first algorithmic result for GLM regression with oblivious noise which can handle more than half the samples being arbitrarily corrupted. Prior work focused largely on the setting of linear regression, and gave algorithms under restrictive assumptions.
Ilias Diakonikolas, Sushrut Karmalkar, Jongho Park, Christos Tzamos
2023-09-20T21:41:59Z
http://arxiv.org/abs/2309.11657v2
# Distribution-Independent Regression for Generalized Linear Models with Oblivious Corruptions ###### Abstract We demonstrate the first algorithms for the problem of regression for generalized linear models (GLMs) in the presence of additive oblivious noise. We assume we have sample access to examples \((x,y)\) where \(y\) is a noisy measurement of \(g(w^{*}\cdot x)\). In particular, the noisy labels are of the form \(y=g(w^{*}\cdot x)+\xi+\epsilon\), where \(\xi\) is the oblivious noise drawn independently of \(x\) and satisfies \(\Pr[\xi=0]\geq o(1)\), and \(\epsilon\sim\mathcal{N}(0,\sigma^{2})\). Our goal is to accurately recover a parameter vector \(w\) such that the function \(g(w\cdot x)\) has arbitrarily small error when compared to the true values \(g(w^{*}\cdot x)\), rather than the noisy measurements \(y\). We present an algorithm that tackles this problem in its most general distribution-independent setting, where the solution may not even be identifiable. Our algorithm returns an accurate estimate of the solution if it is identifiable, and otherwise returns a small list of candidates, one of which is close to the true solution. Furthermore, we provide a necessary and sufficient condition for identifiability, which holds in broad settings. Specifically, the problem is identifiable when the quantile at which \(\xi+\epsilon=0\) is known, or when the family of hypotheses does not contain candidates that are nearly equal to a translated \(g(w^{*}\cdot x)+A\) for some real number \(A\), while also having large error when compared to \(g(w^{*}\cdot x)\). This is the first algorithmic result for GLM regression with oblivious noise which can handle more than half the samples being arbitrarily corrupted. Prior work focused largely on the setting of linear regression, and gave algorithms under restrictive assumptions. ## 1 Introduction Learning neural networks is a fundamental challenge in machine learning with various practical applications. Generalized Linear Models (GLMs) are the most fundamental building blocks of larger neural networks. These correspond to a linear function \(w^{*}\cdot x\) composed with a (typically non-linear) activation function \(g(\cdot)\). The problem of learning GLMs has received extensive attention in the past, especially for the case of ReLU activations. The simplest scenario is the "realizable setting", i.e., when the labels exactly match the target function, and can be solved efficiently with practical algorithms, such as gradient descent (see, e.g., [15]). In many real-world settings, noise comes from various sources, ranging from rare events and mistakes to skewed and corrupted measurements, making even simple regression problems computationally challenging. In contrast to the realizable setting, when even a small amount of data is adversarially labeled, computational hardness results are known even for approximate recovery [11, 12, 13] and under well-behaved distributions [1, 10, 11, 13, 14]. To investigate more realistic noise models, [15] and [16] study linear and ReLU regression in the Massart noise model, where an adversary has access to a _random_ subset of _at most half_ the samples and can perturb the labels arbitrarily after observing the uncorrupted samples. By tackling regression in an intermediate ("semi-random") noise model -- lying between the clean realizable and the adversarially labeled models -- these works recover \(w^{*}\) under only mild assumptions on the distribution.
Interestingly, without any distributional assumptions, computational limitations have recently been established even in the Massart noise model [10, 11, 12, 13]. In this paper, we consider the problem of GLM regression under the _oblivious noise model_ (see Definition 1.1), which is another intermediate model that allows the adversary to corrupt almost all the labels yet limits their capability by requiring that the oblivious noise be determined independently of the samples. The only assumption on this additive and independent noise is that it takes the value \(0\) with _vanishingly small_ probability \(\alpha>0\). The oblivious noise model is a strong noise model that (information-theoretically) allows for _arbitrarily accurate_ recovery of the target function. This stands in stark contrast to Massart noise, where it is impossible to recover the target function if more than half of the labels are corrupted. On the other hand, oblivious noise allows for recovery even when noise overwhelms, i.e., as \(\alpha\to 0\). We formally define the problem of learning GLMs in the presence of additive oblivious noise below. As is the case with prior work on GLM regression (see, e.g., [11]), we make the standard assumptions that the data distribution is supported in the unit ball (i.e., \(\|x\|_{2}\leq 1\)) and that the parameter space of weight vectors is bounded (i.e., \(\|w^{*}\|_{2}\leq R\)). **Definition 1.1** (GLM-Regression with Oblivious Noise).: _We say that \((x,y)\sim\text{GLM-Ob}(g,\sigma,w^{*})\) if \(x\in\mathbb{R}^{d}\) is drawn from some distribution supported in the unit ball and \(y=g(w^{*}\cdot x)+\xi+\epsilon\), where \(\epsilon\) and \(\xi\) are drawn independently of \(x\) and satisfy \(\Pr[\xi=0]\geq\alpha=o(1)\) and \(\epsilon\sim\mathcal{N}(0,\sigma^{2})\). We assume that \(\|w^{*}\|_{2}\leq R\) and that \(g(\cdot)\) is \(1\)-Lipschitz and monotonically non-decreasing._ In recent years, there has been increased focus on the problem of linear regression in the presence of oblivious noise [16, 17, 18, 19, 20]. This line of work has culminated in consistent estimators when the fraction of clean data is \(\alpha=d^{-c}\), where \(c\) is a small constant [10]. In addition to linear regression, the oblivious noise model has also been studied for the problems of PCA, sparse recovery [16, 15], and in the online setting [16]. See Section 1.3 for a detailed summary of related work. However, prior algorithms and analyses often contain somewhat restrictive assumptions and exploit symmetry that only arises for the special case of linear functions. In this work, we address the following shortcomings of previous work: 1. **Assumptions on \(\xi\) and marginal distribution**: Prior work either assumed that the oblivious noise was symmetric or made strong distributional assumptions on the \(x\)'s, such as mean-zero Gaussian or sub-Gaussian tails. We allow the distribution to be arbitrary (while being supported on the unit ball) and make no additional assumptions on the oblivious noise. 2. **Linear functions**: One useful technique to center an instance of the problem for linear functions is to take pairwise differences of the data to induce symmetry. This trick does not work for GLMs, since taking pairwise differences does not preserve the function class we are trying to learn. Similarly, existing approaches do not generalize beyond linear functions. Our algorithm works for a large variety of generalized models, including (but not restricted to) ReLUs and sigmoids.
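To make Definition 1.1 concrete, the following minimal sketch samples from GLM-Ob\((g,\sigma,w^{*})\). The specific oblivious-noise distribution (an asymmetric, heavy-tailed shifted Cauchy) is an illustrative assumption, since the model allows \(\xi\) to be arbitrary apart from \(\Pr[\xi=0]\geq\alpha\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_glm_ob(m, w_star, g, alpha=0.05, sigma=0.1):
    """Draw m samples (x_i, y_i) with y = g(w* . x) + xi + eps (Definition 1.1).

    x is uniform in the unit ball; xi is oblivious noise, independent of x,
    with Pr[xi = 0] >= alpha (an asymmetric heavy-tailed choice is assumed
    here purely for illustration); eps ~ N(0, sigma^2).
    """
    d = w_star.shape[0]
    # Uniform direction times radius^(1/d) gives uniform points in the unit ball.
    u = rng.standard_normal((m, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    x = u * rng.uniform(0.0, 1.0, size=(m, 1)) ** (1.0 / d)

    # Oblivious noise: zero w.p. alpha, otherwise arbitrary (here: shifted Cauchy).
    xi = np.where(rng.uniform(size=m) < alpha,
                  0.0,
                  3.0 + rng.standard_cauchy(m))
    eps = sigma * rng.standard_normal(m)
    y = g(x @ w_star) + xi + eps
    return x, y

relu = lambda t: np.maximum(t, 0.0)       # a 1-Lipschitz, monotone activation
w_star = np.array([1.0, -2.0, 0.5])
x, y = sample_glm_ob(10_000, w_star, relu)
```

Note that nearly all labels are corrupted here (only an \(\alpha\) fraction see \(\xi=0\)), which is exactly the regime the oblivious noise model is designed to handle.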
As our main result, we demonstrate an efficient algorithm to recover \(g(w^{*}\cdot x)\) if the distribution satisfies an efficient identifiability condition (see Definition 1.2) and \(\alpha=d^{-c}\) for any constant \(c>0\). If the condition of Definition 1.2 does not hold, our algorithm returns a list of candidates, each of which is an approximate translation of \(g(w^{*}\cdot x)\) and one of which is guaranteed to be as close to \(g(w^{*}\cdot x)\) as we would like. In fact, if the condition does not hold, it is information-theoretically impossible to learn a unique function that explains the data. ### Our Results We start by noting that, at the level of generality we consider, the learning problem we study is not identifiable, i.e., multiple candidates in our hypothesis class might explain the data equally well. As our first contribution, we identify a necessary and sufficient condition characterizing when a unique solution is identifiable. We describe the efficient identifiability condition below. **Definition 1.2** (Efficient Unique Identifiability).: _We say \(u\) and \(v\) are \(\Delta\)-separated if_ \[\mathop{\mathbf{E}}_{x}\left[\left|g(u\cdot x)-g(v\cdot x)\right|\right]>\Delta.\] _For any \(\tau>0\), an instance of the problem given in Definition 1.1 is \((\Delta,\tau)\)-identifiable if any two \(\Delta\)-separated \(u,v\) satisfy \(\Pr_{x}\left[\left|g(u\cdot x)-g(v\cdot x)-A\right|>\tau\right]>\tau\) for all \(A\in\mathbb{R}\)._ Let \(\mathop{\mathbf{E}}_{x}\left[\left|g(w\cdot x)-g(w^{*}\cdot x)\right|\right]\) denote the "excess loss" of \(w\). Throughout the paper, we refer to \(\Delta\) as the upper bound on the "excess loss" we would like to achieve. When the problem is \((\Delta,\tau)\)-identifiable, the parameter \(\tau\) describes the anti-concentration of the clean label difference \(g(w\cdot x)-g(w^{*}\cdot x)\) centered around \(A\). Essentially, if there is a weight vector \(w\) that is \(\Delta\)-separated from \(w^{*}\), \((\Delta,\tau)\)-identifiability ensures that \(g(w\cdot x)\) is not close to a translation of \(g(w^{*}\cdot x)\). On the other hand, if \(g(w\cdot x)\) is approximately a translation of \(g(w^{*}\cdot x)\) for most \(x\), the following lower bound shows that the adversary can design oblivious noise distributions so that \(g(w\cdot x)\) and \(g(w^{*}\cdot x)\) are indistinguishable. **Theorem 1.3** (Necessity of Efficient Unique Identifiability).: _Suppose that GLM-\(\text{Ob}(g,\sigma,w^{*})\) is not \((\Delta,\tau)\)-identifiable, i.e., there exist \(u,v\in\mathbb{R}^{d}\) and \(A\in\mathbb{R}\) such that \(u,v\) are \(\Delta\)-separated and satisfy \(\Pr_{x}\left[\left|g(u\cdot x)-g(v\cdot x)-A\right|>\tau\right]\leq\tau\). Then any algorithm that distinguishes between \(u\) and \(v\) with probability at least \(1-\delta\) requires \(m=\Omega(\min(\sigma,1)\ln(1/\delta)/\tau)\) samples._ Note that any algorithm that solves the oblivious regression problem must be able to differentiate between \(w^{*}\) and any \(\Delta\)-separated candidate. Theorem 1.3 explains the necessity of the efficient identifiability condition for such differentiation. If no \(\tau>0\) satisfies the condition, then Theorem 1.3 implies that no algorithm with finite sample complexity can find a unique solution to oblivious regression. The result also shows that any \((\Delta,\tau)\)-identifiable instance requires a sample complexity dependent on \(1/\tau^{*}\), where the instance is \((\Delta,\tau)\)-identifiable for all \(\tau\leq\tau^{*}\).
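The identifiability condition of Definition 1.2 can be probed empirically for a given pair \((u,v)\) by Monte Carlo: estimate the separation \(\mathbf{E}_{x}[|g(u\cdot x)-g(v\cdot x)|]\) and then check whether any translation \(A\) makes the difference concentrate. The sketch below is an illustrative diagnostic (a grid search over \(A\) on a finite sample), not part of the algorithm of Theorem 1.4; a full check would additionally quantify over all pairs in the parameter ball.

```python
import numpy as np

def separation(x, g, u, v):
    """Monte Carlo estimate of E_x |g(u.x) - g(v.x)|."""
    return np.mean(np.abs(g(x @ u) - g(x @ v)))

def is_identifiable_pair(x, g, u, v, delta, tau, n_grid=2001):
    """Check Definition 1.2 for one (u, v) pair on a sample of x's.

    Returns True if u, v are either not Delta-separated, or if for every
    candidate translation A (grid over the observed range of differences)
    the event |g(u.x) - g(v.x) - A| > tau has probability > tau.
    """
    if separation(x, g, u, v) <= delta:
        return True                       # not Delta-separated: no constraint
    diff = g(x @ u) - g(x @ v)
    for A in np.linspace(diff.min(), diff.max(), n_grid):
        if np.mean(np.abs(diff - A) > tau) <= tau:
            return False                  # some translation A concentrates diff
    return True
```

When this check fails for some pair, Theorem 1.3 says the two hypotheses are statistically indistinguishable with few samples, which is why the algorithm must fall back to list recovery.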
Our main result is an efficient algorithm that performs GLM regression for any Lipschitz monotone activation function \(g(\cdot)\). Our algorithm is qualitatively instance optimal: whenever the problem instance GLM-\(\text{Ob}(g,\sigma,w^{*})\) is \((\Delta,\tau)\)-identifiable, the algorithm returns a single candidate achieving excess loss of \(4\Delta\) with respect to \(g(w^{*}\cdot x)\). If not \((\Delta,\tau)\)-identifiable, then our algorithm returns a list of candidates, one element of which achieves excess loss of \(4\Delta\). **Theorem 1.4** (Main Result).: _There is an algorithm that, given as input the desired accuracy \(\Delta>0\), an upper bound \(R\) on \(\|w^{*}\|_{2}\), \(\tau\), \(\alpha\), and \(\sigma\), draws \(m=\operatorname{poly}(d,R,\alpha^{-1},\Delta^{-1},\sigma)\) samples from GLM-\(\text{Ob}(g,\sigma,w^{*})\), runs in time \(\operatorname{poly}(m,d)\), and returns a \(\operatorname{poly}(m)\)-sized list of candidates, one of which achieves excess loss at most \(\Delta\), i.e., there exists \(\widehat{w}\in\mathbb{R}^{d}\) satisfying \(\mathbf{E}_{x}[|g(\widehat{w}\cdot x)-g(w^{*}\cdot x)|]\leq\Delta\)._ _Moreover, if the problem instance is \((\Delta,\tau)\)-identifiable (as in Definition 1.2), then there is an algorithm which takes as input \(\Delta,R,\alpha,\sigma\) and \(\tau\), draws \(\operatorname{poly}(d,R,\sigma,\alpha^{-1},\Delta^{-1},\tau^{-1})\) samples, runs in time \(\operatorname{poly}(d,R,\sigma,\alpha^{-1},\Delta^{-1},\tau^{-1})\), and returns a single candidate._ Our results hold for polynomially bounded \(x\) and \(w^{*}\) as well, by running the algorithm after scaling the \(x\)'s and reparameterizing \(\Delta\). To see this, observe that we recover a \(\widehat{w}\) such that \(\mathbb{E}[|g(\widehat{w}\cdot x)-g(w^{*}\cdot x)|]\leq O(\Delta)\) for any choice of \(\Delta\) when \(\|x\|_{2}\leq 1\) and \(\|w\|_{2}\leq R\) for polynomially bounded \(R\). Suppose that, instead of the setting of the theorem, we have \(\|x\|_{2}\leq A\) and \(\|w\|_{2}\leq R\). We can then divide the \(x\)'s by \(A\) and interpret \(y(x)=g(w\cdot x)=g(Aw\cdot(x/A))\). We can then apply Theorem 1.4 with the upper bound on \(w\) set to \(AR\) and recover \(\widehat{w}\), getting \(\mathbf{E}[|g((\widehat{w}/A)\cdot x)-g(w^{*}\cdot x)|]=\mathbf{E}[|g(\widehat{w}\cdot(x/A))-g(w^{*}\cdot x)|]\leq O(\Delta)\). Prior work on linear regression with oblivious noise either assumed that the oblivious noise was symmetric or that the mean of the underlying distribution was zero. Our result holds in a significantly more general setting, even for the special case of linear regression, since we make no assumptions on the quantile of the oblivious noise or the mean of the underlying distribution. At a high level, we prove Theorem 1.4 in three steps: (1) We create an oracle that, given a sufficiently close estimate of \(\Pr[\xi\leq 0]\), generates a hyperplane that separates vectors achieving large loss with respect to \(g(w^{*}\cdot x)\) from those achieving small loss, (2) We use online gradient descent to produce a list of candidate solutions, one of which is close to the actual solution, (3) We apply a unique tournament-style pruning procedure that eliminates all candidates far away from \(w^{*}\). Since we do not have a good estimate of \(\Pr[\xi\leq 0]\), we run steps (1) and (2) for each candidate value of \(1-2\Pr[\xi\leq 0]\) chosen from a uniform partition of \([-1,1]\) and then perform (3) on the union of all these candidates.
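Schematically, the three steps can be combined as in the sketch below. This is illustrative pseudocode under assumed conventions: `g` is the activation, the pruning subroutine of step (3) is left as a placeholder, and the round budget is the one suggested by Lemma 3.4 rather than a tuned constant.

```python
import numpy as np

def oblivious_glm_regression(x, y, g, R, alpha, Delta, sigma, n_grid=50):
    """Schematic of the three-step algorithm (illustrative, not optimized).

    For each guess c of E[sign(xi + eps)] on a uniform grid of [-1, 1],
    run online gradient descent with the separating-hyperplane oracle and
    collect all iterates; the quantile-based pruning of step (3) would then
    be applied to the union of candidates.
    """
    gamma = min(Delta / (4 * sigma), 0.5) if sigma > 0 else 0.5
    T = int(np.ceil((R / (gamma * alpha))**2))      # rounds, as in Lemma 3.4
    candidates = []
    for c in np.linspace(-1.0, 1.0, n_grid):        # grid over 1 - 2 Pr[xi <= 0]
        w = np.zeros(x.shape[1])
        for t in range(1, T + 1):
            # Empirical separating hyperplane (Corollary 3.2).
            H = np.mean((np.sign(g(x @ w) - y) - c)[:, None] * x, axis=0)
            w = w - (R / (2 * np.sqrt(t))) * H      # OGD step: eta_t = R/(G sqrt(t)), G = 2
            nw = np.linalg.norm(w)
            if nw > R:                              # project back onto B_d(R)
                w *= R / nw
            candidates.append(w.copy())
    # Step (3) placeholder: prune candidates failing the quantile test of
    # Lemma C.1 (the tournament-style procedure described in the overview).
    return candidates
```

In practice \(T\) can be very large for small \(\alpha\); the sketch only records the control flow, with the formal sample and round complexities given by Theorem 1.4.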
### Technical Overview For simplicity of exposition, we will analyze the problem without additive Gaussian noise and when the oblivious noise \(\xi\) is symmetric. This is the typical scenario for linear regression with oblivious noise in the context of general distributions. Inspired by the fact that the median of a dataset can be expressed as the \(\ell_{1}\) minimizer of the dataset, a natural idea is to minimize the \(\ell_{1}\) loss \(\mathcal{L}_{g}(w):=\frac{1}{m}\sum_{i=1}^{m}|y_{i}-g(x_{i}\cdot w)|\). This simple approach has been used in the context of linear regression with oblivious noise [11] and also ReLU regression for Massart noise [10]. Unfortunately, if the activation function \(g\) is not linear, the loss \(\mathcal{L}_{g}(w)\) is not convex. Let \(\mathcal{L}_{g}^{*}(w):=\frac{1}{m}\sum_{i=1}^{m}|g(w^{*}\cdot x_{i})-g(w\cdot x_{i})|\) denote the clean loss. To solve the problem of optimizing a nonconvex function, instead of using gradient-based methods, we can create an oracle that produces a separating hyperplane between points achieving a large clean loss and those achieving a small clean loss. The oracle produces a vector \(H(w)\) satisfying \(H(w)\cdot(w-w^{*})\geq c>0\). We then reduce the problem to online convex optimization (OCO). **Oracle for Separating Hyperplane.** Unfortunately, unlike the case of convex functions -- or as it was used in [4] to perform ReLU regression -- we cannot use \(\nabla\mathcal{L}_{g}(w)=\mathbf{E}_{x,y}[\operatorname{sign}(g(w\cdot x)-y)\mathbf{1}(w\cdot x\geq 0)\ x]\) as an oracle for generating a separating hyperplane, since it cannot distinguish \(w=0\) from \(w^{*}\) even when \(w^{*}\neq 0\). This is illustrated in Figure 1 for \(g=\operatorname{ReLU}\). We instead take inspiration from the gradient of a _linear regression_ problem. Suppose we are given samples \((x_{i},y_{i})\) such that \(y_{i}=w^{*}\cdot x_{i}+\xi^{\prime}\), where \(\xi^{\prime}\) is symmetric oblivious noise such that \(\Pr[\xi^{\prime}=0]\geq\alpha\), and the goal is to recover \(\widehat{w}\) which is the \(\ell_{1}\) minimizer, i.e., \(\widehat{w}:=\operatorname{argmin}_{w\in\mathbb{R}^{d}}\mathcal{L}^{\prime}(w):=\frac{1}{m}\sum_{i=1}^{m}\lvert y_{i}-w\cdot x_{i}\rvert.\) This is a convex program, and a subgradient of \(\mathcal{L}^{\prime}(w)\) is given by \(\nabla\mathcal{L}^{\prime}(w)=\mathbf{E}_{x,y}[\operatorname{sign}(w\cdot x-y)\ x]=\mathbf{E}_{x}\left[\mathbf{E}_{\xi^{\prime}}\left[\operatorname{sign}((w\cdot x-w^{*}\cdot x)-\xi^{\prime})\right]x\right].\) We now examine the expectation over \(\xi^{\prime}\). Since the median of \(\xi^{\prime}\) is \(0\) and it takes the value \(0\) with probability at least \(\alpha\), the probability that \(\operatorname{sign}((w\cdot x-w^{*}\cdot x)-\xi^{\prime})=\operatorname{sign}(w\cdot x-w^{*}\cdot x)\) is at least \(\frac{1+\alpha}{2}\). This implies that \(\mathbf{E}_{\xi^{\prime}}\left[\operatorname{sign}((w\cdot x-w^{*}\cdot x)-\xi^{\prime})\right]\geq\alpha\ \operatorname{sign}(w\cdot x-w^{*}\cdot x)\), since \((w\cdot x-w^{*}\cdot x)-\xi^{\prime}\) is more often biased towards \((w\cdot x-w^{*}\cdot x)\) than it is towards \(-(w\cdot x-w^{*}\cdot x)\) and \(\Pr[\xi^{\prime}=0]\geq\alpha\). Therefore, \(\nabla\mathcal{L}^{\prime}(w)\cdot(w-w^{*})\geq\alpha\,\mathbf{E}_{x}[\lvert w\cdot x-w^{*}\cdot x\rvert]\). While we do not have access to \(w^{*}\cdot x+\xi^{\prime}\), we do have access to \(g(w^{*}\cdot x)+\xi\).
At this point we make two observations: (1) Since \(g\) is monotonically non-decreasing, it follows that \(\operatorname{sign}(g(w\cdot x)-g(w^{*}\cdot x))=\operatorname{sign}(w\cdot x-w^{*}\cdot x)\) whenever \(g(w\cdot x)\neq g(w^{*}\cdot x)\). (2) Since \(g\) is \(1\)-Lipschitz, it follows that \(\lvert w\cdot x-w^{*}\cdot x\rvert\geq\lvert g(w\cdot x)-g(w^{*}\cdot x)\rvert\). An argument analogous to the one above then shows us that \(H(w):=\mathbf{E}_{x,y}\left[\operatorname{sign}(g(w\cdot x)-y)\ x\right]\) satisfies \(H(w)\cdot(w-w^{*})\geq\alpha\,\mathbf{E}_{x}[\lvert g(w\cdot x)-g(w^{*}\cdot x)\rvert]\), hence allowing us to separate \(w\)'s which achieve small clean loss from those which achieve larger clean loss. In Lemma 3.1, we demonstrate this in the presence of additive Gaussian noise and without the assumption of symmetry on \(\xi\). **Reduction to Online Convex Optimization.** In Lemma 3.4, we show that if we have a good estimate of the quantile at which \(\xi\) is \(0\), we can use our separating hyperplane oracle as the gradient oracle for online gradient descent to optimize the clean loss \(\mathcal{L}_{g}^{*}(w):=\mathbf{E}_{x}\left[\lvert g(w\cdot x)-g(w^{*}\cdot x)\rvert\right]\). Since this function is nonconvex, our reduction leaves us with a set of candidates which are iterates of our online gradient descent procedure. Our minimizer is one of these candidates. We then prune out candidates which do not explain the data. **Pruning Bad Candidates.** Finally, Lemma C.1 shows that we can efficiently prune implausible candidates if the list of candidates contains a vector close to \(w^{*}\). For simplicity of exposition, assume for now that \(w^{*}\in\mathcal{W}\). Our pruning procedure relies on the following two observations: (1) There is no way to find disjoint subsets of the space of \(x\)'s such that \((y-g(w^{*}\cdot x))\) takes the value \(0\) at different quantiles when conditioned on these subsets. (2) Suppose that, for some \(A\), we identify \(E^{+}\) and \(E^{-}\) such that \(x\in E^{+}\) implies \(g(w\cdot x)-g(w^{*}\cdot x)-A>\tau\) and \(x\in E^{-}\) implies \(g(w\cdot x)-g(w^{*}\cdot x)-A<-\tau\). Then the quantiles at which \((y-g(w\cdot x))=(g(w^{*}\cdot x)-g(w\cdot x)+\xi+\epsilon)\) take the value \(0\) in these two sets differ by at least \(\alpha\). We can use these observations to determine if a given candidate is equal to \(w^{*}\) or not, by looking at the quantity \(g(w\cdot x)-g(w^{*}\cdot x)-A\). Specifically, we try to find two subsets \(E^{+}\) and \(E^{-}\) such that \(g(w\cdot x)-g(w^{*}\cdot x)\) is large and positive when \(x\in E^{+}\), and is large and negative when \(x\in E^{-}\). We reject \(w\) by comparing the quantiles of \((y-g(w\cdot x))\) when conditioned on \(x\) belonging to \(E^{+}\) and \(E^{-}\). While we do not know what \(w^{*}\) is beforehand, we know that \(w^{*}\in\mathcal{W}\), and so we iterate over elements in \(\mathcal{W}\) to check for the existence of a partition which will allow us to reject \(w\). If \(w=w^{*}\), such a partition is not possible, and \(w\) will not be rejected. On the other hand, each candidate remaining in the list will be close to a translation of \(g(w^{*}\cdot x)\), and one of the candidates will be \(w^{*}\). ### Prior Work Given the extensive literature on robust regression, here we focus on the most relevant work. **GLM regression.** Various formalizations of GLM regression have been studied extensively over the past decades; see, e.g., [23, 24, 25, 26].
Recently, there has been increased focus on GLM regression for activation functions that are popular in deep learning, including ReLUs. This problem has previously been considered both in the context of weaker noise models, such as the realizable/random additive noise setting [19, 26, 27], as well as for more challenging noise models, including adversarial label noise [1, 25, 24, 27, 28, 29, 30]. Even in the realizable setting (i.e., with clean labels), it turns out that the squared loss has exponentially many local minima for the logistic activation function [1]. On the positive side, [29] gave an efficient learner achieving a constant factor approximation in the presence of adversarial label noise under isotropic logconcave distributions. This algorithmic result was generalized to broader families of activations under much milder distributional assumptions in [29, 30]. On the other hand, without distributional assumptions, even approximate learning is computationally hard [10, 26, 27]. In a related direction, the problem has been studied in the distribution-free setting under semi-random label noise. Specifically, [13] focused on the setting of bounded (Massart) noise, where the adversary can arbitrarily corrupt a randomly selected subset of at most half the samples. Even earlier, [26] studied the realizable setting under a noise model similar to (but more restrictive than) the Massart noise model, while [28] studied a classification version of learning GLMs with Massart noise. In our work, we study the problem of distribution-free learning of general GLMs in the presence of oblivious noise, with the goal of being able to tolerate a \(1-o(1)\) fraction of the samples being corrupted. In this setting, we recover the candidate solution to arbitrarily small precision in \(\ell_{1}\) norm (with respect to the objective). Since \(\|x\|_{2}\leq 1\) and \(\|w\|_{2}\leq R\), it is easy to also provide the corresponding guarantees in the \(\ell_{2}\) norm, which was the convention in some earlier works [11, 12]. **Algorithmic Robust Statistics.** A long line of work, initiated in [10, 13], has focused on designing robust estimators for a range of learning tasks (both supervised and unsupervised) in the presence of a small constant fraction of adversarial corruptions; see [10] for a textbook overview of this field. In the supervised setting, the underlying contamination model allows for corruptions in both \(x\) and \(y\) and makes no further assumptions on the power of the adversary. A limitation of these results is the assumption that the good data comes from a well-behaved distribution. On the other hand, without distributional assumptions on the clean data, these problems become computationally intractable. This fact motivates the study of weaker -- but still realistic -- noise models in which efficient noise-tolerant algorithms are possible in the distribution-free setting. **Adversarial Label Corruptions.** In addition to adversarial corruptions in both \(x\) and \(y\), the adversarial label corruption model is another model which has been extensively studied in the context of linear and polynomial regression problems [1, 2, 1, 1, 13, 14, 15]. These results make strong assumptions on the distribution over the marginals. In contrast, we make no assumptions beyond the marginals coming from a bounded norm distribution.
**The Oblivious Noise Model.** The oblivious noise model could be viewed as an attempt at characterizing the most general noise model that allows almost all points to be arbitrarily corrupted, while still allowing for recovery of the target function with vanishing error. This model has been considered for natural statistical problems, including PCA, sparse recovery [10, 11], as well as linear regression in the online setting [11] and the problem of estimating a signal \(x^{*}\) with additive oblivious noise [13]. The setting closest to the one considered in this paper is that of linear regression. Until very recently, the problem had been studied primarily in the context of Gaussian design matrices, i.e., when \(x\)'s are drawn from \(\mathcal{N}(0,\Sigma)\). One of the main goals in this line of work is to design an algorithm that can tolerate the largest possible fraction of the labels being corrupted. Initial works on linear regression either were not consistent, as the error did not go to \(0\) with increasing samples [12, 13], or failed to achieve the right convergence rates or breakdown point [11, 13]. [15] provided the first consistent estimator achieving an error of \(O(d/\alpha^{2}m)\) for any \(\alpha>1/\log\log(m)\). Later, [13] improved this rate to \(\alpha>1/d^{c}\) for constant \(c\), while also generalizing the class of design matrices. Most of these prior results focused on either the oblivious noise being symmetric (or median \(0\)), or the underlying distribution being (sub)-Gaussian. In some of these settings (such as that of linear regression) it is possible to reduce the general problem to this restrictive setting, as is done in [13]. However, for GLM regression, we cannot exploit the symmetry that is either induced by the distribution or the class of linear functions. In terms of lower bounds, [10] identify a "well-spreadness" condition (the column space of the measurements being far from sparse vectors) as a property that is necessary for recovery even when the oblivious noise is symmetric. Notably, these lower bounds are relevant when the goal is to perform parameter recovery or achieve a rate better than \(\sigma^{2}/m\). In our paper, we instead give the first result for a far more general problem and with the objective of minimizing the clean loss, but not necessarily parameter recovery. Our lower bound follows from the fact that we cannot distinguish between translations of the target function from the data without making any assumptions on the oblivious noise. ## 2 Preliminaries **Basic Notation.** We use \(\mathbb{R}\) to denote the set of real numbers. For \(n\in\mathbb{Z}_{+}\) we denote \([n]:=\{1,\ldots,n\}\). We assume \(\operatorname{sign}(0)=0\). We denote by \(\mathbf{1}(E)\) the indicator function of the event \(E\). We use \(\operatorname{poly}(\cdot)\) to indicate a quantity that is polynomial in its arguments. Similarly, \(\operatorname{polylog}(\cdot)\) denotes a quantity that is polynomial in the logarithm of its arguments. For two functions \(f,g:\mathbb{R}\to\mathbb{R}\), we say \(f\lesssim g\) if there exist constants \(C_{1},C_{2}>0\) such that for all \(x\geq C_{1}\), \(f(x)\leq C_{2}g(x)\). For two numbers \(a,b\in\mathbb{R}\), \(\min(a,b)\) returns the smaller of the two. We say that a function \(f\) is \(L\)-Lipschitz if \(f(x)-f(y)\leq L\|x-y\|_{2}\). **Linear Algebra Notation.** We typically use lowercase letters for deterministic vectors and scalars. For a vector \(v\), we let \(\|v\|_{2}\) denote its \(\ell_{2}\)-norm.
We denote the inner product of two vectors \(u,v\) by \(u\cdot v\). We denote the \(d\)-dimensional radius-\(R\) ball centered at the origin by \(B_{d}(R)\). **Probability Notation.** For a random variable \(X\), we use \(\mathbf{E}[X]\) for its expectation and \(\Pr[X\in E]\) for the probability of the random variable belonging to the set \(E\). We use \(\mathcal{N}(\mu,\sigma^{2})\) to denote the Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\). When \(D\) is a distribution, we use \(X\sim D\) to denote that the random variable \(X\) is distributed according to \(D\). When \(S\) is a set, we let \(\mathbf{E}_{X\sim S}[\cdot]\) denote the expectation under the uniform distribution over \(S\). When clear from context, we denote the empirical expectation and probability by \(\widehat{\mathbf{E}}\) and \(\widehat{\Pr}\). **Basic Technical Facts.** The proofs of the following facts can be found in Appendix B. **Fact 2.1**.: _Let \(\xi\) be oblivious noise such that \(\Pr[\xi=0]\geq\alpha\). Then the quantity_ \[F_{\sigma,\xi}(t):=\mathop{\mathbf{E}}_{\epsilon,\xi}[\operatorname{sign}(t+\epsilon+\xi)]-\mathop{\mathbf{E}}_{\epsilon,\xi}[\operatorname{sign}(\epsilon+\xi)]\] _satisfies the following: (1) \(F_{\sigma,\xi}\) is strictly increasing, (2) \(\operatorname{sign}(F_{\sigma,\xi}(t))=\operatorname{sign}(t)\), and (3) For any \(\gamma\leq 2\), whenever \(|t|\geq\gamma\sigma\), \(|F_{\sigma,\xi}(t)|>(\gamma\alpha/4)\) and whenever \(|t|\leq\gamma\sigma\), \(|F_{\sigma,\xi}(t)|\leq(\alpha|t|/4\sigma)\)._ **Fact 2.2**.: _Let \(X\) be a random variable on \(\mathbb{R}\). Fix \(\tau>0\) and \(\eta>0\). Define the events \(E_{A}^{+}\) and \(E_{A}^{-}\) such that \(\Pr[E_{A}^{+}]=\Pr[X>A+\tau]\) and \(\Pr[E_{A}^{-}]=\Pr[X<A-\tau]\). Then if the following first condition is not true, the second condition is: (1) \(\exists A\in\mathbb{R}\) such that \(\Pr[E_{A}^{+}]\geq\eta\) and \(\Pr[E_{A}^{-}]\geq\eta\). (2) \(\exists A^{*}\in\mathbb{R}\) such that \(\Pr[E_{A^{*}}^{+}]\leq\eta\) and \(\Pr[E_{A^{*}}^{-}]\leq\eta\)._ ## 3 Oblivious Regression via Online Convex Optimization ### A Direction of Improvement We assume prior knowledge of a constant \(c\) that approximates \(\mathbf{E}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]\). In the following key lemma, we demonstrate an oracle for a hyperplane that separates all vectors that are \(\Delta\)-separated from \(w^{*}\). For the following results in Section 3 and later in the paper, we use \(\gamma\) to denote \(\min(\Delta/4\sigma,1/2)\). **Lemma 3.1** (Separating Hyperplane).: _Let \(\mathcal{D}=\text{GLM-Ob}(g,\sigma,w^{*})\) as defined in Definition 1.1 and define \(\gamma=\min(\Delta/4\sigma,1/2)\). Suppose \(c\in\mathbb{R}\) is such that \(|\mathbf{E}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\leq\gamma\alpha\Delta/32R\). Then, for \(w\in B_{d}(R)\), \(H_{c}(w):=\mathbf{E}_{x,y}\left[(\operatorname{sign}(y-g(w\cdot x))-c)\ x\right]\) satisfies_ \[H_{c}(w)\cdot(w^{*}-w)\geq(\gamma\alpha/4)\mathop{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\right]-(\gamma^{2}\alpha\sigma/4)-(\gamma\alpha\Delta/16).\] _Specifically, if \(\mathbf{E}_{x}[|(g(w^{*}\cdot x)-g(w\cdot x))|]>\Delta\), we have that \(H_{c}(w)\cdot(w^{*}-w)\geq(\alpha\Delta^{2})/(32\sigma)\) if \(\Delta\leq 2\sigma\), and \(H_{c}(w)\cdot(w^{*}-w)\geq\alpha\Delta/8\) if \(\Delta>2\sigma\)._ Proof.: Let \(F_{\sigma,\xi}(t):=\mathbf{E}_{\epsilon,\xi}[\operatorname{sign}(t+\epsilon+\xi)-\operatorname{sign}(\epsilon+\xi)]\).
Then we can write \[H_{c}(w)\cdot(w^{*}-w) =\operatorname*{\mathbf{E}}_{x,\epsilon,\xi}[(\operatorname{sign}(g(w^{*}\cdot x)-g(w\cdot x)+\epsilon+\xi)-c)\;\left(x\cdot(w^{*}-w)\right)]\] \[=\operatorname*{\mathbf{E}}_{x}\left[\left(F_{\sigma,\xi}(g(w^{*}\cdot x)-g(w\cdot x))+(\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c)\right)\;(x\cdot(w^{*}-w))\right]\] \[=\operatorname*{\mathbf{E}}_{x}\left[F_{\sigma,\xi}(g(w^{*}\cdot x)-g(w\cdot x))(x\cdot(w^{*}-w))\right]\] \[\qquad+(\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c)\;\operatorname*{\mathbf{E}}_{x}\left[x\cdot(w^{*}-w)\right].\] By Fact 2.1 and the fact that \(g\) is monotone, it follows that \(\operatorname{sign}(F_{\sigma,\xi}(g(w^{*}\cdot x)-g(w\cdot x)))=\operatorname{sign}(g(w^{*}\cdot x)-g(w\cdot x))=\operatorname{sign}(x\cdot(w^{*}-w))\) whenever \(g(w^{*}\cdot x)\neq g(w\cdot x)\). Combining this with the fact that \(g(\cdot)\) is \(1\)-Lipschitz, we get \[\operatorname*{\mathbf{E}}_{x}\left[F_{\sigma,\xi}(g(w^{*}\cdot x)-g(w\cdot x))(x\cdot(w^{*}-w))\right]\geq\operatorname*{\mathbf{E}}_{x}\left[F_{\sigma,\xi}(g(w^{*}\cdot x)-g(w\cdot x))(g(w^{*}\cdot x)-g(w\cdot x))\right]\;.\] Continuing the calculation above, we see \[H_{c}(w)\cdot(w^{*}-w)\geq\operatorname*{\mathbf{E}}_{x}\left[F_{\sigma,\xi}(g(w^{*}\cdot x)-g(w\cdot x))(g(w^{*}\cdot x)-g(w\cdot x))\right]-2R\;|\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\;,\] where the bound on the second quantity follows from the fact that \(\|w\|_{2},\|w^{*}\|_{2}\leq R\) and \(\|x\|_{2}\leq 1\). Fact 2.1 implies that \(|F_{\sigma,\xi}(t)|\geq\gamma\alpha/4\) if \(|t|\geq\gamma\sigma\), whenever \(\gamma\leq 2\). We now consider the event \(E_{\gamma}:=\{x\mid|g(w^{*}\cdot x)-g(w\cdot x)|\geq\gamma\sigma\}\), which describes the region where there is significant difference between the hypothesis \(w\) and the target \(w^{*}\). Then we can write \[H_{c}(w)\cdot(w^{*}-w) \geq(\gamma\alpha/4)\;\operatorname*{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\mathbf{1}(x\in E_{\gamma})\right]-2R\;|\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\] \[\geq(\gamma\alpha/4)(\operatorname*{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\right]-\gamma\sigma)-2R\;|\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\] \[\geq(\gamma\alpha/4)\operatorname*{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\right]-(\gamma^{2}\alpha\sigma/4)-2R\;|\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\;.\] In the case that \(\operatorname*{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\right]>\Delta\), we would like to set the parameter \(\gamma\) such that \((\gamma^{2}\alpha\sigma/4)+2R\;|\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\leq\gamma\alpha\Delta/8\), ensuring that the right hand side above is strictly positive. By assumption, we know that \(c\) satisfies \(|\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\leq(\gamma\alpha\Delta/32R)\), so it suffices for \(\gamma\) to satisfy \((\gamma^{2}\alpha\sigma/2)\leq\gamma\alpha\Delta/8\), i.e., \(\gamma\leq\Delta/4\sigma\), in addition to \(\gamma\leq 2\). Here, we set \(\gamma=\min(\Delta/4\sigma,1/2)\).
Putting these together, we see that when \(\Delta\leq 2\sigma\), it holds \[H_{c}(w)\cdot(w^{*}-w) \geq(\gamma\alpha/4)\operatorname*{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\right]-(\gamma^{2}\alpha\sigma/4)-2R\;|\operatorname*{\mathbf{E}}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]-c|\] \[\geq(\gamma\alpha/4)\operatorname*{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\right]-(\gamma^{2}\alpha\sigma/4)-(\gamma\alpha\Delta/16)\] \[\geq(\alpha\Delta)/(16\sigma)\operatorname*{\mathbf{E}}_{x}\left[|(g(w^{*}\cdot x)-g(w\cdot x))|\right]-(\alpha\Delta^{2})/(64\sigma)-(\alpha\Delta^{2}/64\sigma).\] In the case that we look at a vector \(w\) that is \(\Delta\)-separated from \(w^{*}\), the lower bound we get is \((\alpha\Delta^{2})/(32\sigma)\) when \(\Delta\leq 2\sigma\), while the lower bound is \(\alpha\Delta/8\) when \(\Delta>2\sigma\). The following corollary allows us to extend Lemma 3.1 to the empirical setting. The proof of the corollary can be found in Appendix A. **Corollary 3.2** (Empirical Separating Hyperplane).: _Let \((x_{i},y_{i})_{i=1}^{m}\sim\) GLM-Ob\((g,\sigma,w^{*})^{m}\), where \(m\gtrsim R^{2}\ln(1/\delta)/(\gamma\alpha\Delta)^{2}\). Assume \(c\) satisfies the assumption in Lemma 3.1. Define \(\widehat{H}_{c}(w):=(1/m)\sum_{i=1}^{m}\left[\left(\operatorname{sign}(g(w\cdot x_{i})-y_{i})-c\right)\ x_{i}\right]\). Then, for any \(w\), it holds_ \[\widehat{H}_{c}(w)\cdot(w-w^{*})\geq(\gamma\alpha/4)\operatorname*{\mathbf{E}}_{x}\left[\left|(g(w^{*}\cdot x)-g(w\cdot x))\right|\right]-\gamma^{2}\alpha\sigma/4-3\ (\gamma\alpha\Delta/32)\] _with probability at least \(1-\delta\)._ While not directly useful in the proof we present here, as pointed out by a reviewer, we note that our direction of improvement \(\widehat{H}_{c}(w)\), as defined in Corollary 3.2, can be interpreted as the gradient of the convex surrogate loss \((1/m)\ \sum_{i=1}^{m}\int_{0}^{w\cdot x_{i}}(\operatorname{sign}(g(z)-y_{i})-c)\ dz\). This is analogous to the "matching loss" \((1/m)\ \sum_{i=1}^{m}\int_{0}^{w\cdot x_{i}}(g(z)-y_{i})\ dz\) considered for \(\ell_{2}\) GLM regression, introduced in the work of [1] and used extensively in subsequent works. ### Reduction to Online Convex Optimization If \(c\) is a good approximation of \(\mathbf{E}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]\), we can reduce the problem to online convex optimization to obtain a set of candidates, one of which is close to the true solution. **OCO Setting.** The typical online convex optimization scenario can be modelled as the following game: at time \(t-1\) the player must pick a candidate point \(w_{t}\) belonging to a certain constrained set \(W\). At time \(t\) the true convex loss \(f_{t}(\cdot)\) is revealed and the player suffers a loss of \(f_{t}(w_{t})\). This continues for a total of \(T\) rounds. Algorithms for these settings typically upper bound the regret \(R(\{w_{t}\}_{t=1}^{T})\), which measures the performance with respect to the optimal fixed point in hindsight, \(R(\{w_{t}\}_{t=1}^{T}):=\sum_{t=1}^{T}f_{t}(w_{t})-\min_{w\in W}\left(\sum_{t=1}^{T}f_{t}(w)\right)\). We specialize Theorem 3.1 from [1] to our setting to get the following lemma. **Lemma 3.3** (see, e.g., Theorem 3.1 from [1]).: _Suppose \(v_{1},\ldots,v_{T}\in\mathbb{R}^{d}\) are such that \(\|v_{t}\|_{2}\leq G\) for all \(t\in[T]\). 
Then online gradient descent with step sizes \(\{\eta_{t}=\frac{R}{G\sqrt{t}}\mid t\in[T]\}\), for linear cost functions \(f_{t}(w):=v_{t}\cdot w\), outputs a sequence of predictions \(w_{1},\ldots,w_{T}\in B_{d}(R)\) such that \(\sum_{t=1}^{T}f_{t}(w_{t})-\min_{\|w\|_{2}\leq R}\sum_{t=1}^{T}f_{t}(w)\leq(3/2)\ GR\sqrt{T}\)._ An application of this lemma then gives us our result for reducing the problem to OCO. **Lemma 3.4** (Reduction to OCO).: _Suppose \((x_{1},y_{1}),\ldots,(x_{m},y_{m})\) are drawn from GLM-Ob\((g,\sigma,w^{*})\) and \(c\) satisfies the assumption in Lemma 3.1. Let \(T\gtrsim(R/\gamma\alpha)^{2}\) and \(m\gtrsim R^{2}\ln(T/\delta)/(\gamma\alpha\Delta)^{2}\). Then there is an algorithm which recovers a set of candidates \(w_{1},\ldots,w_{T}\) with probability \(1-\delta\) such that_ \[\min_{w_{t}}\left\{\operatorname*{\mathbf{E}}_{x}\left[\left|g(w_{t}\cdot x)-g(w^{*}\cdot x)\right|\right]\right\}\leq 3\Delta.\] Proof.: At round \(t\), the player proposes the weight vector \(w_{t}\), at which point the function \(f_{t}(\cdot)\) is revealed to be \(f_{t}(w):=v_{t}\cdot w\), where \(v_{t}:=\widehat{H}_{c}(w_{t})\) as defined in Corollary 3.2. Note that a union bound over the \(T\) final candidates ensures that, with \(m\gtrsim R^{2}\ln(T/\delta)/(\gamma\alpha\Delta)^{2}\) samples, with probability \(1-\delta\), for every \(t\in[T]\), \(\widehat{H}_{c}(w_{t})\) satisfies the conclusion of Corollary 3.2. An application of Lemma 3.3 to this setting gives \[\frac{1}{T}\sum_{t=1}^{T}f_{t}(w_{t})\leq\min_{\|w\|\leq R}\left(\frac{1}{T}\sum_{t=1}^{T}f_{t}(w)\right)+\frac{(3/2)GR}{\sqrt{T}}\leq\frac{1}{T}\sum_{t=1}^{T}f_{t}(w^{*})+\frac{(3/2)GR}{\sqrt{T}}\;.\] Rearranging this and applying Corollary 3.2, we get \[\frac{(3/2)GR}{\sqrt{T}} \geq\frac{1}{T}\left(\sum_{t=1}^{T}f_{t}(w_{t})-\sum_{t=1}^{T}f_{t}(w^{*})\right)=\frac{1}{T}\left(\sum_{t=1}^{T}v_{t}\cdot(w_{t}-w^{*})\right)\] \[\geq\frac{\gamma\alpha}{4T}\ \left(\sum_{t=1}^{T}\mathop{\mathbf{E}}_{x}\left[|g(x\cdot w_{t})-g(x\cdot w^{*})|\right]\right)-\gamma^{2}\alpha\sigma/4-3\ (\gamma\alpha\Delta/32)\] \[\geq(\gamma\alpha/4)\min_{w_{t}}\left\{\mathop{\mathbf{E}}_{x}\left[|g(x\cdot w_{t})-g(x\cdot w^{*})|\right]\right\}-\gamma^{2}\alpha\sigma/4-3\ (\gamma\alpha\Delta/32)\;,\] where the final inequality follows from the fact that the minimum is smaller than the average. Rearranging this gives us \(\frac{6GR}{\gamma\alpha\sqrt{T}}+\gamma\sigma+(3/8)\Delta\geq\min_{w_{t}}\left\{\mathop{\mathbf{E}}_{x}\left[|g(x\cdot w_{t})-g(x\cdot w^{*})|\right]\right\}\). Substituting \(\|v_{t}\|_{2}\leq G=2\) and \(\gamma=\min(\Delta/4\sigma,1/2)\), we get \[O\left(\frac{R}{\gamma\alpha\sqrt{T}}\right)+2\Delta\geq\min_{w_{t}}\left\{\mathop{\mathbf{E}}_{x}\left[|g(x\cdot w_{t})-g(x\cdot w^{*})|\right]\right\}\] and so, setting \(T\gtrsim(R/\gamma\alpha)^{2}\) ensures that we achieve an error of \(3\Delta\). Note that if the quantity being bounded were a convex function of \(w\) (instead of \(\mathop{\mathbf{E}}_{x}[|g(x\cdot w)-g(x\cdot w^{*})|]\)), we would not have to take the minimum over all the iterates in the proof: we could instead use Jensen's inequality and bound the loss of the averaged iterate. Unfortunately, because the objective can be non-convex due to the nonlinearity of the activation function \(g\), we cannot simply use the averaged iterate. 
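To make the reduction concrete, the following is a minimal numerical sketch of this OCO loop. It assumes the identity activation \(g(z)=z\) and a symmetric oblivious-noise model (so that \(c=0\) is a reasonable guess for \(\mathbf{E}[\operatorname{sign}(\xi+\epsilon)]\)); the constants and the noise distribution are illustrative choices of ours, not prescriptions of the analysis above, and the final selection step cheats by using \(w^{*}\), which is only possible in simulation (the actual algorithm replaces it by the pruning of Section 4).

```
import numpy as np

# Sketch of the OCO reduction (Lemma 3.4) for g(z) = z.  Illustrative only.
rng = np.random.default_rng(0)
d, m, R, sigma, alpha = 5, 20000, 2.0, 0.1, 0.3
w_star = rng.normal(size=d)
w_star *= R / (2 * np.linalg.norm(w_star))           # ||w*||_2 <= R

x = rng.normal(size=(m, d))
x /= np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1.0)  # ||x||_2 <= 1
eps = sigma * rng.normal(size=m)                      # Gaussian noise
xi = np.where(rng.random(m) < alpha, 0.0,             # oblivious noise: 0 w.p. alpha,
              10.0 * rng.standard_cauchy(m))          # heavy-tailed otherwise
g = lambda z: z
y = g(x @ w_star) + eps + xi

c = 0.0                                               # guess; Algorithm 2 grids over c

def H_hat(w):
    """Empirical direction of improvement from Corollary 3.2."""
    return np.mean(np.sign(g(x @ w) - y)[:, None] * x, axis=0)

G, T = 2.0, 4000                                      # ||v_t||_2 <= 1 + |c| <= G
w, candidates = np.zeros(d), []
for t in range(1, T + 1):
    candidates.append(w.copy())
    w = w - (R / (G * np.sqrt(t))) * H_hat(w)         # online gradient step
    if np.linalg.norm(w) > R:                         # project back onto B_d(R)
        w *= R / np.linalg.norm(w)

# For illustration we select the best iterate using w*; the actual algorithm
# must instead prune candidates without knowing w* (Section 4).
best = min(candidates, key=lambda u: np.mean(np.abs(g(x @ u) - g(x @ w_star))))
print("empirical clean loss:", np.mean(np.abs(g(x @ best) - g(x @ w_star))))
```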
## 4 Pruning Implausible Candidates 
```
input: \(\tau,\alpha,\sigma,R,\mathcal{W}=\{w_{1},\ldots,w_{p}\}\)
Draw \(m=C\log(|\mathcal{W}|^{2}/\delta)/(\alpha\tau(\min\{\tau/2\sigma,1\}))^{6}\) samples \(\{(x_{k},y_{k})\}_{k=1}^{m}\) for some constant \(C\).
for \(i\gets 1\ldots p\) do
  for \(j\gets i+1\ldots p\) do
    Let \(E_{A}^{+}:=\{x_{k}\mid g(w_{i}\cdot x_{k})-g(w_{j}\cdot x_{k})>A\}\) and \(E_{A}^{-}:=\{x_{k}\mid g(w_{i}\cdot x_{k})-g(w_{j}\cdot x_{k})<A\}\)
    Compute the range \(U^{+}\) of \(A\) such that \(|E_{A+\tau/2}^{+}|\geq\alpha m\min\{\tau/2\sigma,1/4\}\), via binary search over the at most \(m\) distinct values \(g(w_{i}\cdot x_{k})-g(w_{j}\cdot x_{k})-\tau/2\), and similarly \(U^{-}\) for \(|E_{A-\tau/2}^{-}|\)
    Let \(A\leftarrow\) any number in \(U^{+}\cap U^{-}\)
    if no such \(A\) exists then
      continue to the \((j+1)\)-th inner loop
    end if
    Compute \(R^{+}=\{y_{k}-g(w_{i}\cdot x_{k})\ :\ x_{k}\in E_{A+\tau/2}^{+}\}\) and similarly \(R^{-}\) for \(E_{A-\tau/2}^{-}\)
    if \(|\widehat{\mathbf{E}}[\operatorname{sign}(r-A)\mid r\in R^{+}]-\widehat{\mathbf{E}}[\operatorname{sign}(r-A)\mid r\in R^{-}]|>\alpha\min\{\tau/16\sigma,1/8\}\) then
      reject \(w_{i}\) and continue with the \((i+1)\)-th outer loop
    end if
  end for
end for
for \(i\gets 1\ldots p\) do
  for \(j\gets 1\ldots p\) do
    if \(\frac{1}{m}\sum_{k=1}^{m}|g(w_{i}\cdot x_{k})-g(w_{j}\cdot x_{k})|>3\Delta\) then
      return \(\mathcal{W}\)
    end if
  end for
end for
Sample \(\widehat{w}\) uniformly from \(\mathcal{W}\).
return \(\{\widehat{w}\}\).
```
**Algorithm 1** Prune Implausible Candidates Lemma 3.4 can generate potential solutions that achieve a low clean loss with respect to \(g(w^{*}\cdot x)\) if \(c\) is a good approximation of \(\mathbf{E}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]\). Unfortunately, it is difficult to verify the accuracy of these candidates on the data, since the generality of the setting makes it impossible to differentiate between translations of \(g(w^{*}\cdot x)\), and since \(\mathbf{E}_{\xi,\epsilon}[\operatorname{sign}(\xi+\epsilon)]\) is unknown. Our algorithm generates \(T\) candidates for each value of \(c\) in a uniform partition of \([-1,1]\). One of the candidates is close to \(w^{*}\); however, the problem of spurious candidates still remains. In this section, we discuss how to determine which of the candidate solutions is the best fit for the data. Even though it is difficult to test whether a single hypothesis achieves a small clean loss, it is surprisingly possible to find a good hypothesis out of a list of candidates. Algorithm 1 describes a tournament-style testing procedure which produces a set of candidates approximately equal to \(g(w^{*}\cdot x)\), and if efficient identifiability holds for the instance, this list will only contain one candidate. The proof of Lemma C.1 is presented in Appendix C. **Lemma 4.1** (Pruning bad candidates).: _Let \(\delta>0\). Suppose \(\exists\widehat{w}\in\mathcal{W}\) such that_ \[\mathop{\mathbf{E}}_{x}\left[|g(\widehat{w}\cdot x)-g(w^{*}\cdot x)|\right]\leq\min\{\Delta,\tau^{2}/16\}.\] _Then Algorithm 1 draws \(m\gtrsim\log(|\mathcal{W}|^{2}/\delta)/(\alpha^{2}\tau^{4}(\min\{\tau/\sigma,1\})^{2})+R^{2}\log(|\mathcal{W}|^{2}/\delta)/\Delta^{2}\) samples, runs in time \(\tilde{O}(dm|\mathcal{W}|^{2})\), and with probability \(1-\delta\) returns a list of candidates containing \(\widehat{w}\) such that each candidate satisfies \(\Pr[|g(w^{*}\cdot x)-g(w\cdot x)-A_{w}|>\tau]\leq\tau\) for some \(A_{w}\in\mathbb{R}\). 
If \((\Delta,\tau)\)-identifiability (Definition 1.2) holds, the algorithm only returns a single candidate \(\widehat{w}\) which achieves a clean loss of \(4\Delta\)._ **Proof Sketch.** For the sake of exposition, suppose \(\widehat{w}=w^{*}\) and the empirical estimates equal the true expectations. Define the events \(E_{s}^{+}(u,v):=\{x\mid g(u\cdot x)-g(v\cdot x)>s\}\) and \(E_{s}^{-}(u,v):=\{x\mid g(u\cdot x)-g(v\cdot x)<s\}\). An application of Fact 2.2 to the random variable \(g(w^{*}\cdot x)-g(w\cdot x)\) implies that, for any \(\tau\), if the first condition below is false, then the second is true: 1. \(\exists A\in\mathbb{R}\) such that \(\Pr[E_{A+\tau}^{+}(w^{*},w)]\geq\tau/2\) and \(\Pr[E_{A-\tau}^{-}(w^{*},w)]\geq\tau/2\). 2. \(\exists A\in\mathbb{R}\) such that \(\Pr[E_{A+\tau}^{+}(w^{*},w)]\leq\tau/2\) and \(\Pr[E_{A-\tau}^{-}(w^{*},w)]\leq\tau/2\). If \(w\) satisfies Condition 1, then \(g(w^{*}\cdot x)-g(w\cdot x)-A\) takes values \(>\tau\) and \(<-\tau\) when \(x\in E_{A+\tau}^{+}(w^{*},w)\) and \(x\in E_{A-\tau}^{-}(w^{*},w)\) respectively. This means the quantile at which \((y-g(w\cdot x))\) takes the value \(0\) is different conditioned on \(x\) coming from these two sets. Let \(R^{+}:=\{(y-g(w\cdot x))\mid x\in E_{A+\tau}^{+}(w^{*},w)\}\) and \(R^{-}:=\{(y-g(w\cdot x))\mid x\in E_{A-\tau}^{-}(w^{*},w)\}\). Our algorithm rejects \(w\) if there is an \(A\) such that \(|\widehat{\mathbf{E}}[\text{sign}(r-A)\mid r\in R^{+}]-\widehat{\mathbf{E}}[\text{sign}(r-A)\mid r\in R^{-}]|\) is large. This will be the case, since elements of \(R^{+}\) and \(R^{-}\) are drawn from the distribution of \(\xi+\epsilon\) shifted by at least \(\tau\) in opposite directions, and \(\xi\) places a mass of \(\alpha\) at \(0\). Hence, all remaining candidates satisfy Condition 2, which means they are approximate translations of \(g(w^{*}\cdot x)\). Also, since \(w^{*}\) is never rejected, we know that \(w^{*}\) also belongs to this list. If \((\Delta,\tau)\)-identifiability holds, every element of the final list achieves clean loss \(\Delta\). We can test this by checking whether every pair of candidates in the list is \(2\Delta\)-close, and if they are, returning any element of the list. ## 5 Main Results 
```
input: \(\{(x_{i},y_{i})\mid i\in[m]\}\sim\text{GLM-Ob}(g,\sigma,w^{*})^{m},R,\sigma,\tau,\alpha\), where \(w^{*}\) is unknown and \(\|w^{*}\|_{2}\leq R\).
Let \(P\) be a uniform partition of \([-1,1]\) with granularity \(\gamma\alpha\Delta/64R\).
for \(c\) in \(P\) do
  Set the parameter \(\Delta\) in Lemma 3.4 to be \(\min(\Delta/3,\tau^{2}/48)\).
  Generate a list of \(T\) candidates \(\mathcal{W}_{c}\) given by each step of the algorithm in Lemma 3.4.
end for
Run Algorithm 1 with parameters \(\alpha,\sigma,\cup_{c\in P}\mathcal{W}_{c}\) to get list \(\mathcal{L}\).
Return \(\mathcal{L}\).
```
**Algorithm 2** Oblivious GLM Regression Finally, we state and prove our two main results. Our first result is a lower bound, demonstrating the necessity of our condition for efficient identifiability. Our second result is our algorithmic guarantee, demonstrating that if efficient identifiability holds, our algorithm returns a hypothesis achieving a small clean loss. ### Necessity of the Identifiability Condition for Unique Recovery **Theorem 5.1**.: _Suppose GLM-\(\text{Ob}(g,\sigma,w^{*})\) is not \((\Delta,\tau)\)-identifiable, and suppose \(u,v\in\mathbb{R}^{d}\) and \(A\in\mathbb{R}\) witness this, i.e. 
\(u,v\) are \(\Delta\)-separated but satisfy \(\Pr_{x}\left[|g(u\cdot x)-g(v\cdot x)-A|>\tau\right]\leq\tau\). Then any algorithm that distinguishes between \(u\) and \(v\) with probability at least \(1-\delta\) requires \(m=\Omega(\min(\sigma,1)\ln(1/\delta)/\tau)\) samples._ Proof.: Given \(A\) and \(\tau\), consider the event \(E\) defined by \(|g(u\cdot x_{i})-g(v\cdot x_{i})-A|>\tau\). This occurs with probability \(\leq\tau\). A single sample observed in the event \(E\) can be enough to tell the difference between \(u\) and \(v\); thus, to observe even one sample from \(E\) with probability at least \(1-\delta\), one must draw \(\Omega(\ln(1/\delta)/\tau)\) samples. If no samples from \(E\) are observed, then all \((x_{i},y_{i})\) satisfy \(|g(u\cdot x_{i})-g(v\cdot x_{i})-A|\leq\tau\). In this case, an oblivious noise adversary can construct oblivious noises \(\xi_{u}\), \(\xi_{v}\) for instances of \(u\), \(v\) such that the corrupted labels \(g(u\cdot x_{i})+\xi_{u}\) and \(g(v\cdot x_{i})+\xi_{v}\) only differ by at most \(\tau\). This means that \(y_{i}\) can be generated either from \(g(u\cdot x_{i})+\xi_{u}+\epsilon\) or from \(g(v\cdot x_{i})+\xi_{v}+\epsilon\), which are close to each other in total variation distance. By Fact B.5, any algorithm to distinguish \(u\) and \(v\) using inliers requires at least \(\Omega(\sigma\ln(1/\delta)/\tau)\) samples. The overall lower bound is the minimum of the two sample complexities, so any algorithm that distinguishes \(u\) and \(v\) with probability at least \(1-\delta\) needs \(\Omega(\min(\sigma,1)\ln(1/\delta)/\tau)\) samples. ### Main Algorithmic Result Here, we state the formal version of Theorem 1.4. This follows from putting together Lemma 3.4 and Lemma C.1, applied to Algorithm 2. We restate and prove this in Appendix D. **Theorem 5.2** (Main Result).: _We first define a few variables and their relationships to \(\Delta\) (the desired final accuracy), \(\alpha\) (the probability of being an inlier), \(R\) (an upper bound on \(\|w^{*}\|_{2}\)) and \(\sigma\) (the standard deviation of the additive Gaussian noise)._ _Let \(\Delta^{\prime}=\min(\Delta,\tau^{2}/16)\), \(\gamma=\min(\Delta/4\sigma,1/2)\), \(T\gtrsim(R/\gamma\alpha)^{2}\), \(m_{1}\gtrsim R^{2}\ln(T/\delta)/(\gamma\alpha\Delta)^{2}\), and let \(W\gtrsim T\cdot(128R/(\gamma\alpha\Delta))\) be the total number of candidates generated over the grid of guesses for \(c\)._ _There is an algorithm which, given \(\Delta,\alpha,R\) and \(\sigma\), runs in time \(O(dTm_{1})\), draws \(m_{1}\gtrsim\alpha^{-2}\log(R/\Delta\alpha\delta)\left(R^{2}\sigma^{2}/\Delta^{4}\right)\) samples from GLM-\(\text{Ob}(g,\sigma,w^{*})\) and returns a list of at most \(W\) candidates, one of which achieves excess loss at most \(\Delta\)._ _Moreover, if the instance is \((\Delta,\tau)\)-identifiable, then there is an algorithm which takes the parameters \(\Delta,\alpha,\sigma,R\) and \(\tau^{\prime}\leq\tau\), draws_ \[m\gtrsim\alpha^{-2}\log(W/\delta)\left(R^{2}\sigma^{2}/\Delta^{\prime 4}+1/(\tau^{\prime}\min(\tau^{\prime}/\sigma,1))^{2}\right)\] _samples from GLM-\(\text{Ob}(g,\sigma,w^{*})\), runs in time \(O(dmW^{2})\) and returns a single candidate._ **Remark 5.3**.: Our pruning algorithm requires a priori knowledge of the parameter \(\tau^{*}\). However, it is possible to make it independent of \(\tau^{*}\) by guessing the value of some \(\tau<\tau^{*}\) via the following process. To find such a \(\tau\), we can search for a value of \(k\) that allows us to set \(\tau=2^{-k}\) in the pruning algorithm and obtain a single potential candidate. 
To begin, we decide on the number of tests we are willing to perform, which we denote by \(K\). Each test involves performing pruning and checking whether the resulting list contains only one element. Each test provides the expected answer for that particular \(\tau\) with probability \(1-\delta\). In other words, with probability \(1-\delta\), \(\widehat{w}\) is never rejected, and if \(\tau<\tau^{*}\) the test will yield a single such candidate. By increasing the number of samples by a factor of \(O(\log(K))\), we can guarantee that, with probability \(1-\delta\), we will either obtain a single favorable candidate or a list of candidates among which at least one achieves an excess loss of \(\Delta\).
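As a complement to the pseudocode of Algorithm 1, here is a simplified sketch of its core pairwise test for a single, fixed shift \(A\) (the binary search producing \(A\) is omitted); the function name and the simplifications are ours, so this should be read as an illustration of the rejection criterion rather than the exact procedure.

```
import numpy as np

def reject_pair(x, y, g, w_i, w_j, A, tau, alpha, sigma):
    """Core of the pruning test: where the two candidates disagree by more
    than A + tau/2 on one region and by less than A - tau/2 on another,
    compare the residual distribution of w_i on the two regions.  A large
    gap in the empirical sign statistic witnesses that w_i is implausible."""
    diff = g(x @ w_i) - g(x @ w_j)
    E_plus = diff > A + tau / 2
    E_minus = diff < A - tau / 2
    if not (E_plus.any() and E_minus.any()):
        return False                        # no witness regions; cannot reject
    r = y - g(x @ w_i)                      # residuals of the tested candidate
    gap = abs(np.mean(np.sign(r[E_plus] - A)) - np.mean(np.sign(r[E_minus] - A)))
    return gap > alpha * min(tau / (16 * sigma), 1 / 8)
```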
2309.08306
Cyclic nearly invariant subspaces for semigroups of isometries
In this paper, the structure of the nearly invariant subspaces for discrete semigroups generated by several (even infinitely many) automorphisms of the unit disc is described. As part of this work, the near $S^*$-invariance property of the image space $C_\varphi({\rm ker\, } T)$ is explored for composition operators $C_\varphi$, induced by inner functions $\varphi$, and Toeplitz operators $T$. After that, the analysis of nearly invariant subspaces for strongly continuous multiplication semigroups of isometries is developed with a study of cyclic subspaces generated by a single Hardy class function. These are characterised in terms of model spaces in all cases when the outer factor is a product of an invertible function and a rational (not necessarily invertible) function. Techniques used include the theory of Toeplitz kernels and reproducing kernels.
Yuxia Liang, Jonathan R. Partington
2023-09-15T10:53:41Z
http://arxiv.org/abs/2309.08306v2
# Cyclic nearly invariant subspaces for semigroups of isometries ###### Abstract. In this paper, the structure of the nearly invariant subspaces for discrete semigroups generated by several (even infinitely many) automorphisms of the unit disc is described. As part of this work, the near \(S^{*}\)-invariance property of the image space \(C_{\varphi}(\ker T)\) is explored for composition operators \(C_{\varphi},\) induced by inner functions \(\varphi,\) and Toeplitz operators \(T\). After that, the analysis of nearly invariant subspaces for strongly continuous multiplication semigroups of isometries is developed with a study of cyclic subspaces generated by a single Hardy class function. These are characterised in terms of model spaces in all cases when the outer factor is a product of an invertible function and a rational (not necessarily invertible) function. Techniques used include the theory of Toeplitz kernels and reproducing kernels. Key words and phrases: Nearly invariant subspace, semigroup, model space, Toeplitz kernel, composition operator 2010 Mathematics Subject Classification: 47B38, 47A15, 43A15 ## 1. Introduction The main aim of this paper is to investigate the properties of cyclic nearly invariant subspaces for semigroups of isometries, including the discrete semigroup \(\{T_{\psi_{1}^{m}\psi_{2}^{n}}:\;m,n\in\mathbb{N}_{0}\},\) where \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\},\) generated by two different automorphisms \(\psi_{1},\;\psi_{2}\) of the unit disc, and also the shift semigroup \(\{S(t)\}_{t\geq 0}\) on \(L^{2}(0,\infty),\) as defined below in (1.1). The subspaces in these examples show a new understanding of near invariance, composition operators and model spaces. Let \(\mathcal{H}\) denote a separable infinite-dimensional Hilbert space and \(\mathcal{B}(\mathcal{H})\) the space of bounded linear operators on \(\mathcal{H}.\) If \(\{\mathcal{M}_{i}\}_{i\in\Lambda}\) is a family of subsets of the Hilbert space \(\mathcal{H},\) we denote by \(\bigvee_{i\in\Lambda}\mathcal{M}_{i}\) the closed linear span generated by \(\bigcup_{i\in\Lambda}\mathcal{M}_{i}\). Let \(\overline{\mathcal{M}}\) denote the closure of \(\mathcal{M}\) for any subset \(\mathcal{M}\) of \(\mathcal{H}.\) In the sequel, a _subspace_ always means a _closed subspace_. Let \(\mathbb{D}\) denote the unit disc and \(\mathbb{C}_{+}=\{s=x+iy:\;x>0\}\) denote the right half-plane. Then \(H(\mathbb{D})\) (or \(H(\mathbb{C}_{+})\)) stands for the space of all analytic functions on \(\mathbb{D}\) (or \(\mathbb{C}_{+}\)). The space \(H^{\infty}(\mathbb{D})\) (or \(H^{\infty}(\mathbb{C}_{+})\)) is the Banach algebra of bounded analytic functions on \(\mathbb{D}\) (or \(\mathbb{C}_{+}\)). 
Furthermore, we say \(f\) belongs to the Smirnov class if \(f\in H(\mathbb{D})\) and \[\lim_{r\to 1^{-}}\int_{\mathbb{T}}\log(1+|f(rz)|)dm(z)=\int_{\mathbb{T}}\log(1+|f(z)|)dm(z)<\infty.\] In this paper, \(H^{2}(\mathbb{D})\) is the Hardy Hilbert space defined as \[H^{2}(\mathbb{D})=\{f\in H(\mathbb{D}):\;f(z)=\sum_{k=0}^{\infty}a_{k}z^{k},\;\|f\|^{2}=\sum_{k=0}^{\infty}|a_{k}|^{2}<\infty\}.\] Similarly, the Hardy space \(H^{2}(\mathbb{C}_{+})\) on the right half-plane \(\mathbb{C}_{+}\) contains all \(f\in H(\mathbb{C}_{+})\) such that \[\|f\|_{H^{2}(\mathbb{C}_{+})}^{2}=\sup_{x>0}\int_{-\infty}^{\infty}|f(x+iy)|^{2}dy<\infty.\] Recall that the unilateral shift \(S:\;H^{2}(\mathbb{D})\to H^{2}(\mathbb{D})\) is defined as \([Sf](z)=zf(z).\) Its adjoint operator on \(H^{2}(\mathbb{D})\) is the backward shift \([S^{*}f](z)=(f(z)-f(0))/z.\) Given \(\phi\in L^{\infty}(\mathbb{T}),\) the Toeplitz operator \(T_{\phi}:\ H^{2}(\mathbb{D})\to H^{2}(\mathbb{D})\) is defined as \[(T_{\phi}f)(\lambda)=P_{H^{2}}(\phi\cdot f)(\lambda)=\int_{\mathbb{T}}\frac{\phi(\zeta)f(\zeta)}{1-\overline{\zeta}\lambda}dm(\zeta),\] where \(P_{H^{2}}\) is the orthogonal projection from \(L^{2}(\mathbb{T})\) onto \(H^{2}(\mathbb{D}).\) Here \[L^{2}(\mathbb{T})=H^{2}(\mathbb{D})\oplus\overline{H^{2}_{0}(\mathbb{D})}=H^{2}(\mathbb{D})\oplus\overline{zH^{2}(\mathbb{D})},\] where \(H^{2}(\mathbb{D})=\bigvee\{z^{n}:\ n\geq 0\}\) and \(\overline{H^{2}_{0}(\mathbb{D})}=\bigvee\{z^{n}:\ n<0\}.\) It is well known that a general Hardy class function can be decomposed as the product of an inner factor and an outer factor. Examples of inner functions include Blaschke products. In particular, when a Blaschke product has degree \(1\), it is also called an automorphism and is denoted by \(\psi\) in this paper. It is clear that \(\psi\) is a one-to-one analytic map of \(\mathbb{D}\) onto \(\mathbb{D}\), given by \[\psi(z)=\lambda\frac{a-z}{1-\overline{a}z},\quad\text{where}\quad|\lambda|=1\text{ and }a\in\mathbb{D}.\] For any finite Blaschke product, there is a Taylor series converging uniformly on the closed disc (in fact with absolutely summable coefficients). Then, truncating the series and taking polynomials in \(S\), we see that _all the Beurling-type invariant subspaces \(\theta H^{2}(\mathbb{D})\) for some inner function \(\theta\) are also \(T_{\psi}\)-invariant subspaces, and conversely. Meanwhile, the invariant subspaces for \(T_{\psi}^{-1}\) are the model spaces \(K_{\theta}=H^{2}\ominus\theta H^{2}.\)_ With the development of the \(S\)-invariant subspaces in spaces on various domains, the concept of near invariance emerged. Recall that a closed subspace \(\mathcal{M}\subseteq H^{2}(\mathbb{D})\) is said to be nearly \(S^{*}\)-invariant (or weakly invariant) if whenever \(f\in\mathcal{M}\) and \(f(0)=0,\) then \(S^{*}f\in\mathcal{M}.\) Informally, \(\mathcal{M}\subseteq H^{2}(\mathbb{D})\) is nearly \(S^{*}\)-invariant if the zeros of functions in \(\mathcal{M}\) can be divided out without leaving the space. The study of nearly invariant subspaces for the backward shift \(S^{*}\) in \(H^{2}(\mathbb{D})\) was first explored by Hayashi [11], Hitt [12], and then Sarason [18, 19] in relation to kernels of Toeplitz operators. Afterwards, Camara and Partington continued the systematic investigations on near invariance and Toeplitz kernels (see, e.g. [2, 3]). 
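As a standard illustration of this notion, recorded here for convenience: every Toeplitz kernel is nearly \(S^{*}\)-invariant. Indeed, if \(f\in\ker T_{F}\) and \(f(0)=0\), then \(f=zg\) with \(g\in H^{2}(\mathbb{D})\), and \[Fg=\overline{z}(Ff)\in\overline{z}\,\overline{zH^{2}(\mathbb{D})}=\overline{z^{2}H^{2}(\mathbb{D})}\subseteq\overline{zH^{2}(\mathbb{D})},\] so that \(g=S^{*}f\in\ker T_{F}\); in particular, every model space \(K_{\theta}=\ker T_{\overline{\theta}}\) is nearly \(S^{*}\)-invariant.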
In 1988, Hitt proved the following most widely known characterization of nearly \(S^{*}\)-invariant subspaces in \(H^{2}(\mathbb{D})\), of which a vectorial generalization can be found in [6, Theorem 4.4]. **Theorem 1.1**.: _[_12_, Proposition 3]_ _The nearly \(S^{*}\)-invariant subspaces of \(H^{2}(\mathbb{D})\) have the form \(\mathcal{M}=uK\), with \(u\in\mathcal{M}\) of unit norm, \(u(0)>0,\) and \(u\) orthogonal to all elements of \(\mathcal{M}\) vanishing at the origin, \(K\) an \(S^{*}\)-invariant subspace, and the operator of multiplication by \(u\) isometric from \(K\) into \(H^{2}(\mathbb{D})\)._ A shift operator on \(\mathcal{H}\) can be viewed as a direct generalization of the unilateral shift \(S\) and the analytic Toeplitz operator \(T_{\theta}\) on \(H^{2}(\mathbb{D})\), where \(\theta\) is an inner function. More precisely, an operator \(T\in\mathcal{B}(\mathcal{H})\) is a shift operator if \(T\) is an isometry and \(\|T^{*n}f\|\to 0\) for all \(f\in\mathcal{H}\) as \(n\to\infty\) (see, e.g. [17, Chapter 1]). Note that the shift operator \(T\) is left invertible, so the authors defined the nearly \(T^{-1}\)-invariant subspace in \(\mathcal{H}\) as below. Given a left invertible \(T\in\mathcal{B}(\mathcal{H})\), a subspace \(\mathcal{M}\subseteq\mathcal{H}\) is said to be nearly \(T^{-1}\)-invariant if whenever \(g\in\mathcal{H}\) is such that \(Tg\in\mathcal{M}\), then \(g\in\mathcal{M}\) (see [13, Definition 1.2]). Furthermore, we have shown that the nearly \(T^{-1}\)-invariant subspace of a shift operator \(T\) with finite multiplicity (i.e., the dimension of \(\ker T^{*}\) is finite) can be represented in terms of invariant subspaces in a vector-valued Hardy space under the backward shift (see [13, Theorem 2.4]). In particular, this result provides a vector-valued characterization of the nearly \(T_{B_{m}}^{-1}\)-invariant subspaces in \(H^{2}(\mathbb{D})\) when \(B_{m}\) is a finite Blaschke product with degree \(m\) (see [13, Corollary 2.6]). The above research leads us naturally to ask two direct questions. (1) _Is there a Hitt-like formula for nearly \(T_{\psi}^{-1}\)-invariant subspaces for an automorphism \(\psi\)?_ (2) _How can we represent the nearly \(T_{\theta}^{-1}\)-invariant subspaces in \(H^{2}(\mathbb{D})\)_ (or \(H^{2}(\mathbb{C}_{+})\)) _for an infinite degree Blaschke product \(\theta\)?_ For Question (1), this will be solved by using composition operators \(C_{\varphi}\) and model spaces \(K_{\theta}\) (see Section 2). After that, we continue to examine the nearly invariant subspaces with respect to the finitely-generated semigroup \(\{T^{-1}_{\psi_{1}^{m}\psi_{2}^{n}}:\ m,n\in\mathbb{N}_{0}\}\) with automorphisms \(\psi_{1}\) and \(\psi_{2}\) on \(\mathbb{D}\). During this process, we obtain several interesting results. In particular, for any non-automorphic inner function \(\varphi\), we prove that the subspace \(C_{\varphi}(\ker T)\) is not nearly \(S^{*}\)-invariant for any Toeplitz operator \(T\) with \(\dim\ker T\geq 2\). For Question (2), Caradus showed that the adjoint Toeplitz operator \(T^{*}_{\theta}=T^{-1}_{\theta}:H^{2}(\mathbb{D})\to H^{2}(\mathbb{D})\), with an infinite Blaschke product \(\theta\), is universal (see, e.g., [5]) and it is similar to the backward shift \(S(1)^{*}\) on \(L^{2}(0,\infty)\), given by \(S(1)^{*}f(t)=f(t+1)\). For the shift on the weighted space \(L^{2}((0,\infty),\tilde{w}(t)dt)\), one may refer to [9]. 
\(S(1)^{*}\) is a special case of the adjoint semigroup \(\{S(t)^{*}\}_{t\geq 0}\) defined by \((S(t)^{*}f)(\zeta)=f(\zeta+t)\). Recall that the shift semigroup \(\{S(t)\}_{t\geq 0}\) on \(L^{2}(0,\infty)\) is \[(S(t)f)(\zeta)=\left\{\begin{array}{ll}0,&\zeta\leq t,\\ f(\zeta-t),&\zeta>t,\end{array}\right. \tag{1.1}\] which is a \(C_{0}\)-semigroup (see, e.g. [16]). Then the following commutative diagrams hold for the shift semigroup on \(L^{2}(0,\infty)\) and multiplication semigroups on Hardy spaces. \[\begin{CD}L^{2}(0,\infty)@>{S(t)}>{}>L^{2}(0,\infty)\\ @V{}V{\mathcal{L}}V@V{}V{\mathcal{L}}V\\ H^{2}(\mathbb{C}_{+})@>{M(t)}>{}>H^{2}(\mathbb{C}_{+})\\ @V{}V{V^{-1}}V@V{}V{V^{-1}}V\\ H^{2}(\mathbb{D})@>{T(t)}>{}>H^{2}(\mathbb{D}).\end{CD}\] The Laplace transform \(\mathcal{L}\) and the isometric isomorphism \(V:\ H^{2}(\mathbb{D})\to H^{2}(\mathbb{C}_{+})\) are defined as \[(\mathcal{L}f)(s)=\int_{0}^{\infty}e^{-st}f(t)dt,\ \ (V^{-1}g)(z)=\frac{2\sqrt{\pi}}{1+z}g\left(\frac{1-z}{1+z}\right). \tag{1.2}\] The multiplication semigroups \(\{M(t)\}_{t\geq 0}\) on \(H^{2}(\mathbb{C}_{+})\) and \(\{T(t)\}_{t\geq 0}\) on \(H^{2}(\mathbb{D})\) are defined by \[(M(t)g)(s)=e^{-st}g(s)\ \text{and}\ (T(t)h)(z)=\phi^{t}(z)h(z) \tag{1.3}\] with \(\phi^{t}(z):=\exp{(-t(1-z)/(1+z))}\). Meanwhile, their adjoint semigroups are \[(M(t)^{*}g)(s)=P_{H^{2}(\mathbb{C}_{+})}(e^{st}g(s))\quad\text{and}\quad(T(t)^{*}h)(z)=P_{H^{2}(\mathbb{D})}(\phi^{-t}(z)h(z)). \tag{1.4}\] The above fact implies that the answer to Question (2) should have links with the near invariance of the shift semigroup. So we recall the following definition. **Definition 1.2**.: [14, Definition 1.4] Let \(\{T(t)\}_{t\geq 0}\) be a \(C_{0}\)-semigroup in \(\mathcal{B}(\mathcal{H})\) and \(\mathcal{M}\subseteq\mathcal{H}\) be a subspace. If for every \(f\in\mathcal{H}\) with \(T(t)f\in\mathcal{M}\) for some \(t>0\) we have \(f\in\mathcal{M}\), we call \(\mathcal{M}\) a nearly \(\{T(t)^{*}\}_{t\geq 0}\) invariant subspace. In our recent paper [14], we mainly demonstrated a series of prototypical examples for minimal nearly \(\{S(t)^{*}\}_{t\geq 0}\) invariant subspaces in \(L^{2}(0,\infty)\), closely related to nearly \(\{M(t)^{*}\}_{t\geq 0}\) invariance on \(H^{2}(\mathbb{C}_{+})\). As a subsequent work, we use the new techniques to determine the nontrivial cyclic nearly invariant subspaces for shift semigroups in much greater generality. To be specific, the article is organized as follows. In Section 2, we concentrate on the near invariance characterization for a classical discrete semigroup of isometries induced by two or more (even infinitely many) different automorphisms and on related questions; in particular, we give the solution to Question (1). Section 3 and Section 4 are devoted to Question (2). We creatively employ the reproducing kernel, minimal Toeplitz kernel and model space to formulate a prototypical class of cyclic nearly invariant subspaces \(N(g)=\bigvee\{ge^{-\lambda s}:\ 0\leq\lambda\leq\delta\}\) with \(g(s)=1/(1+s)^{n+1}\) (\(n\in\mathbb{N}_{0}\)) in Section 3. These are required for Section 4, which is concerned with characterizing the subspace \(N(g)\) with a rational outer function \(g\), which is expressed in terms of a model space in the Hardy space \(H^{2}(\mathbb{C}_{+})\). This leads to the more general result for those \(g\) where a rational outer function is multiplied by a function invertible in \(L^{\infty}\). The corresponding descriptions in \(H^{2}(\mathbb{D})\) are also addressed. ## 2. Near invariance for discrete semigroups and related questions
In this section, we will use repeatedly the automorphism on \(\mathbb{D}\) defined by \[\psi(z)=\lambda\frac{a-z}{1-\overline{a}z},\ |\lambda|=1\ \text{and}\ a\in\mathbb{D}.\] In the subsequent subsections, we first answer Question (1) and then investigate some related issues that arose during this process, including the complementary subspace \(K_{z\theta\circ\psi}\ominus C_{\psi}(K_{\theta})\) and near \(S^{*}\)-invariance of \(C_{\varphi}(\ker T)\) with inner function \(\varphi\) and Toeplitz kernel \(\ker T\) (including model space). ### Nearly \(\{T_{\psi_{1}^{m}\psi_{2}^{n}}^{-1}:\ m,n\in\mathbb{N}_{0}\}\) invariant subspaces In this subsection, we will take two automorphisms \(\psi_{1},\psi_{2}\) with zeros \(a_{1},a_{2}\in\mathbb{D}\). We are interested in finding the closed subspaces of \(H^{2}(\mathbb{D})\) that are nearly invariant with respect to both \(T_{\psi_{1}}^{-1}\) and \(T_{\psi_{2}}^{-1}\). In this case \(\mathcal{M}\) is nearly \(T_{\psi_{1}^{m}\psi_{2}^{n}}^{-1}\)-invariant for all \(m,n\geq 0.\) This forms a finitely-generated commutative isometric operator semigroup denoted by \(\{T_{\psi_{1}^{m}\psi_{2}^{n}}^{-1}:\ m,n\in\mathbb{N}_{0}\}\). In what follows, we will represent \(\mathcal{M}\) in terms of a Hitt-like subspace. For an automorphism \(\psi\), we recall the bounded composition operator \(C_{\psi}:\ H^{2}(\mathbb{D})\to H^{2}(\mathbb{D})\) given by \(C_{\psi}f=f\circ\psi.\) Then it holds that \[C_{\psi}T_{z}C_{\psi}^{-1}=C_{\psi}T_{z}C_{\psi^{-1}}=T_{\psi}. \tag{2.1}\] That is, \(T_{\psi}\) is similar to \(T_{z}\) via \(C_{\psi}\). By Hitt's result (Theorem 1.1), \(uK_{\theta}\) is nearly \(T_{\bar{z}}\)-invariant, where \(\theta\) is inner and \(K_{\theta}\) is the model space, \(u(0)\neq 0\) and multiplication by \(u\) is isometric from \(K_{\theta}\) to \(H^{2}\). So the similarity relationship (2.1) yields that \(C_{\psi}(uK_{\theta})=(u\circ\psi)C_{\psi}(K_{\theta})\) is nearly \(T_{\psi}^{-1}\)-invariant with \((u\circ\psi)(a)=u(0)\neq 0.\) Since the adjoint operator \(C_{\psi}^{*}\) is generally not a composition operator (see, e.g. [7, Theorem 9.2]), \(C_{\psi}(K_{\theta})\) is not a model space in general, even though \(C_{\psi}(\theta H^{2})\) has the form \((\theta\circ\psi)H^{2}\). In order to study \(C_{\psi}(K_{\theta})\) in detail, we first explore its near \(S^{*}\)-invariance. Further, note that the model space is a special Toeplitz kernel, so we translate the above question into research on Toeplitz kernels. The next theorem indicates that the image space of any Toeplitz kernel under a composition operator induced by an automorphism is also a Toeplitz kernel. **Theorem 2.1**.: _Let \(\psi\) be an automorphism and let \(\ker T_{F}\) be a Toeplitz kernel with \(F\in L^{\infty}(\mathbb{T})\). Then_ \[C_{\psi}(\ker T_{F})=\ker T_{G},\ \text{where}\ G=(F\circ\psi)\frac{\psi}{z}.\] Proof.: We take \(f\in\ker T_{F}\) and show first that \(f\circ\psi\in\ker T_{G}\). It follows that \(Ff=g\) for some \(g\in\bar{z}\overline{H^{2}}\). So \[G(f\circ\psi)=(F\circ\psi)\frac{\psi}{z}(f\circ\psi)=\frac{\psi}{z}(g\circ\psi).\] Now since \(g\in\overline{z}\overline{H^{2}}\), then \[G(f\circ\psi)=\frac{\psi}{z}(g\circ\psi)\in\frac{\psi}{z}(\overline{\psi}\overline{H^{2}})=\overline{z}\overline{H^{2}}.\] This implies \(f\circ\psi\in\ker T_{G}\). So \(C_{\psi}(\ker T_{F})\subseteq\ker T_{G}\). 
For the automorphism \(\psi^{-1}\), the same argument shows that \(C_{\psi^{-1}}(\ker T_{G})\subseteq\ker T_{H}\), where \[H=(G\circ\psi^{-1})\frac{\psi^{-1}}{z}=F\frac{z}{\psi^{-1}}\frac{\psi^{-1}}{z}=F.\] Since \(C_{\psi}^{-1}=C_{\psi^{-1}}\), we see that \(C_{\psi}\) acts as a bijection between \(\ker T_{F}\) and \(\ker T_{G}\), so the desired result follows. Letting \(F=\overline{\theta}\) in Theorem 2.1, a corollary follows. **Corollary 2.2**.: _Let \(\psi\) be an automorphism and \(\theta\) be an inner function, then it follows that_ \[C_{\psi}(K_{\theta})=\ker T_{(\overline{\theta}\circ\psi)\psi/z}.\] The following theorem expresses \(C_{\psi}(K_{\theta})\) as an invertible function times a model space. **Theorem 2.3**.: _[_10_, Theorem 6.8]_ _If \(\theta\) is an inner function and \(\psi\) is an automorphism, then_ \[f\to(\sqrt{\psi^{\prime}}C_{\psi})f\] _defines a unitary operator from \(K_{\theta}\) onto \(K_{\theta\circ\psi}\)._ Summarizing Corollary 2.2 and Theorem 2.3 yields the following corollary. **Corollary 2.4**.: _Let \(\psi\) be an automorphism and \(\theta\) be an inner function, then_ \[C_{\psi}(K_{\theta})=\ker T_{(\overline{\theta}\circ\psi)\psi/z}=\frac{1}{\sqrt{\psi^{\prime}}}K_{\theta\circ\psi}.\] This indicates that \[C_{\psi}(K_{\theta})=(1-\overline{a}z)K_{\theta\circ\psi}\subsetneq K_{z\theta\circ\psi}, \tag{2.2}\] where \(\psi(a)=0.\) Next we describe the subspace \(C_{\psi}(K_{\theta})\) more concretely. _Remark 2.5_.: (1) If \(\theta(0)=0\) then \(C_{\psi}(K_{\theta})=K_{z(\theta\circ\psi)/\psi}\). This can be deduced from Corollary 2.2, since \(C_{\psi}(K_{\theta})=\ker T_{(\overline{\theta}\circ\psi)\psi/z}\), and the complex conjugate of this symbol is \(z(\theta\circ\psi)/\psi\), which is an inner function since \(\psi\) divides \(\theta\circ\psi\). (2) If \(\theta(0)\neq 0\), it follows that \(C_{\psi}(K_{\theta})\) is not a model space, but it is a proper subspace of \(K_{z(\theta\circ\psi)}\). Indeed, using Corollary 2.2, we have \(C_{\psi}(K_{\theta})=\ker T_{(\overline{\theta}\circ\psi)\psi/z}.\) For \(f\in\ker T_{(\overline{\theta}\circ\psi)\psi/z}\) we have \[\frac{(\overline{\theta}\circ\psi)\psi}{z}f=g\] for some \(g\in\overline{zH^{2}}\), and then \[\overline{z}(\overline{\theta}\circ\psi)f=g\overline{\psi}\in\overline{zH^{2}},\text{ so }f\in\ker T_{\overline{z}(\overline{\theta}\circ\psi)}=K_{z(\theta\circ\psi)}.\] Here is an example to illustrate the results above. _Example 2.6_.: Let \(\theta(z)=b(z)=(a-z)/(1-\overline{a}z)\) with \(a\in\mathbb{D}\setminus\{0\}\). That is, \(\theta(0)=a\neq 0.\) Then \(K_{b}\) is spanned by the reproducing kernel \(1/(1-\overline{a}z)\). Now the subspace \(C_{b}(K_{b})\) is spanned by \[\frac{1}{1-\overline{a}b(z)}=\frac{1-\overline{a}z}{1-|a|^{2}}.\] This means \(C_{b}(K_{b})=(1-\bar{a}z)K_{z}\), as shown in (2.2). Corollary 2.2 also confirms that it is the kernel of \(T_{(\overline{\theta}\circ b)b/z}=T_{b/z^{2}}\), although it is not itself a model space. These results are summarized in a Hitt-like formula for nearly \(T_{\psi}^{-1}\)-invariant subspaces. **Theorem 2.7**.: _Let \(\psi\) be an automorphism with zero \(a\in\mathbb{D}\), then a space is nearly \(T_{\psi}^{-1}\)-invariant if and only if it behaves as \(C_{\psi}(uK_{\theta})=(u\circ\psi)\ker T_{(\overline{\theta}\circ\psi)\psi/z}\), where \(\theta\) is inner, \(u(0)\neq 0\) and the multiplication by \(u\) is isometric from \(K_{\theta}\) to \(H^{2}\). 
Furthermore, it has the following specific form_ \[\left\{\begin{array}{ll}(u\circ\psi)\ker T_{(\overline{\theta}\circ\psi)\psi/z}=(u\circ\psi)K_{z(\theta\circ\psi)/\psi},&\theta(0)=0,\\ (u\circ\psi)\ker T_{(\overline{\theta}\circ\psi)\psi/z}=(1-\overline{a}z)(u\circ\psi)K_{\theta\circ\psi},&\theta(0)\neq 0.\end{array}\right.\] Theorem 2.7 further implies the structure of nearly invariant subspaces for discrete semigroups generated by two different automorphisms on \(\mathbb{D}\). **Theorem 2.8**.: _Let \(\psi_{1}\) and \(\psi_{2}\) be two different automorphisms with zeros \(a_{1}\neq a_{2}\in\mathbb{D}\), then a space is nearly invariant with respect to the discrete semigroup \(\{T_{\psi_{1}^{m}\psi_{2}^{n}}^{-1}:\;m,n\in\mathbb{N}_{0}\}\) if and only if it has the form \((u\circ\psi_{1})\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z}\), where \(\theta\) is inner, \(u\) satisfies \(u(0)\neq 0\) and \(u(\psi_{1}(a_{2}))\neq 0\), and multiplication by \(u\) is isometric from \(K_{\theta}\) to \(H^{2}\)._ Proof.: Theorem 2.7 implies that the nearly \(T_{\psi_{1}}^{-1}\)-invariant subspace behaves as \[C_{\psi_{1}}(uK_{\theta})=(u\circ\psi_{1})\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z},\;u(0)\neq 0,\] and the multiplication by \(u\) is isometric from \(K_{\theta}\) to \(H^{2}\). Next we show \(C_{\psi_{1}}(uK_{\theta})\) is nearly \(T_{\psi_{2}}^{-1}\)-invariant if and only if \(u(\psi_{1}(a_{2}))\neq 0\). For sufficiency, suppose a function \(f=gh\in C_{\psi_{1}}(uK_{\theta})\) with \(g=u\circ\psi_{1}\) and \(h\in\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z}\) satisfies \(f\overline{\psi_{2}}\in H^{2}(\mathbb{D})\); then we have \(f(a_{2})=u(\psi_{1}(a_{2}))h(a_{2})=0.\) Since \(u(\psi_{1}(a_{2}))\neq 0\), it follows that \(h(a_{2})=0\) and so \(h\overline{\psi_{2}}\in\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z}\), by the near invariance of Toeplitz kernels. This yields that \[f\overline{\psi_{2}}=(u\circ\psi_{1})\,h\overline{\psi_{2}}\in C_{\psi_{1}}(uK_{\theta}).\] For necessity, suppose, on the contrary, that \(u(\psi_{1}(a_{2}))=0\). We can always find an \(h\in\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z}\) with \(h(a_{2})\neq 0\), which can be obtained by dividing by a power of \(\psi_{2}.\) Since \(C_{\psi_{1}}(uK_{\theta})\) is nearly \(T_{\psi_{2}}^{-1}\)-invariant, then \[(u\circ\psi_{1})\overline{\psi_{2}}h=(u\circ\psi_{1})\tilde{h}\] with \(\tilde{h}\in\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z}.\) But that implies \(h=\psi_{2}\tilde{h}\) and so \(h(a_{2})=0\), which is a contradiction. With these at hand, we are in a position to generalize Theorem 2.8 to the semigroup generated by even infinitely many automorphisms. _Remark 2.9_.: Let \(\psi_{k}\) be different automorphisms with zeros \(a_{k}\in\mathbb{D}\), \(k\in\mathbb{N}\); then a space is nearly invariant with respect to the discrete semigroup \[\{\prod_{i=1}^{\infty}T_{\psi_{i}^{m_{i}}}^{-1}:\;m_{i}\in\mathbb{N}_{0},\ m_{i}=0\text{ except for finitely many }i\}\] if and only if it has the form \((u\circ\psi_{1})\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z}\), where \(\theta\) is inner, \(u\) satisfies \(u(0)\neq 0\) and \(u(\psi_{1}(a_{k}))\neq 0\), \(k\in\mathbb{N}\), and multiplication by \(u\) is isometric from \(K_{\theta}\) to \(H^{2}\). 
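For a concrete illustration of Theorem 2.8, take \(u\equiv 1\) (a special case of our own choosing; multiplication by \(1\) is trivially isometric and \(u(0)=u(\psi_{1}(a_{2}))=1\neq 0\)). Then, for every inner function \(\theta\), the subspace \[C_{\psi_{1}}(K_{\theta})=\ker T_{(\overline{\theta}\circ\psi_{1})\psi_{1}/z}\] is nearly invariant with respect to the whole semigroup \(\{T_{\psi_{1}^{m}\psi_{2}^{n}}^{-1}:\;m,n\in\mathbb{N}_{0}\}\), whatever the second automorphism \(\psi_{2}\) is.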
### The subspace \(K_{z\theta\circ\psi}\ominus C_{\psi}(K_{\theta})\) In this subsection, we are inspired by the formulas in (2.2) and continue to explore the orthogonal complement of \(C_{\psi}(K_{\theta})=(1-\overline{a}z)K_{\theta\circ\psi}\) in \(K_{z\theta\circ\psi}\), with \(\psi(a)=0.\) So there follows a more general question. General Question. _How can one find a nonzero vector in the subspace \(K_{z^{m}\theta}\ominus\prod_{i=1}^{m}(1-\overline{a_{i}}z)K_{\theta}\) for an inner function \(\theta\) and distinct points \(a_{i}\in\mathbb{D}\), \(1\leq i\leq m\)?_ (For \(m=1\), this subspace is one-dimensional.) In order to solve the General Question, we will show the following formula \[v=T_{\prod_{i=1}^{m}\frac{1}{(z-a_{i})}}(z^{m}\theta)\in K_{z^{m}\theta}\ominus\prod_{i=1}^{m}(1-\overline{a_{i}}z)K_{\theta}. \tag{2.3}\] It is easy to check that the adjoint operator satisfies \[T^{*}_{\prod_{i=1}^{m}\frac{1}{(z-a_{i})}}=T_{z^{m}\prod_{i=1}^{m}\frac{1}{(1-\overline{a}_{i}z)}}.\] On the one hand, for all \(h\in H^{2}(\mathbb{D})\), we have \[\langle v,z^{m}\theta h\rangle =\langle z^{m}\theta,T_{z^{m}\prod_{i=1}^{m}\frac{1}{(1-\overline{a}_{i}z)}}(z^{m}\theta h)\rangle\] \[=\langle z^{m}\theta,\frac{z^{2m}\theta h}{\prod_{i=1}^{m}(1-\overline{a}_{i}z)}\rangle\] \[=\langle 1,\frac{z^{m}h}{\prod_{i=1}^{m}(1-\overline{a}_{i}z)}\rangle=0.\] This means \(v\in K_{z^{m}\theta}.\) On the other hand, for all \(k\in K_{\theta},\) it follows that \[\langle v,\prod_{i=1}^{m}(1-\overline{a}_{i}z)k\rangle =\langle z^{m}\theta,T_{z^{m}\prod_{i=1}^{m}\frac{1}{(1-\overline{a}_{i}z)}}\Big{(}\prod_{i=1}^{m}(1-\overline{a}_{i}z)k\Big{)}\rangle\] \[=\langle z^{m}\theta,z^{m}k\rangle=\langle\theta,k\rangle=0,\] implying \(v\perp\prod_{i=1}^{m}(1-\overline{a}_{i}z)K_{\theta}.\) Summarizing the above equations, we deduce (2.3). Next we present a lemma to formulate the concrete expression of \(v\) in (2.3). **Lemma 2.10**.: _Let \(f\in H^{2}(\mathbb{D})\) and \(a_{1},\cdots,a_{m}\in\mathbb{D}\) be distinct points; then the orthogonal projection \(P_{H^{2}}(g)\) of \(g(z):=f(z)/\prod_{j=1}^{m}(z-a_{j})\) onto \(H^{2}(\mathbb{D})\) is_ \[P_{H^{2}}(g)(z)=g(z)-\sum_{j=1}^{m}\frac{\operatorname{Res}(g,a_{j})}{z-a_{j}},\] _where \(\operatorname{Res}\) denotes the residue._ Proof.: We can see that \(1/(z-a_{j})\) is a function in \((H^{2}(\mathbb{D}))^{\perp}\) from the fact that \[\langle\frac{1}{z-a_{j}},h\rangle=\langle 1,\frac{h(z)z}{1-\overline{a}_{j}z}\rangle=0\] for all \(h\in H^{2}(\mathbb{D}).\) The formula for \(P_{H^{2}}(g)\) gives a function in \(H^{2}(\mathbb{D}),\) which has no singularities in \(\mathbb{D}\). So we conclude the orthogonal projection of \(g\) is \[P_{H^{2}}(g)=g-g_{-},\] where \(g_{-}\in(H^{2}(\mathbb{D}))^{\perp}\). Combining this with the definition of the residue, the desired result follows. By Lemma 2.10, the answer to the General Question follows. **Proposition 2.11**.: _Let \(\theta\) be an inner function in \(H^{2}(\mathbb{D})\) and let \(a_{i}\in\mathbb{D}\) be distinct points for \(1\leq i\leq m\); then it holds that_ \[\frac{z^{m}\theta}{\prod_{j=1}^{m}(z-a_{j})}-\sum_{j=1}^{m}\frac{a_{j}^{m}\theta(a_{j})}{(z-a_{j})\prod_{k\neq j}(a_{j}-a_{k})}\in K_{z^{m}\theta}\ominus\prod_{i=1}^{m}(1-\overline{a_{i}}z)K_{\theta}. 
\tag{2.4}\] Finally, replacing \(\theta\) by \(\theta\circ\psi\) and letting \(m=1\) in (2.4), we deduce the subspace \(K_{z\theta\circ\psi}\ominus C_{\psi}(K_{\theta}).\) **Corollary 2.12**.: _Let \(\psi\) be an automorphism with zero \(a\in\mathbb{D}\) and \(\theta\) be inner, then_ \[K_{z\theta\circ\psi}\ominus C_{\psi}(K_{\theta})=\mathbb{C}\frac{z\theta\circ\psi-a\theta(0)}{z-a}=\frac{z\theta\circ\psi-a\theta(0)}{\psi}K_{\psi}.\] ### More properties of \(C_{\varphi}(\ker T)\) for inner functions \(\varphi\) and Toeplitz kernels \(\ker T\) In this subsection, we take further the study of whether Theorem 2.1 holds for the composition operator induced by a general inner function \(\varphi\) that is not an automorphism. Surprisingly, we have the following theorem. **Theorem 2.13**.: _Let \(\mathcal{M}\) be a subspace of \(H^{2}(\mathbb{D})\) of dimension at least \(2\), and let \(\varphi\) be an inner function that is not an automorphism on \(\mathbb{D}\). Then \(C_{\varphi}(\mathcal{M})\) is not nearly \(S^{*}\)-invariant._ Before the proof, we recall from [10, Section 2.6] that, for \(\zeta\in\mathbb{D}\), a Frostman shift of an inner function \(\varphi\) is defined as \[\varphi_{\zeta}(z)=\frac{\zeta-\varphi(z)}{1-\overline{\zeta}\varphi(z)}.\] It follows that the function \(\varphi_{\zeta}\) is also an inner function for any \(\zeta\in\mathbb{D}.\) Frostman's Theorem (see, e.g. [15, p 45]) indicates that \(\varphi_{\zeta}\) is actually a Blaschke product for almost all values of \(\zeta\in\mathbb{D}\) (in fact, except for a set of capacity \(0\)). Now we can start with a lemma. **Lemma 2.14**.: _Let \(\varphi\) be an inner function that is not an automorphism on \(\mathbb{D}\). Then for almost all \(\zeta\in\mathbb{D}\) there are distinct points \(\alpha,\ \beta\in\mathbb{D}\) such that \(\varphi(\alpha)=\varphi(\beta)=\zeta.\)_ Proof.: On the one hand, if \(\varphi\) is a finite Blaschke product of degree \(n\geq 2\) then \(\varphi\) gives an \(n\)-to-\(1\) mapping of \(\mathbb{D}\) onto \(\mathbb{D}\) (counting multiplicities), which follows easily from the argument principle as \(\varphi(\mathbb{T})\) winds round \(\mathbb{T}\)\(n\) times. Now if we exclude the images of the finitely many points at which \(\varphi^{\prime}=0\), then each remaining point \(\zeta\) has \(n\) distinct preimages under \(\varphi\). On the other hand, if \(\varphi\) is an irrational inner function (that is, not a finite Blaschke product), then by Frostman's theorem \(\varphi_{\zeta}(z)=(\zeta-\varphi(z))/(1-\overline{\zeta}\varphi(z))\) is a Blaschke product for almost all \(\zeta\in\mathbb{D}\), in which case \(\varphi_{\zeta}(z)=0\) has infinitely many distinct solutions and so \(\varphi(z)=\zeta\) has infinitely many distinct solutions. In sum, there are always \(\alpha\neq\beta\in\mathbb{D}\) such that \(\varphi(\alpha)=\varphi(\beta)=\zeta\) for almost all \(\zeta\in\mathbb{D}\). Now we can proceed with the proof of Theorem 2.13. Proof.: Since \(C_{\varphi}\) is injective, it follows that dim \(C_{\varphi}(\mathcal{M})\geq 2\), and thus, by taking a nontrivial linear combination of two independent functions in \(C_{\varphi}(\mathcal{M})\), it contains a function \(C_{\varphi}f\) such that \(f\neq 0\) and \((C_{\varphi}f)(0)=0.\) Suppose, to get a contradiction, that \(C_{\varphi}(\mathcal{M})\) is nearly \(S^{*}\)-invariant. 
Then there exists a \(g\in\mathcal{M}\) such that \[(C_{\varphi}f)(z)=z(C_{\varphi}g)(z).\] Now, by Lemma 2.14, for almost all \(\zeta\in\mathbb{D}\), there exist \(\alpha\neq\beta\) in \(\mathbb{D}\) such that \(\varphi(\alpha)=\varphi(\beta)=\zeta\). In that case, \[f(\zeta)=f(\varphi(\alpha))=\alpha g(\varphi(\alpha))=\alpha g(\zeta)\] and similarly \(f(\zeta)=\beta g(\zeta)\), so \(\alpha g(\zeta)=\beta g(\zeta)\). Since \(\alpha\neq\beta\) we conclude that \(g(\zeta)=0\) and so \(f(\zeta)=0.\) This means \(f(\zeta)=0\) for almost every \(\zeta\in\mathbb{D}\). An \(H^{2}\) function that vanishes almost everywhere on \(\mathbb{D}\) is the zero function (its Bergman norm is \(0\), so its Taylor coefficients are all \(0\)), so \(f=0\). This contradiction proves the theorem. Since Toeplitz kernels, and in particular model spaces, are always nearly \(S^{*}\)-invariant, we have the following obvious consequence. **Corollary 2.15**.: _Let \(\varphi\) be an inner function that is not an automorphism on \(\mathbb{D}\). Then \(C_{\varphi}(\ker T_{F})\) is not a Toeplitz kernel for any \(F\in L^{\infty}(\mathbb{T})\) such that dim \(\ker T_{F}\geq 2\), and \(C_{\varphi}(K_{\theta})\) is not a model space for any model space \(K_{\theta}\) with dim \(K_{\theta}\geq 2\)._ Finally, summarizing from Theorem 2.1 and Theorem 2.13, we end this section with the near \(S^{*}\)-invariance of \(C_{\varphi}(\ker T_{F})\). **Theorem 2.16**.: _Let \(F\in L^{\infty}(\mathbb{T})\) be such that dim \(\ker T_{F}\geq 2\) and let \(\varphi\) be an inner function; then \(C_{\varphi}(\ker T_{F})\) is nearly \(S^{*}\)-invariant if and only if \(\varphi\) is an automorphism._ ## 3. The subspace \(N(1/(1+s)^{n+1})\) in \(H^{2}(\mathbb{C}_{+})\) In this section, we begin to consider Question (2) by analysing an example which will be fundamental in describing the general case. To be specific, we creatively employ the reproducing kernel, minimal Toeplitz kernel and model space to reformulate a classical type of cyclic nearly invariant subspace in Hardy spaces over \(\mathbb{D}\) and \(\mathbb{C}_{+}.\) Following our recent paper [14], we also denote by \([f]_{s}\) the smallest (cyclic) nearly \(\{S(t)^{*}\}_{t\geq 0}\) invariant subspace containing some nonzero vector \(f\). We have proved the initial proposition for \(f=e_{\delta}\) as below. **Proposition 3.1**.: _[_14_, Proposition 2.1]_ _In \(L^{2}(0,\infty),\) the smallest nearly \(\{S(t)^{*}\}_{t\geq 0}\) invariant subspace containing \(e_{\delta}(\zeta):=e^{-\zeta}\chi_{(\delta,\infty)}(\zeta)\) with some \(\delta>0\) has the form_ \[[e_{\delta}]_{s}:=\bigvee\{e_{\lambda}:\;0\leq\lambda\leq\delta\}=L^{2}(0,\delta)+\mathbb{C}e^{-\zeta}.\] We shall use the following notation: given \(\delta>0\) and \(g\in H^{2}(\mathbb{C}_{+}),\) let \(N(g)\) denote the smallest closed subspace in \(H^{2}(\mathbb{C}_{+})\) containing all \(g(s)e^{-\lambda s}\) for \(0\leq\lambda\leq\delta\). 
That is, \[N(g):=\bigvee\{ge^{-\lambda s}:\;0\leq\lambda\leq\delta\}.\] Similarly, given \(h\in H^{2}(\mathbb{D}),\) the smallest closed subspace in \(H^{2}(\mathbb{D})\) containing all \(h(z)\phi^{\lambda}(z)\) for \(0\leq\lambda\leq\delta\) is denoted by \[A(h):=\bigvee\{h\phi^{\lambda}:\;0\leq\lambda\leq\delta\},\] where \(\phi^{\lambda}(z)=\exp(-\lambda(1-z)/(1+z)).\) Two corollaries follow on Hardy spaces over \(\mathbb{C}_{+}\) and \(\mathbb{D}.\) **Corollary 3.2**.: _[_14_, Corollary 2.3]_ _In \(H^{2}(\mathbb{C}_{+}),\) the Laplace transform of \([e_{\delta}]_{s}\) is_ \[\mathcal{L}([e_{\delta}]_{s})=N\left(\frac{1}{1+s}\right)=K_{\frac{1-s}{1+s}e^{-\delta s}},\] _where \(K_{\frac{1-s}{1+s}e^{-\delta s}}\) is a model space in \(H^{2}(\mathbb{C}_{+}).\)_ **Corollary 3.3**.: _[_14_, Corollary 2.4]_ _In \(H^{2}(\mathbb{D}),\) it holds that_ \[A(1)=K_{z\phi^{\delta}}.\] After that, Proposition 3.1 was generalized to the more general subspaces \([f_{\delta,n}]_{s}\) in \(L^{2}(0,\infty)\) with \[f_{\delta,n}(\zeta)=\frac{(\zeta-\delta)^{n}}{n!}e_{\delta}(\zeta),\;n\in\mathbb{N}_{0}.\] By the Laplace transform \(\mathcal{L}\) in (1.2), it follows that \(\mathcal{L}([f_{\delta,n}]_{s})=N(1/(1+s)^{n+1})\). The corresponding (cyclic) nearly invariant subspaces in Hardy spaces are presented in the following theorem, which is the generalization of Corollaries 3.2 and 3.3. **Theorem 3.4**.: _[_14_, Theorem 3.5]_ _For any \(n\in\mathbb{N}_{0}\) and \(\delta>0\), the following statements are true._ (1) _In \(H^{2}(\mathbb{C}_{+}),\) it holds that the cyclic nearly \(\{M(t)^{*}\}_{t\geq 0}\) invariant subspace_ \[N\left(\frac{1}{(1+s)^{n+1}}\right)=K_{\left(\frac{1-s}{1+s}\right)^{n+1}e^{-\delta s}};\] (2) _In \(H^{2}(\mathbb{D}),\) it holds that the cyclic nearly \(\{T(t)^{*}\}_{t\geq 0}\) invariant subspace_ \[A((1+z)^{n})=K_{z^{n+1}\phi^{\delta}}.\] It is observed that the above cyclic nearly invariant subspaces all behave as model spaces, which are invariant subspaces for the backward shift semigroup \(\{M(t)^{*}\}_{t\geq 0}\) (defined in (1.4)). Recall from [16, Theorem 3.1.5] that a non-zero closed subspace \(\mathcal{M}\subseteq H^{2}(\mathbb{C}_{+})\) satisfies \(M(t)\mathcal{M}\subseteq\mathcal{M}\) for all \(t\geq 0\) if and only if \(\mathcal{M}=\phi H^{2}(\mathbb{C}_{+})\) for some inner function \(\phi\in H^{\infty}(\mathbb{C}_{+}).\) So a subspace of \(H^{2}(\mathbb{C}_{+})\) is a model space if and only if it is invariant under \(\{M(t)^{*}\}_{t\geq 0}\). Next we recall two lemmas and give a precise new derivation of the above subspaces \(N(g)\). The first lemma asserts that every nonzero function in \(H^{2}\) is contained in a minimal Toeplitz kernel. **Lemma 3.5**.: _[_1_, Theorem 5.1]_ _Let \(f\in H^{2}\setminus\{0\}\) and let \(f=IO\) be its inner-outer factorization. Then there exists a minimal Toeplitz kernel \(K\) with \(f\in K\), which we write as \(K_{\min}(f)\); moreover,_ \[K_{\min}(f)=\ker T_{\overline{z}\overline{IO}/O}.\] Here the minimality of the Toeplitz kernel \(K_{\min}(f)\) means that \(K_{\min}(f)\subseteq\ker T_{h}\) for any Toeplitz kernel \(\ker T_{h}\) containing \(f.\) The second lemma is proved in [14] and has many powerful applications. 
**Lemma 3.6**.: _[_14_, Lemma 2.7]_ _If \(g(s)\), \(sg(s)\in H^{2}(\mathbb{C}_{+})\), then_ \[sg(s)e^{-st}\in\bigvee\{g(s)e^{-\lambda s},\;|\lambda-t|<\epsilon\}\] _for all \(\epsilon>0\)._ Recall that the reproducing kernel of \(H^{2}(\mathbb{C}_{+})\) is \[k_{w}(s)=\frac{1}{2\pi(s+\overline{w})}.\] Then for \(w\in\mathbb{C}_{+}\) and \(f\in H^{2}(\mathbb{C}_{+})\), it holds that \[f(w)=\langle f,k_{w}\rangle=\int_{-\infty}^{\infty}f(iy)\overline{k_{w}(iy)}dy. \tag{3.1}\] Now we can reformulate (1) of Theorem 3.4 for the case \(n=0\). **Proposition 3.7**.: _The subspace \(N(1/(1+s))\) is backward shift invariant and_ \[N\Big{(}\frac{1}{1+s}\Big{)}=K_{\frac{1-s}{1+s}e^{-\delta s}}.\] Proof.: First we prove \(N(1/(1+s))\) is invariant with respect to \(\{M(t)^{*}\}_{t\geq 0}\). Take \(0\leq\lambda\leq\delta\). For the case \(\alpha\leq\lambda\), \(e^{\alpha s}e^{-\lambda s}/(1+s)\) is clearly in \(N(1/(1+s))\). For the other case \(\alpha>\lambda\), we use (3.1) to obtain the projection of \(e^{\alpha s}e^{-\lambda s}/(1+s)\) into \(H^{2}\) by \[\left\langle e^{\alpha s}\frac{e^{-\lambda s}}{1+s},k_{w}\right\rangle=\left\langle\frac{1}{2\pi(1+s)},\frac{e^{(\lambda-\alpha)s}}{s+\overline{w}}\right\rangle=\frac{e^{\lambda-\alpha}}{1+w},\] since \(1/(2\pi(1+s))\) is a reproducing kernel. In sum, \(N(1/(1+s))\) is a model space. Further, Lemma 3.5 implies that \[K_{\min}\left(\frac{1}{1+s}e^{-\delta s}\right)=\ker T_{\frac{1+s}{1-s}e^{\delta s}}=K_{\frac{1-s}{1+s}e^{-\delta s}}.\] Every Toeplitz kernel containing \(e^{-\delta s}/(1+s)\) also contains \(e^{-\lambda s}/(1+s)\) for \(0\leq\lambda\leq\delta\), so the minimal Toeplitz kernel containing \(e^{-\delta s}/(1+s)\) equals \(N(1/(1+s))\), ending the proof. After that, the derivative-kernel property is used to exhibit (1) of Theorem 3.4 for general \(n\in\mathbb{N}_{0}\). **Theorem 3.8**.: _For any \(n\in\mathbb{N}_{0},\) the subspace \(N(1/(1+s)^{n+1})\) is backward shift invariant and_ \[N\Big{(}\frac{1}{(1+s)^{n+1}}\Big{)}=K_{\big{(}\frac{1-s}{1+s}\big{)}^{n+1}e^{-\delta s}}. \tag{3.2}\] Proof.: By (3.1), it holds that \[\left\langle\frac{1}{2\pi(s+w)},f\right\rangle=\overline{f(\overline{w})}\] for any \(f\in H^{2}(\mathbb{C}_{+}).\) Then differentiating \(n\) times, \[\left\langle\frac{(-1)^{n}n!}{2\pi(s+w)^{n+1}},f\right\rangle=\overline{f^{(n)}(\overline{w})}.\] Thus \[\left\langle f,\frac{1}{2\pi(s+\overline{w})^{n+1}}\right\rangle=(-1)^{n}\frac{f^{(n)}(w)}{n!}. \tag{3.3}\] Again we take \(0\leq\lambda\leq\delta\). On the one hand, if \(\alpha\leq\lambda\), then \(e^{\alpha s}e^{-\lambda s}/(1+s)^{n+1}\in N(1/(1+s)^{n+1}).\) On the other hand, if \(\alpha>\lambda\), using (3.3), we have \[\left\langle\frac{e^{\alpha s}e^{-\lambda s}}{(1+s)^{n+1}},k_{w}\right\rangle = \left\langle\frac{1}{(1+s)^{n+1}},e^{(\lambda-\alpha)s}k_{w}\right\rangle= \frac{2\pi(-1)^{n}}{n!}\overline{\left(e^{(\lambda-\alpha)s}k_{w}(s)\right)^{(n)}}\Big{|}_{s=1},\] which has an expression involving a linear combination of the terms \(1/(1+w),\cdots,\)\(1/(1+w)^{n+1}\). In view of Lemma 3.6, we see that \(s/(1+s)^{n+1}\) lies in \(N(1/(1+s)^{n+1})\) by differentiating with respect to \(\lambda\), and hence so does \(1/(1+s)^{n}=1/(1+s)^{n+1}+s/(1+s)^{n+1}\); iterating this argument gives \(1/(1+s)^{j}\in N(1/(1+s)^{n+1})\) for all \(1\leq j\leq n+1\). This shows that \[P_{H^{2}(\mathbb{C}_{+})}\left[e^{\alpha s}\frac{e^{-\lambda s}}{(1+s)^{n+1}}\right]\in N(1/(1+s)^{n+1})\;\;\text{for}\;\;\alpha\geq\lambda.\] In sum, \(N(1/(1+s)^{n+1})\) is backward shift invariant and so it is a model space. 
Similarly, since every Toeplitz kernel containing \(e^{-\delta s}/(1+s)^{n+1}\) also contains \(e^{-\lambda s}/(1+s)^{n+1}\) for \(0\leq\lambda\leq\delta\), the minimal Toeplitz kernel containing \(e^{-\delta s}/(1+s)^{n+1}\) equals \(N(1/(1+s)^{n+1}).\) Lemma 3.5 implies the minimal Toeplitz kernel containing \(e^{-\delta s}/(1+s)^{n+1}\) is \[K_{\min}\left(\frac{e^{-\delta s}}{(1+s)^{n+1}}\right)=K_{\left(\frac{1-s}{1+s}\right)^{n+1}e^{-\delta s}},\] which establishes (3.2). More generally, given an invertible function \(G\in H^{\infty}(\mathbb{C}_{+})\), although Theorem 3.8 implies \[N\left(\frac{G(s)}{(1+s)^{n+1}}\right)=G(s)N\left(\frac{1}{(1+s)^{n+1}}\right)=G(s)K_{\left(\frac{1-s}{1+s}\right)^{n+1}e^{-\delta s}},\] we cannot even assert that the subspace \(N(G(s)/(1+s)^{n+1})\) is backward shift invariant for an invertible rational function \(G\in H^{\infty}(\mathbb{C}_{+})\). In order to formulate \(N(G(s)/(1+s)^{n+1})\), we need to recall a result on the analytic multipliers between Toeplitz kernels. In the following lemma, the analytic function \(k\in\mathcal{C}(\ker T_{g})\) means \(|k|^{2}dm\) is a Carleson measure for \(\ker T_{g}\), that is, \(k\ker T_{g}\subseteq L^{2}(\mathbb{T}).\) We say \(\mu\) is a Carleson measure for a subspace \(X\) of \(H^{2}(\mathbb{D})\) if there is a constant \(C>0\) such that \[\int_{\mathbb{T}}|f|^{2}d\mu\leq C\|f\|_{2}^{2}\;\;\text{for all}\;f\in X,\] where \(X\) is a Toeplitz kernel (model spaces included). **Lemma 3.9**.: _[_4_, Theorem 3.3]_ _Let \(g,h\in L^{\infty}(\mathbb{T})\) such that \(\ker T_{g}\) and \(\ker T_{h}\) are nontrivial. For \(k\in H(\mathbb{D})\), it holds that \(k\ker T_{g}=\ker T_{h}\) if and only if \(k\in\mathcal{C}(\ker T_{g}),\;k^{-1}\in\mathcal{C}(\ker T_{h})\) and_ \[h=g\frac{\overline{k}}{k}\frac{\overline{q}}{\overline{p}} \tag{3.4}\] _for some outer functions \(p,q\in H^{2}(\mathbb{D})\)._ Applying Lemma 3.9 with inner functions \(g=\overline{\theta}\) and \(h=\overline{\psi}\), a remark follows. _Remark 3.10_.: Suppose that \(\theta\) and \(\psi\) are inner functions and \(kK_{\theta}=K_{\psi}\) with \(k\in H(\mathbb{D})\), then it holds that \(K_{\psi}=\ker T_{\overline{\theta}\overline{k}/k}=K_{\theta k/\overline{k}}\). Proof.: Firstly, we note \(\theta k/\overline{k}\) is unimodular in \(L^{\infty}(\mathbb{T})\). It also equals \(\psi p/q\) from (3.4), which is in the Smirnov class, so \(\theta k/\overline{k}\) lies in \(H^{\infty}(\mathbb{D})\) and is inner. This entails \(\psi=\gamma\theta k/\overline{k}\), with \(\gamma\in\mathbb{C}\) and \(|\gamma|=1\), which is Crofoot's result from [8]. Now Lemma 3.9 is applied to formulate \(N(G(s)/(1+s)^{n+1})\) for an invertible rational \(G\).
**Proposition 3.11**.: _Let \(G\) be a rational and invertible function in \(H^{\infty}(\mathbb{C}_{+})\), then for any non-negative integer \(n,\) it holds that_ \[N\left(\frac{G(s)}{(1+s)^{n+1}}\right)=G(s)K_{\theta}=\ker T_{h}\] _if and only if \(G\in\mathcal{C}(\ker T_{\overline{\theta}})\) and \(G^{-1}\in\mathcal{C}(\ker T_{h})\) with_ \[\theta(s)=\left(\frac{1-s}{1+s}\right)^{n+1}e^{-\delta s}\;\;\text{and}\;\;h=\overline{\theta}\,\frac{\overline{G}}{G}\,\frac{\overline{q}}{\overline{p}}\] _for some outer functions \(p,q\in H^{2}(\mathbb{C}_{+})\)._ Proof.: Since \(G\in H^{\infty}(\mathbb{C}_{+})\) is rational and invertible, it follows that \[N\left(\frac{G(s)}{(1+s)^{n+1}}\right)=G(s)K_{\theta}\;\;\text{with}\;\theta(s)=\left(\frac{1-s}{1+s}\right)^{n+1}e^{-\delta s}.\] Replacing \(k\) and \(g\) by \(G\) and \(\overline{\theta}\) in Lemma 3.9, the equivalence clearly follows. Then Proposition 3.11 is used in Example 3.12 to show \(N(G(s)/(1+s)^{n+1})\) can sometimes be a Toeplitz kernel but not a model space for an invertible rational \(G\in H^{\infty}(\mathbb{C}_{+})\). _Example 3.12_.: It holds that \[N\left(\frac{s+3}{(1+s)(s+2)}\right)=GK_{\theta}=\ker T_{\overline{\theta}\,\overline{G}\,\overline{q}/(G\,\overline{p})} \tag{3.5}\] where \(G(s)=(s+3)/(s+2),\;\theta(s)=(1-s)e^{-\delta s}/(1+s)\) and \(p,q\) are outer. Proof.: Firstly, taking out an invertible factor, Proposition 3.7 shows that \[N\left(\frac{s+3}{(1+s)(s+2)}\right)=\frac{s+3}{s+2}N\left(\frac{1}{1+s}\right)=GK_{\theta}\] with \(G\) and \(\theta\) given in this example. We conclude that \(GK_{\theta}\) is not a model space, otherwise Remark 3.10 implies that \(\theta G/\overline{G}\) would be inner, a contradiction, since it has a pole at \(s=3\). Next we show \(GK_{\theta}\) is a Toeplitz kernel \(\ker T_{h}\). Proposition 3.11 yields that \[h=\overline{\theta}\,\frac{\overline{G}}{G}\,\frac{\overline{q}}{\overline{p}}\] with outer functions \(p,q\) and \(G\in\mathcal{C}(\ker T_{\overline{\theta}})\), \(G^{-1}\in\mathcal{C}(\ker T_{h})\). Meanwhile \[\overline{\theta}\frac{\overline{G}}{G}=e^{\delta s}\frac{1+s}{1-s}\frac{2+s}{2-s}\frac{3-s}{3+s},\] which is clearly a unimodular symbol. So the subspace in (3.5) follows. For completeness, we exhibit the following example with a rational non-invertible \(G(s)=1/(s+2)\), which behaves as a model space. _Example 3.13_.: It holds that \[N\left(\frac{1}{(1+s)(s+2)}\right)=K_{\frac{1-s}{1+s}\frac{2-s}{2+s}e^{-\delta s}}.\] Proof.: Using the bijection \(V^{-1}\) in (1.2) to map \(N\left(1/(1+s)(s+2)\right)\subseteq H^{2}(\mathbb{C}_{+})\) onto \(A\left((z+1)/(z+3)\right)\subseteq H^{2}(\mathbb{D}),\) Theorem 3.4 implies that \[A\left(\frac{z+1}{z+3}\right)=\frac{1}{z+3}K_{z^{2}\phi^{\delta}}.\] Further employing Remark 3.10, it holds that \[\frac{1}{z+3}K_{z^{2}\phi^{\delta}}=K_{z\phi^{\delta}\frac{3z+1}{z+3}}.\] Then mapping the above subspace back into \(H^{2}(\mathbb{C}_{+}),\) it yields the desired model space. ## 4. The subspace \(N(g)\) with a rational outer function \(g\) In this section, we concentrate on finding the subspace \(N(g)\) with a rational outer function \(g\in H^{2}(\mathbb{C}_{+})\), noting that \(N(\theta h)=\theta N(h)\) for an inner function \(\theta.\) The case of a rational outer function \(g\) is fundamental: such a \(g\) can be factored as the product of an invertible function and one with zeros on the imaginary axis, including infinity.
By the isometric isomorphism \(V^{-1}\) in (1.2), we can map the function with zeros in \(i\mathbb{R}\cup\{\infty\}\) into a function in \(H^{2}(\mathbb{D})\) with zeros in the unit circle \(\mathbb{T}\). In what follows, we turn to formulate \(A(h)\) for a rational function \(h\) with \(n\) zeros on \(\mathbb{T}\). In our recent paper [14], we obtained the following proposition. **Proposition 4.1**.: _[_14_, Proposition 3.10]_ _Let \(\widetilde{p}_{n}(z):=\prod_{j=1}^{n}(z+w_{j})\) with \(w_{j}\in\mathbb{T}\), \(j=1,\cdots,n\); then it follows that_ \[A(\widetilde{p}_{n})+\phi^{\delta}K_{z^{n}}=K_{z^{n+1}\phi^{\delta}}. \tag{4.1}\] In order to explore the concrete description of \(A(\widetilde{p}_{n}),\) several lemmas are cited for properties of model spaces. **Lemma 4.2**.: _[_10_, Proposition 5.5]_ _If \(\varphi\in H^{\infty}\) and \(\theta\) is inner, then \(T_{\overline{\varphi}}K_{\theta}\subseteq K_{\theta}.\)_ **Lemma 4.3**.: _[_10_, Lemma 5.10]_ _Let \(\theta_{1}\) and \(\theta_{2}\) be inner. Then_ \[K_{\theta_{1}\theta_{2}}=K_{\theta_{1}}\oplus\theta_{1}K_{\theta_{2}}.\] **Lemma 4.4**.: _[_10_, Proposition 5.4]_ _For an inner function \(\theta\), the model space \(K_{\theta}\) is the set of all \(f\in H^{2}\) such that_ \[f=\overline{g}\overline{z}\theta\] _almost everywhere on \(\mathbb{T}\) for some \(g\in H^{2}.\)_ _Remark 4.5_.: From the above formula, it also holds \(g=\overline{f}\overline{z}\theta\), so \(g\in K_{\theta}\) too. The proposition below is crucial, which involves Proposition 4.1 with \(n=1\). **Proposition 4.6**.: _For \(w\in\mathbb{T},\) it holds that_ \[A(z+w)=K_{z^{2}\phi^{\delta}}.\] Proof.: We will compute the orthogonal complement of \(A(z+w)\) in \(K_{z^{2}\phi^{\delta}}.\) Lemma 4.3 and Proposition 4.1 imply \[A(z+w)+\phi^{\delta}K_{z}=K_{z^{2}\phi^{\delta}}=K_{z\phi^{\delta}}\oplus z\phi^{\delta}K_{z}.\] On the one hand, for the vector \(\alpha z\phi^{\delta}\in z\phi^{\delta}K_{z},\ \alpha\in\mathbb{C},\) it holds that \[\langle(z+w)\phi^{\lambda},\alpha z\phi^{\delta}\rangle=\langle 1,\alpha\phi^{\delta-\lambda}\rangle+w\langle 1,\alpha z\phi^{\delta-\lambda}\rangle=\overline{\alpha\phi^{\delta-\lambda}(0)}=0\] for all \(0\leq\lambda\leq\delta\) if and only if \(\alpha=0\). This means \[z\phi^{\delta}K_{z}\cap[A(z+w)]^{\perp}=\{0\}.\] On the other hand, we assume \(f\in K_{z\phi^{\delta}}\) such that \(\langle(z+w)\phi^{\lambda},f\rangle=0\) for all \(0\leq\lambda\leq\delta.\) By Lemma 4.4, we suppose \(f=\overline{g}\overline{z}z\phi^{\delta}=\overline{g}\phi^{\delta}\) with \(g\in K_{z\phi^{\delta}}\) by Remark 4.5. It turns out \[\langle(z+w)\phi^{\lambda},f\rangle=\langle(z+w)\phi^{\lambda},\overline{g}\phi^{\delta}\rangle=\langle(z+w)g,\phi^{\delta-\lambda}\rangle=0,\] for all \(0\leq\lambda\leq\delta.\) Since \(\bigvee\{\phi^{\delta-\lambda}:\ 0\leq\lambda\leq\delta\}=K_{z\phi^{\delta}}\) (see [14, Corollary 2.4]), we have that \((z+w)g\in z\phi^{\delta}H^{2}\). By the uniqueness of inner-outer factorization, we suppose \[(z+w)g(z)=z\phi^{\delta}h\] for some \(h\in H^{2}.\) Considering \(g\in K_{z\phi^{\delta}}=\ker T_{\overline{z\phi^{\delta}}}\), we obtain \[\overline{z\phi^{\delta}}(z+w)g=h\in(z+w)\overline{zH^{2}}.\] We further denote \(h=(z+w)\overline{zk}=(1+w\overline{z})\overline{k}\in H^{2}\) for some \(k\in H^{2}.\) Comparing Fourier coefficients, \(h\in H^{2}\) forces \(k_{n}=-\overline{w}k_{n-1}\) for all \(n\geq 1\), so all coefficients of \(k\) share the same modulus; since \(k\in H^{2}\), this gives \(k=0\), hence \(h=0\) and \(g=0\), then \(f=0\).
Thus \[K_{z\phi^{\delta}}\cap[A(z+w)]^{\perp}=\{0\}.\] Finally, we take \(f+\alpha z\phi^{\delta}\in K_{z^{2}\phi^{\delta}}\) with \(f=\overline{g}\phi^{\delta}\in K_{z\phi^{\delta}}\) and \(\alpha\neq 0\) such that \[\langle(z+w)\phi^{\lambda},f+\alpha z\phi^{\delta}\rangle=\langle\overline{\alpha},\phi^{\delta-\lambda}\rangle+\langle(z+w)g,\phi^{\delta-\lambda}\rangle=\langle\overline{\alpha}+(z+w)g,\phi^{\delta-\lambda}\rangle=0\ \text{ for all }0\leq\lambda\leq\delta.\] This implies \(\overline{\alpha}+(z+w)g\in z\phi^{\delta}H^{2}.\) Again write \(\overline{\alpha}+(z+w)g=z\phi^{\delta}\hat{h}\) for some \(\hat{h}\in H^{2},\) and then \((z+w)g=z\phi^{\delta}\hat{h}-\overline{\alpha}\). Due to \(g\in K_{z\phi^{\delta}}\), we have that \[(z+w)\overline{z\phi^{\delta}}g=\overline{z\phi^{\delta}}(z\phi^{\delta}\hat{h}-\overline{\alpha})\in(z+w)\overline{zH^{2}}.\] This, together with \(w\in\mathbb{T}\), implies that \[\hat{h}-\overline{\alpha z\phi^{\delta}}\in(z+w)\overline{zH^{2}}=\overline{(1+\overline{w}z)H^{2}}\subseteq\overline{H^{2}}.\] Since \(\alpha\neq 0\), it follows that \(\hat{h}\in H^{2}\cap\overline{H^{2}}.\) So \(\hat{h}=\hat{h}(0):=\beta\in\mathbb{C},\) so that \((z+w)g=\beta z\phi^{\delta}-\overline{\alpha}\), and then \[\overline{(z+w)}f=\overline{(z+w)g}\,\phi^{\delta}=(\overline{\beta z\phi^{\delta}}-\alpha)\phi^{\delta}=\overline{\beta z}-\alpha\phi^{\delta}.\] Since \(f\in H^{2}\), letting \(z\to-w\) in the above formula, it follows that \(\overline{\beta}=-w\alpha\phi^{\delta}(-w).\) And then it holds that \[(\overline{z+w})f=\alpha(-w\phi^{\delta}(-w)\overline{z}-\phi^{\delta}),\ \alpha\in\mathbb{C}\setminus\{0\}.\] Letting \(z=0\) in the above formula, it follows that \[f(0)=-\alpha we^{-\delta}. \tag{4.2}\] Next we show \(f\notin K_{z\phi^{\delta}}.\) On the contrary, suppose \(f\in K_{z\phi^{\delta}}\); Lemma 4.2 implies \(\overline{z}(f-f(0))+\overline{w}f=(\overline{z+w})f-\overline{z}f(0)\in K_{z\phi^{\delta}}\). It turns out that \[\alpha(-w\phi^{\delta}(-w)\overline{z}-\phi^{\delta})-f(0)\overline{z}\in K_{z\phi^{\delta}},\] implying \(f(0)=-\alpha w\phi^{\delta}(-w)\) due to \(\phi^{\delta}\in K_{z\phi^{\delta}}\). This contradicts (4.2) due to \(w\in\mathbb{T}\), so \(f\notin K_{z\phi^{\delta}}.\) In sum, we conclude \(A(z+w)=K_{z^{2}\phi^{\delta}}\). Now the concrete form of \(A\Big{(}\prod_{j=1}^{n}(z+w_{j})\Big{)}\) with \(w_{j}\in\mathbb{T}\), \(j=1,\cdots,n\) can be determined by Proposition 4.6 and mathematical induction. This improves (4.1). **Proposition 4.7**.: _Let \(\widetilde{p}_{n}(z):=\prod_{j=1}^{n}(z+w_{j})\) with \(w_{j}\in\mathbb{T}\), \(j=1,\cdots,n;\) then it follows that_ \[A(\widetilde{p}_{n})=K_{z^{n+1}\phi^{\delta}}. \tag{4.3}\] Proof.: Proposition 4.6 implies (4.3) holds for \(n=1\). Suppose now that (4.3) holds for the case \(n-1\); we prove it for general \(n\).
Firstly, we have \[A(\widetilde{p}_{n})=\overline{(z+w_{n})A(\widetilde{p}_{n-1})}=\overline{(z+w_{n})K_{z^{n}\phi^{\delta}}}\subseteq\overline{zK_{z^{n}\phi^{\delta}}+K_{z^{n}\phi^{\delta}}}.\] Since \[\langle\prod_{j=1}^{n-2}(z+w_{j})\phi^{\lambda},z^{n}\phi^{\delta}\rangle=0\ \text{ and }\ \langle z\prod_{j=1}^{n-2}(z+w_{j})\phi^{\lambda},z^{n}\phi^{\delta}\rangle=0\] for \(0\leq\lambda\leq\delta,\) it follows that \[z(z+w_{n})\prod_{j=1}^{n-2}(z+w_{j})\phi^{\lambda},\ (z+w_{n})\prod_{j=1}^{n-2}(z+w_{j})\phi^{\lambda}\in\overline{(z+w_{n})K_{z^{n}\phi^{\delta}}}\] for \(0\leq\lambda\leq\delta.\) The result of the case \(n-1\) implies that \[A\Big{(}(z+w_{n})\prod_{j=1}^{n-2}(z+w_{j})\Big{)}=K_{z^{n}\phi^{\delta}}.\] Thus it follows that \[\overline{zK_{z^{n}\phi^{\delta}}+K_{z^{n}\phi^{\delta}}}\subseteq A(\widetilde{p}_{n}).\] In sum, we conclude that \[A(\widetilde{p}_{n})=\overline{zK_{z^{n}\phi^{\delta}}+K_{z^{n}\phi^{\delta}}}. \tag{4.4}\] Now for any \(f\perp A(\widetilde{p}_{n}),\) (4.4) implies \(f\perp K_{z^{n}\phi^{\delta}}\) and \(f\perp zK_{z^{n}\phi^{\delta}},\) which means \(f\in z^{n}\phi^{\delta}H^{2}(\mathbb{D})\) and \(S^{*}f\in z^{n}\phi^{\delta}H^{2}(\mathbb{D}).\) So we can suppose \(f=z^{n}\phi^{\delta}h\) with some \(h\in H^{2}(\mathbb{D}),\) and then \(S^{*}f=z^{n-1}\phi^{\delta}h\in z^{n}\phi^{\delta}H^{2}(\mathbb{D}),\) verifying that \(z\) divides \(h\). Hence \(f\in z^{n+1}\phi^{\delta}H^{2}(\mathbb{D}),\) which shows \[K_{z^{n+1}\phi^{\delta}}\subseteq A(\widetilde{p}_{n}).\] Further, since \(K_{z^{n}\phi^{\delta}}\subseteq K_{z^{n+1}\phi^{\delta}}\) and \(zK_{z^{n}\phi^{\delta}}\subseteq K_{z^{n+1}\phi^{\delta}},\) combining with (4.4) we obtain \(A(\widetilde{p}_{n})\subseteq K_{z^{n+1}\phi^{\delta}}\). In sum, we verify (4.3). For a general rational outer function \(h\in H^{2}(\mathbb{D}),\) it can be factored as the product of an invertible function and a function with zeros on the unit circle. At this point, combining with Remark 3.10, we can present the characterization of \(A(h)\), which essentially improves [14, Theorem 3.11]. **Theorem 4.8**.: _Suppose \(h(z)=\widetilde{p}_{n}(z)q(z)\) with \(\widetilde{p}_{n}(z):=\prod_{j=1}^{n}(z+w_{j})\), where \(w_{j}\in\mathbb{T}\), \(j=1,\cdots,n,\) and \(q\) is an invertible rational function in \(L^{\infty}(\mathbb{T})\). Then_ \[A(h)=qK_{z^{n+1}\phi^{\delta}}.\] Accordingly, a general rational function \(g\in H^{2}(\mathbb{C}_{+})\) can be decomposed as \(g=G_{1}G_{2}\), where \(G_{1}\) is invertible in \(L^{\infty}(i\mathbb{R})\) and \(G_{2}\) only has zeros in \(i\mathbb{R}\cup\{\infty\}.\) We can further take the denominator of \(G_{2}\) to be a power of \((1+s)\), and the exponent is always at least \(1\) since the function lies in \(H^{2}(\mathbb{C}_{+}).\) Now suppose the degrees of the numerator and denominator of \(G_{2}\) are \(m\) and \(n\), respectively. This means \(m\) is the number of imaginary axis zeros of \(g\), and \(g\) is asymptotic to \(s^{m-n}\) at \(\infty\), so \(n>m\). In particular, we write \(G_{2}(s)=\prod_{k=1}^{m}(s-y_{k})/(1+s)^{n}\) with all \(y_{k}\in i\mathbb{R}\).
Using the relation between \(H^{2}(\mathbb{D})\) and \(H^{2}(\mathbb{C}_{+}),\) we deduce the formula for \[N(g)=N(G_{1}G_{2})=G_{1}N\left(\frac{\prod_{k=1}^{m}(s-y_{k})}{(1+s)^{n}}\right)\ \text{with}\ n>m.\] Now for integers \(n>m\geq 1\), Proposition 4.7 implies the subspace \(N\left(\prod_{k=1}^{m}(s-y_{k})/(1+s)^{n}\right)\) corresponds to \[A\Big{(}(z+1)^{n-m-1}\prod_{k=1}^{m}(z-w_{k})\Big{)}=K_{z^{n}\phi^{\delta}}\] with \(w_{k}=(1-y_{k})/(1+y_{k})\in\mathbb{T},\ k=1,\cdots,m.\) Mapping back by the isometric isomorphism \(V\), the proposition below follows. **Proposition 4.9**.: _For integers \(n>m\geq 1,\) and \(y_{k}\in i\mathbb{R},\)\(k=1,2,\cdots,m\), it holds that_ \[N\left(\frac{\prod_{k=1}^{m}(s-y_{k})}{(1+s)^{n}}\right)=K_{\left(\frac{1-s}{1+s}\right)^{n}e^{-\delta s}}.\] In sum, we can obtain the characterization for \(N(g)\) with a rational outer function \(g\in H^{2}(\mathbb{C}_{+}),\) which essentially improves [14, Theorem 3.13]. **Theorem 4.10**.: _Let \(g\in H^{2}(\mathbb{C}_{+})\) be rational with \(m\) zeros on the imaginary axis and let \(n>m\) be such that \(s^{n-m}g(s)\) tends to a finite nonzero limit at \(\infty.\) Then \(g\) can be written as \(g=G_{1}G_{2}\), where \(G_{1}\) is rational and invertible in \(L^{\infty}(i\mathbb{R})\) and \(G_{2}(s)=\prod_{k=1}^{m}(s-y_{k})/(1+s)^{n}\) with all \(y_{k}\in i\mathbb{R},\) and it holds that_ \[N(g)=G_{1}(s)K_{\left(\frac{1-s}{1+s}\right)^{n}e^{-\delta s}}.\] For a general function \(h\in H^{2}(\mathbb{D}),\) there follow two remarks on \(A(h).\) _Remark 4.11_.: (1) Suppose \(h\in H^{2}(\mathbb{D})\) has a nontrivial inner factor; then \(A(h)\) is still of the form \(uK\) with a model space \(K\), but now \(u\) is not outer. So \(A(h)\) is not a Toeplitz kernel, due to the near invariance of Toeplitz kernels. (2) Generally, factorize the function \(h\) as \(h=\theta up\) where \(\theta\) is inner and \(u\) is invertible in \(H^{\infty}\) (hence outer). Then \(p\) is non-invertible and outer. If \(p\) is rational then it has factors \(z+w_{j}\) with \(w_{j}\in\mathbb{T}\) and the characterization of \(A(h)=\theta uA(p)\) follows from Theorem 4.8.
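Before illustrating Theorem 4.10, a quick numerical sanity check of the Cayley correspondence invoked above can be run in Python; this snippet is an added illustration (not part of the original argument) and only verifies that imaginary-axis zeros \(y_{k}\) are sent to unimodular points \(w_{k}=(1-y_{k})/(1+y_{k})\):

```python
# Illustrative check: the Cayley map sends each imaginary-axis zero y_k
# to w_k = (1 - y_k)/(1 + y_k) on the unit circle T, which is why the
# factors (s - y_k) above correspond to disc factors with zeros on T.
import numpy as np

rng = np.random.default_rng(0)
for t in rng.uniform(-10.0, 10.0, size=5):
    y = 1j * t                    # a zero y_k on the imaginary axis
    w = (1 - y) / (1 + y)         # its image under the Cayley map
    assert abs(abs(w) - 1.0) < 1e-12
print("all sampled imaginary-axis zeros map to the unit circle")
```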
As the extension of Proposition 3.1, Theorem 4.10 is applied to concretely illustrate two smallest nearly \(\{S(t)^{*}\}_{t\geq 0}\) invariant subspaces containing the functions related to \(e_{\delta}(\zeta):=e^{-\zeta}\chi_{(\delta,\infty)}(\zeta).\) _Example 4.12_.: (1) For \(m\in\mathbb{N}\) and \(\delta>0,\) consider the smallest cyclic subspace \([f_{m}]_{s}\) containing the function \[f_{m}(\zeta)=\sum_{k=0}^{m}f_{\delta,k}(\zeta)=\sum_{k=0}^{m}\frac{(\zeta-\delta)^{k}}{k!}e_{\delta}(\zeta)\in L^{2}(0,\infty).\] By the Laplace transform \(\mathcal{L}\) in (1.2), we map \([f_{m}]_{s}\) onto \[N\left(\sum_{k=0}^{m}\frac{1}{(1+s)^{k+1}}\right)=F_{m}(s)K_{\frac{1-s}{1+s}e^{-\delta s}},\] where \[F_{m}(s):=\sum_{k=0}^{m}\frac{1}{(1+s)^{k+1}}=\frac{(1+s)^{m+1}-1}{s(1+s)^{m+1}}\] is rational and invertible in \(H^{2}(\mathbb{C}_{+}).\) (2) For \(m\in\mathbb{N}\) and \(0<\delta_{1}<\delta_{2}<\cdots<\delta_{m},\) consider the smallest cyclic subspace \([g_{m}]_{s}\) containing the function \[g_{m}(\zeta)=\sum_{k=1}^{m}e_{\delta_{k}}(\zeta)\in L^{2}(0,\infty).\] Similarly, employing the Laplace transform \(\mathcal{L}\) in (1.2), \([g_{m}]_{s}\) is mapped onto \[N\left(\sum_{k=1}^{m}e^{-(\delta_{k}-\delta_{1})}e^{-(\delta_{k}-\delta_{1})s}\right)=H_{m}(s)K_{\frac{1-s}{1+s}e^{-\delta_{1}s}},\] where \[H_{m}(s):=\sum_{k=1}^{m}e^{-(\delta_{k}-\delta_{1})}e^{-(\delta_{k}-\delta_{1})s}\] is an invertible function. _Open Question._ How can one characterize the subspace \(A(q)\subseteq H^{2}(\mathbb{D})\) when \(q\) is non-invertible with an irrational outer factor?
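As an added illustrative check (not in the original text), the closed form of \(F_{m}\) in Example 4.12(1) can be confirmed symbolically:

```python
# Symbolic check of the geometric sum in Example 4.12(1):
# sum_{k=0}^{m} 1/(1+s)^(k+1) = ((1+s)^(m+1) - 1) / (s (1+s)^(m+1)).
import sympy as sp

s = sp.symbols('s')
for m in range(5):
    lhs = sum(1 / (1 + s)**(k + 1) for k in range(m + 1))
    rhs = ((1 + s)**(m + 1) - 1) / (s * (1 + s)**(m + 1))
    assert sp.cancel(lhs - rhs) == 0
print("closed form of F_m verified for m = 0, ..., 4")
```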
2309.17226
Differentiable Optimization Based Time-Varying Control Barrier Functions for Dynamic Obstacle Avoidance
Control barrier functions (CBFs) provide a simple yet effective way for safe control synthesis. Recently, work has been done using differentiable optimization (diffOpt) based methods to systematically construct CBFs for static obstacle avoidance tasks between geometric shapes. In this work, we extend the application of diffOpt CBFs to perform dynamic obstacle avoidance tasks. We show that by using the time-varying CBF (TVCBF) formulation, we can perform obstacle avoidance for dynamic geometric obstacles. Additionally, we show how to extend the TVCBF constraint to consider measurement noise and actuation limits. To demonstrate the efficacy of our proposed approach, we first compare its performance with a model predictive control based method and a circular CBF based method on a simulated dynamic obstacle avoidance task. Then, we demonstrate the performance of our proposed approach in experimental studies using a 7-degree-of-freedom Franka Research 3 robotic manipulator.
Bolun Dai, Rooholla Khorrambakht, Prashanth Krishnamurthy, Farshad Khorrami
2023-09-29T13:25:24Z
http://arxiv.org/abs/2309.17226v3
# Differentiable Optimization Based Time-Varying Control Barrier Functions for Dynamic Obstacle Avoidance ###### Abstract Control barrier functions (CBFs) provide a simple yet effective way for safe control synthesis. Recently, work has been done using differentiable optimization based methods to systematically construct CBFs for static obstacle avoidance tasks between geometric shapes. In this work, we extend the application of differentiable optimization based CBFs to perform dynamic obstacle avoidance tasks. We show that by using the time-varying CBF (TVCBF) formulation, we can perform obstacle avoidance for dynamic geometric obstacles. Additionally, we show how to alter the TVCBF constraint to consider measurement noise and actuation limits. To demonstrate the efficacy of our proposed approach, we first compare its performance with a model predictive control based method on a simulated dynamic obstacle avoidance task with non-ellipsoidal obstacles. Then, we demonstrate the performance of our proposed approach in experimental studies using a 7-degree-of-freedom Franka Research 3 robotic manipulator. ## I Introduction With robotic systems being deployed in more complicated and dynamic scenarios, it is crucial to ensure their safe interaction with the environment [1, 2, 3, 4]. Model predictive control (MPC) based methods [2] are widely used for safety-critical tasks. However, the computation time for MPC-based methods greatly increases when dealing with nonlinear system dynamics and nonconvex safety constraints. Recently, control barrier functions (CBFs) [5] have gained popularity in ensuring safety in robotic applications. CBF-based methods transform nonlinear and nonconvex safety constraints into linear ones [6, 7, 8], which makes them more suitable when considering complex safe set geometries. CBFs have been previously used for operational space control [9], teleoperation of quadrotors [10], and kinematic control [11]. However, in [9, 10, 11], the CBF is formulated between a point and a geometric shape. This makes it challenging to adapt the methods in [9, 10, 11] to tasks where the geometry of both the robot and its safety constraints cannot be modeled as a point. To extend CBF-based methods to consider the geometry of both the robot and the environment, in [12], the authors used a duality-based convex optimization problem that models the robot and obstacles as polytopes. In [13], the authors provided a more general approach, where the CBF is formulated using the signed-distance function (SDF) between two geometric shapes. However, since SDFs are not globally continuously differentiable, [13] used an approximation of the SDF-based CBF in the CBF constraint. This resulted in a conservative constraint that reduced the feasible set. An alternative solution is provided in [14], where a differentiable optimization based CBF is used, which recovers the entire safe set. However, the aforementioned works only consider time-invariant safe sets and cannot be directly deployed when the problem has a time-varying safe set, such as for dynamic obstacle avoidance tasks. The ability to avoid dynamic obstacles is crucial when deploying robotic systems in the real world. Early works studying dynamic obstacle avoidance tasks proposed using velocity obstacles [15] and dynamic windows [16]. More recently, work has been done using MPC-based methods that utilize adaptively updated half-space constraints [17].
However, in addition to the limitations of MPC-based approaches, [17] models both the robot and the obstacle as spheres, which may generate overly conservative motions when the robot and obstacle geometry consists of shapes that cannot be closely approximated as spheres, e.g., the links of robot manipulators and flat boards. Work has also combined MPC and CBF-based methods for dynamic obstacle avoidance [18]. Similar to [17], the robot and obstacle geometries are limited to spheres and ellipsoids. This paper extends the differentiable optimization based CBF approach to handle time-varying safe sets and applies the proposed method to dynamic obstacle avoidance tasks. The proposed method can handle a wider variety of geometries [19]. The main contribution of this paper is threefold: (1) extending the differentiable optimization based CBF formulation to settings with time-varying safe sets; (2) proposing approaches to make the CBF-based controller account for state estimation uncertainty and actuation limits; (3) showing the efficacy of our proposed method through simulations and real-life experiments. Fig. 1: Control pipeline for using time-varying differentiable optimization based control barrier functions for dynamic obstacle avoidance. The remainder of this paper is structured as follows. Section II briefly reviews CBFs and differentiable optimization based CBFs. In Section III, the safe robotic control problem for dynamic obstacle avoidance is formulated. In Section IV, we present the time-varying differentiable optimization based CBF formulation and how to account for state estimation uncertainty and actuation limits. In Section V, we show the efficacy of our approach by comparing our proposed approach with an MPC-based method on a dynamic obstacle avoidance task with non-ellipsoid obstacles in simulation and two dynamic obstacle avoidance tasks in the real world using the 7-DOF Franka Research 3 (FR3) robotic arm. Finally, in Section VI, we conclude the paper with a discussion on future directions. ## II Preliminaries In this section, we provide a brief introduction to CBFs and differentiable optimization based CBFs. ### _Control Barrier Function_ Consider a control affine system \[\dot{x}=F(x)+G(x)u \tag{1}\] where the state is represented as \(x\in\mathbb{R}^{n}\), the control as \(u\in\mathbb{R}^{m}\), the drift as \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), and the control matrix as \(G:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\). Let the set \(\mathcal{C}\), with \(\mathcal{C}\subset\mathcal{D}\subset\mathbb{R}^{n}\), be the 0-superlevel set of a continuously differentiable function \(h:\mathcal{D}\rightarrow\mathbb{R}\) that has the property \(\partial h/\partial x\neq 0\) for all \(x\in\partial\mathcal{C}\), where \(\partial\mathcal{C}\subset\mathbb{R}^{n}\) represents the boundary of \(\mathcal{C}\). Then, if \[\sup_{u\in\mathcal{U}}\Big{[}\frac{\partial h(x)}{\partial x}\Big{(}F(x)+G(x)u\Big{)}\Big{]}\geq-\Gamma(h(x)) \tag{2}\] holds for all \(x\in\mathcal{D}\) with \(\Gamma:\mathbb{R}\rightarrow\mathbb{R}\) being an extended class \(\mathcal{K}_{\infty}\) function1, then \(h\) is a CBF on \(\mathcal{C}\). Footnote 1: Extended class \(\mathcal{K}_{\infty}\) functions are strictly increasing with \(\Gamma(0)=0\).
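To make condition (2) concrete, the following is a minimal safety-filter sketch (an added illustration, not code from the paper): it assumes single-integrator dynamics \(\dot{x}=u\), a circular obstacle with \(h(x)=\|x-x_{o}\|^{2}-r^{2}\), and a linear class-\(\mathcal{K}_{\infty}\) function \(\Gamma(h)=\gamma h\); with a single affine constraint, the CBF quadratic program has a closed-form solution.

```python
# Minimal CBF safety filter sketch (illustrative assumptions as above).
# min ||u - u_ref||^2  s.t.  dh/dx @ u >= -gamma * h  has a closed-form
# solution: project u_ref onto the halfspace when it violates it.
import numpy as np

def cbf_filter(x, u_ref, x_obs, r, gamma=1.0):
    h = np.dot(x - x_obs, x - x_obs) - r**2   # barrier value
    a = 2.0 * (x - x_obs)                     # gradient dh/dx
    b = -gamma * h                            # constraint: a @ u >= b
    if a @ u_ref >= b:                        # reference already safe
        return u_ref
    return u_ref + (b - a @ u_ref) * a / (a @ a)  # halfspace projection

x = np.array([-2.0, 0.1])
u = cbf_filter(x, u_ref=np.array([1.0, 0.0]), x_obs=np.zeros(2), r=1.0)
print(u)
```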
### _Differentiable Optimization Based Control Barrier Function Formulation_ In [14], a differentiable optimization based CBF is constructed using the optimization problem that finds the minimum uniform scaling factor of two convex objects under which they collide \[\min_{p,\alpha} \alpha \tag{3}\] \[\mathrm{subject\ to} p\in\mathcal{P}_{A}(\alpha,\psi_{A})\] \[p\in\mathcal{P}_{B}(\alpha,\psi_{B})\] \[\alpha>0\] where \(\alpha\in\mathbb{R}_{+}\) is the uniform scaling factor, \(p\in\mathbb{R}^{3}\) is the point of intersection after scaling the two convex objects with the scaling factor \(\alpha\), \(\psi_{A},\psi_{B}\in\mathrm{SE}(3)\) represents the position and orientation of \(A\) and \(B\), respectively, and \(\mathcal{P}_{A},\mathcal{P}_{B}:\mathbb{R}_{+}\times\mathrm{SE}(3)\rightrightarrows\mathbb{R}^{3}\) represent the two convex objects after applying a scaling factor of \(\alpha\). Define the solution to (3) as \(\alpha^{\star}\) and \(p^{\star}\). It can be seen that the two objects are not in collision when \(\alpha^{\star}>1\), while the objects are either touching or colliding when \(\alpha^{\star}\leq 1\). If \(A\) is the robot and \(B\) is the obstacle, then the corresponding CBF has the form of \[h(x,\psi_{B})=\alpha^{\star}(x,\psi_{B})-\beta \tag{4}\] where \(\beta\geq 1\). Note that since \(\psi_{A}\) is a function of the state of the robot \(x\), we write \(h(x,\psi_{B})\) instead of \(h(\psi_{A},\psi_{B})\).
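As an illustration of problem (3), the special case of two spheres can be posed directly in CVXPY; this sketch assumes spherical sets (the paper supports the general convex primitives of [19]), and the centers/radii below are example values:

```python
# Illustrative CVXPY model of the minimum-scaling problem (3) for two
# spheres A and B (a special case). alpha* > 1 means the objects are
# separated; alpha* <= 1 means they touch or collide.
import cvxpy as cp
import numpy as np

cA, RA = np.array([0.0, 0.0, 0.0]), 0.5   # assumed center/radius of A
cB, RB = np.array([3.0, 0.0, 0.0]), 1.5   # assumed center/radius of B

alpha = cp.Variable(nonneg=True)
p = cp.Variable(3)                        # common point after scaling
constraints = [cp.norm(p - cA) <= alpha * RA,   # p in scaled A
               cp.norm(p - cB) <= alpha * RB]   # p in scaled B
prob = cp.Problem(cp.Minimize(alpha), constraints)
prob.solve()
# For spheres, alpha* = ||cA - cB|| / (RA + RB); here 3 / 2 = 1.5.
print(alpha.value)
```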
## III Problem Formulation Let the robot dynamics be defined as (1). For obstacle avoidance tasks with a single obstacle, the safe set \(\mathcal{C}\) is defined as the complement set of the interior and surface of the obstacle. For dynamic obstacles, the safe set would be time-varying, which is denoted as \(\mathcal{C}(t)\). When there exist \(N\in\mathbb{Z}_{+}\) obstacles, the safe set at time \(t\) can be written as \[\mathcal{C}(t)=\bigcap_{i=1}^{N}\mathcal{C}_{i}(t) \tag{5}\] where \(C_{i}(t)\) represents the safe set with respect to (w.r.t.) the \(i\)-th obstacle at time \(t\). Let the robot state at \(t=0\) be \(x_{0}\in\mathcal{C}(0)\). This work aims to find a systematic approach that ensures the robot does not collide with any of the obstacles, i.e., \(x(t)\in\mathcal{C}(t)\) for \(t>0\). **Remark 1**.: _It is not always possible to find a control sequence that ensures the safety of the robot. For example, if the robot has a fixed base, no control sequence will be able to make the robot avoid an obstacle that hits the fixed base. Another example is when the robot is significantly less agile than the obstacle. **This work does not consider such cases; we assume there exists a control sequence within the robot's control authority that guarantees safety**._ ## IV Method In this section, we present the proposed method for performing dynamic obstacle avoidance using a differentiable optimization based time-varying CBF (TVCBF). Fig. 2: This figure illustrates the motion generated by the TVCBFQP controller. The robot we control is in blue. The obstacle is in orange. The light blue and orange dashed lines represent the path the robot and the obstacle traveled, respectively. The timestamp of each snapshot is given in the lower right corner of each figure. Each small grid is \(2~{}\mathrm{m}\times 2~{}\mathrm{m}\). ### _Motivating Example - Moving Circles_ To aid in the presentation of our method, a motivating example is provided, which will be used throughout Section IV. The example consists of a circular robot with radius \(R_{r}=0.5\ \mathrm{m}\) and a circular obstacle with radius \(R_{o}=1.5\ \mathrm{m}\). Define the position of the robot and the obstacle as \(p_{r}\in\mathbb{R}^{3}\) and \(p_{o}\in\mathbb{R}^{3}\), respectively. Set the initial position of the robot and the obstacle to \(p_{r}^{0}=(-5.0,-0.5,0.0)\) and \(p_{o}^{0}=(5.0,0.0,0.0)\), respectively. Define the velocity of the obstacle as \(v_{o}\in\mathbb{R}^{3}\), which is along the negative \(x\) direction. The dynamics of the robot is \[\dot{p}_{r}=\begin{bmatrix}\dot{p}_{r,x}\\ \dot{p}_{r,y}\\ \dot{p}_{r,z}\end{bmatrix}=\begin{bmatrix}1&0\\ 0&1\\ 0&0\end{bmatrix}\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}=u \tag{6}\] with \(u\in\mathbb{R}^{2}\) representing the agent's control command. The goal of the robot is to move along the positive \(x\) direction while avoiding collision with the obstacle. For this moving circles example, the constraints in (3) have the form of \[\begin{bmatrix}0\\ -r_{i}\end{bmatrix}-\begin{bmatrix}\mathbf{0}_{1\times 3}&-1\\ -\mathbf{I}_{3\times 3}&\mathbf{0}_{3\times 1}\end{bmatrix}\begin{bmatrix}p\\ \alpha\end{bmatrix}\in\mathcal{Q}_{4},\ i=\{A,B\} \tag{7}\] where \(r_{i}\in\mathbb{R}^{3}\) represents the position of object \(i\) and \(\mathcal{Q}_{4}\) represents the second-order cone. ### _Time-Varying Control Barrier Function_ In Section II, the safe set \(\mathcal{C}\) is assumed to be time-invariant. When the safe set is time-varying, TVCBFs would need to be used. To emphasize that TVCBFs are computed w.r.t. a time-varying safe set, they are written as \(h(x,\psi(t))\), where \(\psi(t)\) denotes the configuration of the safe set at time \(t\). The TVCBF constraint [20] has the form of \[\frac{\partial h}{\partial x}(x,\psi(t))\dot{x}+\frac{\partial h}{\partial t}(x,\psi(t))\geq-\gamma h(x,\psi(t)). \tag{8}\] with \(\gamma\in\mathbb{R}_{+}\). The component \(\partial h/\partial t\) in (8) is numerically computed using first order approximation as: \[\frac{\partial h}{\partial t}(x,\psi(t))\approx\frac{h(x,\psi(t+\Delta t))-h(x,\psi(t))}{\Delta t} \tag{9}\] with \(\Delta t\in\mathbb{R}_{+}\) being the approximation timestep. For dynamic obstacle avoidance tasks, \(\psi(t)\) represents the configuration of the obstacles. The value of \(\psi(t+\Delta t)\) can be estimated using a state estimator. This control pipeline is illustrated in Fig. 1. For robotics tasks, to apply the differentiable optimization based TVCBF, both the robot and the obstacle would need to be encapsulated by convex primitive shapes. Define the number of primitives assigned to the robot and obstacle as \(n_{r}\in\mathbb{N}\) and \(n_{o}\in\mathbb{N}\), respectively. Then, there will be \(n_{r}\times n_{o}\) TVCBF constraints. Adopting the CBFQP formulation, we write the time-varying CBFQP (TVCBFQP) as \[\min_{u\in\mathcal{U}} \|u-u_{\mathrm{ref}}\| \tag{10}\] \[\mathrm{subject\ to} \frac{\partial h_{ij}}{\partial x}\dot{x}+\frac{\partial h_{ij}}{\partial t}\geq-\gamma h_{ij}\] where \(i=1,\cdots,n_{r}\), \(j=1,\cdots,n_{o}\), \(u_{\mathrm{ref}}\in\mathbb{R}^{m}\) is the reference control action, \(\mathcal{U}\subset\mathbb{R}^{m}\) is the admissible set of controls, and \(h_{ij}\)'s dependency on \((x_{i},\psi_{j}(t))\) is omitted for brevity. To show the effectiveness of differentiable optimization based TVCBFQPs, we apply it to solving the Moving Circles example. When \(v_{o}=[-4.0,0.0,0.0]^{T}\) and \(u_{\mathrm{ref}}=[2,0]^{T}\), the result is shown in Fig. 2.
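A minimal simulation sketch of this controller is given below (an added illustration, not the authors' implementation); it assumes the closed-form scaling factor for two circles, \(\alpha^{\star}=\|p_{r}-p_{o}\|/(R_{r}+R_{o})\), and exploits the fact that with a single constraint the QP (10) reduces to a halfspace projection:

```python
# Illustrative TVCBFQP loop for the Moving Circles example (assumptions
# stated in the lead-in). dh/dt is approximated by the finite
# difference (9) using the predicted obstacle position.
import numpy as np

Rr, Ro, beta, gamma, dt = 0.5, 1.5, 1.03, 1.0, 0.01
p_r = np.array([-5.0, -0.5])
p_o, v_o = np.array([5.0, 0.0]), np.array([-4.0, 0.0])
u_ref = np.array([2.0, 0.0])

def h_of(p_robot, p_obs):
    return np.linalg.norm(p_robot - p_obs) / (Rr + Ro) - beta

min_dist = np.inf
for _ in range(500):
    d = p_r - p_o
    a = d / (np.linalg.norm(d) * (Rr + Ro))        # dh/dp_r
    h = h_of(p_r, p_o)
    dh_dt = (h_of(p_r, p_o + v_o * dt) - h) / dt   # finite difference (9)
    b = -gamma * h - dh_dt                         # enforce a @ u >= b
    u = u_ref if a @ u_ref >= b else u_ref + (b - a @ u_ref) * a / (a @ a)
    p_r, p_o = p_r + u * dt, p_o + v_o * dt
    min_dist = min(min_dist, np.linalg.norm(p_r - p_o))

print("minimum separation:", min_dist, "(collision below", Rr + Ro, ")")
```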
It may be noted that the robot avoids collision with the obstacle while continuing its movement along the positive \(x\) direction. **Remark 2**.: _For the Moving Circles example, if we replace the differentiable optimization based **TVCBF** with the differentiable optimization based **CBF** proposed in [14], it is observed that the robot will collide with the obstacle._ ### _Measurement Noise_ To implement our TVCBFQP controller, we need to predict the motion of the obstacles. Any sensor providing such measurements will be corrupted by measurement noise. If not taken into consideration, the measurement noise may lead to safety violations of the robotic system. We utilize an Extended Kalman Filter (EKF) to provide a probabilistic estimation of the state in the form of a Gaussian distribution \(\mathcal{N}(\mu,\Sigma)\) with \(\mu\in\mathbb{R}^{n_{m}}\), \(\Sigma\in\mathbb{R}^{n_{m}\times n_{m}}\), and \(n_{m}\in\mathbb{N}\) being the dimension of \(\mu\). Denote the Mahalanobis distance [21] \(d_{M}\) for a point \(y\) w.r.t. \(\mathcal{N}(\mu,\Sigma)\) as \[d_{M}(y,\mu,\Sigma)=\sqrt{(y-\mu)^{T}\Sigma^{-1}(y-\mu)}\in\mathbb{R}_{+} \tag{11}\] where \(y\in\mathbb{R}^{n_{m}}\) and \(d_{M}:\mathbb{R}^{n_{m}}\times\mathbb{R}^{n_{m}}\times\mathbb{R}^{n_{m}\times n_{m}}\rightarrow\mathbb{R}_{+}\). Denote the probability of sampling a random variable from \(\mathcal{N}\) that has a Mahalanobis distance smaller or equal to \(k\in\mathbb{R}_{+}\) as \(\mathrm{Prob}_{k}\in[0,1]\) and the part of the state space of \(\mathbb{R}^{n_{m}}\) that has a Mahalanobis distance smaller or equal to \(k\) as \(\mathcal{S}_{k}\). **Theorem 1**.: _For the CBF defined in (4), let_ \[\tilde{\psi}=\underset{\psi}{\mathrm{argmin}} h(x,\psi) \tag{12}\] \[\mathrm{subject\ to} \psi\in\mathcal{S}_{k}.\] _Then, the solution to the TVCBFQP with \(h(x,\tilde{\psi})\) in the TVCBF constraint also guarantees safety for all \(\psi\in\mathcal{S}_{k}\)._ Proof.: By the definition of \(\tilde{\psi}\) in (12), \(\alpha(x,\tilde{\psi})\leq\alpha(x,\psi)\), \(\forall\psi\in\mathcal{S}_{k}\). The solution of the TVCBFQP using \(h(x,\tilde{\psi})\) ensures that \(\alpha(x,\tilde{\psi})\geq\beta\), which ensures \(\alpha(x,\psi)\geq\beta\). Thus, safety is guaranteed \(\forall\psi\in\mathcal{S}_{k}\). Thus, using the obstacle configuration \(\tilde{\psi}\) guarantees safety with probability \(\mathrm{Prob}_{k}\). Note that this probability is only a lower bound since many obstacle configurations with a Mahalanobis distance greater than \(k\) will have a larger \(\alpha\) value than \(\alpha(x,\tilde{\psi})\). Since the optimization problem in (12) is computationally expensive to solve, we provide a heuristic-based approach to estimate \(\tilde{\psi}\). Fig. 3: This figure illustrates the motion generated by the TVCBFQP controller under sensor noise. The top two rows show motion snapshots generated by the controller that considers sensor noise. For the motion snapshots, the color scheme follows Fig. 2 with the only difference being the light orange dashed lines representing the estimated obstacle trajectory. The bottom plot compares the CBF value for two controllers, one considering sensor noise using the proposed method and the other not considering sensor noise. The bottom plot only shows from \([1.5,2.5]\) s since the CBF values are positive for the rest of the time.
**Assumption 1**.: _We assume the change in obstacle orientation is much slower than the change in obstacle position._ **Remark 3**.: _Assumption 1 holds in many robotic applications, such as self-driving cars and robotic manipulation. Otherwise, we can use a bounding shape that encapsulates the region covered when the obstacle is rotating in place._ Define the surface of the ellipsoid that contains points with a Mahalanobis distance of \(k\) as \(K_{M}\) and the position distribution and quaternion distribution as \(\mathcal{N}(\mu_{p},\Sigma_{p})\) and \(\mathcal{N}(\mu_{q},\Sigma_{q})\), respectively. Define \(h_{r}\) as \[h_{r}=\frac{\partial h}{\partial r}\Big{/}\Big{\|}\frac{\partial h}{\partial r}\Big{\|}=\begin{bmatrix}h_{r,x}\\ h_{r,y}\\ h_{r,z}\end{bmatrix}\in\mathbb{R}^{3} \tag{13}\] with \(r\in\mathbb{R}^{3}\) representing the positional terms. For a line that starts from \(\mu_{p}\) and goes along the direction of \(h_{r}\), we define its intersection with \(K_{M}\) as \(p_{d}\) \[p_{d}=\mu_{p}+\frac{kh_{r}}{\sqrt{h_{r}^{T}\Sigma_{p}^{-1}h_{r}}}\in\mathbb{R}^{3}. \tag{14}\] Then, we can write the TVCBF constraint as \[\frac{\partial h}{\partial x}(x,\psi_{d}(t))\dot{x}+\frac{\partial h}{\partial t}(x,\psi_{d}(t))\geq-\gamma h(x,\psi_{d}(t)) \tag{15}\] where \(\psi_{d}(t)=(p_{d}(t),\mu_{q}(t))\). To show the usefulness of this approach in handling measurement noise, we use the Moving Circles example in Section IV-B, but with an additive Gaussian noise \(\mathcal{N}(0,0.5)\) applied to the position measurement on each dimension. Note that this level of noise is much larger than what is experienced in real robotic experiments. From Fig. 3, it can be seen that the robot maintains safety using the proposed method to consider sensor noise and is unsafe when measurement noise is not considered. ### _Actuation Limits_ Robotic systems cannot realize arbitrarily large joint velocity commands and have a limited tracking bandwidth. Thus, if the TVCBFQP sends large joint velocity commands or requires the joint velocity to change rapidly, safety cannot be guaranteed. Under the assumption in Remark 1, both large and rapidly changing joint velocity commands are different consequences of the same issue: the robot did not act early enough. To tackle the aforementioned issue, one can use MPC to plan a path w.r.t. a preview horizon. However, embedding our TVCBF constraint in an MPC will create a nonlinear-bilevel-optimization problem that cannot be solved in real-time. Alternatively, we propose a simple solution that inflates the scaling factor \(\alpha\) based on the relative velocity of the robot (or robot segment) and the obstacle. Define the robot velocity as \(v_{r}=\dot{p}_{r}\). We can compute the velocity of the obstacle in a frame that is aligned with the inertial frame but attached to the robot (or robot segment), which is defined as \(\bar{v}_{o}=v_{o}-v_{r}\). By projecting \(\bar{v}_{o}\) onto the vector that points from the obstacle to the robot, we have \[a_{v}=\bar{v}_{o}^{T}(p_{r}-p_{o})=(v_{o}-v_{r})^{T}(p_{r}-p_{o})\in\mathbb{R}. \tag{16}\] Then, on the right-hand-side of the TVCBFQP constraint, instead of using (4) as the TVCBF, we use \[h(x,\psi_{B})=(1+b)a_{v}\alpha^{*}(x,\psi_{B})-\beta \tag{17}\] where \(b\in\mathbb{R}_{+}\) is a hyperparameter that can be tuned to make the system more (make \(b\) larger) or less (make \(b\) smaller) conservative. Similar to the previous sections, we demonstrate this approach using the Moving Circles example. We use the same problem setting as in Section IV-B but limit the robot's velocity along the \(x\) and \(y\) axes to be between \([-1,1]\ \mathrm{m}/\mathrm{s}\) and set \(b=1\). The motion generated is shown in Fig. 4. It can be seen that the proposed scaling factor inflation scheme helps ensure the safety of the robot. Fig. 4: This figure illustrates the motion generated by the TVCBFQP controller under actuation limits. The top two rows show snapshots of the motion generated when actuation limits are considered. The color scheme follows Fig. 2. The bottom plots compare the CBF value for two controllers, one considering actuation limits using the proposed method in Section IV-D, and the other not considering actuation limits. The bottom right plot zooms in on the bottom left plot between time \([2.4,2.6]\)s to show that the controller not considering actuation limits leads to unsafe behavior.
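A compact sketch of how (14), (16), and (17) might be evaluated in code follows; this is an added illustration with assumed names, and the closed-form \(\alpha^{\star}\) for spheres is an assumption made only for this sketch:

```python
# Illustrative evaluation of the noise-robust obstacle position (14)
# and the velocity-inflated barrier (17), as printed in the text.
import numpy as np

def p_noise_robust(mu_p, Sigma_p, h_r, k=3.0):
    # Eq. (14): intersection of the line mu_p + t * h_r with the
    # Mahalanobis-k ellipsoid; h_r is the normalized position gradient.
    return mu_p + k * h_r / np.sqrt(h_r @ np.linalg.inv(Sigma_p) @ h_r)

def h_inflated(p_r, v_r, p_o, v_o, R_r, R_o, beta=1.03, b=1.0):
    alpha_star = np.linalg.norm(p_r - p_o) / (R_r + R_o)  # spheres only
    a_v = (v_o - v_r) @ (p_r - p_o)                       # eq. (16)
    return (1 + b) * a_v * alpha_star - beta              # eq. (17)

mu_p = np.array([1.0, 0.0, 0.0])
Sigma_p = 0.01 * np.eye(3)
h_r = np.array([-1.0, 0.0, 0.0])    # unit gradient toward the robot
print(p_noise_robust(mu_p, Sigma_p, h_r))
print(h_inflated(np.zeros(3), np.zeros(3), mu_p,
                 np.array([-1.0, 0.0, 0.0]), 0.1, 0.2))
```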
We use the same problem setting as in Section IV-B but limit the robot's velocity along the \(x\) and \(y\) axis to be between \([-1,1]\ \mathrm{m}/\mathrm{s}\) and set \(b=1\). The motion generated is shown in Fig. 4. It can be seen that the proposed scaling factor inflation scheme helps ensure the safety of the robot. ## V Experiments In this section, we show the simulation and experimental results of utilizing our proposed method on robotic systems. First, we compare our approach with the approach proposed in [17] on dynamic rectangular obstacle avoidance tasks. Then, we show the efficacy of our approach on the FR3 robot both in simulation and in real life. All experiments Fig. 4: This figure illustrates the motion generated by the TVCBFQP controller under actuation limits. The top two rows show snapshots of the motion generated when actuation limits are considered. The color scheme follows Fig. 2. The bottom plots compare the CBF value for two controllers, one considering actuation limits using the proposed method in Section IV-D, and the other not considering actuation limits. The bottom right plot zooms in on the bottom left plot between time \([2.4,2.6]\)s to show that the controller not considering actuation limits leads to unsafe behavior. are performed on a PC with 32GB of RAM and an Intel Core i7 11700 processor. The optimization problems are solved using ProxSuite[22] and Pinocchio[23] is used to compute the kinematics and dynamics terms. The two hyperparameters in our TVCBFQP controller are \(\gamma\) and \(\beta\). In general, the larger the value of \(\gamma\) and \(\beta\), the more conservative the TVCBFQP controller becomes. In all our experiments, we set \(\gamma=1.0\) and \(\beta=1.03\). ### _Dynamic Rectangular Obstacle Avoidance_ The previous Moving Circles example models both the robot and the obstacle as circles, which can also be sufficiently dealt with using other methods, e.g., [17]. In this section, we replace the circular obstacle with a rectangular obstacle to show the effectiveness of our approach in handling non-ellipsoid-like obstacles. The rectangular obstacle has size \(3\ \mathrm{m}\times 0.4\ \mathrm{m}\times 2\ \mathrm{m}\). Since the motion is limited to the \(xy\) plane, the size of the obstacle along the \(z\) axis does not affect the generated motion. For the rectangular obstacle, the constraint in (3) has the form of \[\mathbf{AR}^{T}r-\begin{bmatrix}\mathbf{AR}^{T}&-\mathbf{b}\end{bmatrix} \begin{bmatrix}p\\ \alpha\end{bmatrix}\in\mathbb{R}_{+} \tag{18}\] where \(\mathbf{A}\in\mathbb{R}^{6\times 3}\) and \(\mathbf{b}\in\mathbb{R}^{6}\) represents the halfspace constraints and \(\mathbf{R}\in\mathrm{SO}(3)\) represents the orientation of the rectangular obstacle using a rotation matrix. The robot is modeled as a sphere with a radius \(0.5\ \mathrm{m}\). The obstacle starts from position \([5.0,0.0,0.0]^{T}\ \mathrm{m}\) and moves with a fixed velocity of \([-4.0,0.0,0.0]^{T}\ \mathrm{m}/\mathrm{s}\). The robot's task is to start from position \((-5.0,-0.5,0.0)\ \mathrm{m}\) and reach the target point located at \((20.0,-0.5,0.0)\ \mathrm{m}\) while avoiding collision with the rectangular obstacle. The robot dynamics is the same as (6). The robot velocity along the \(x\) and \(y\) axis are both limited to be between \([-1,1]\ \mathrm{m}/\mathrm{s}\). The performance controller is a proportional controller with \(K_{p}=2.0\). For (17), \(b=10^{-3}\). No measurement noise is considered to align our problem setup with [17]. 
For our implementation of [17], the time horizon is \(T=1.5\ \mathrm{s}\), the sample time \(\Delta t=50\ \mathrm{ms}\), \(d_{\mathrm{risk}}=d_{\mathrm{obs}}=1.5\ \mathrm{m}\), \(w_{\mathrm{target}}=0.1\), \(w_{\mathrm{effort}}=0.1\), and \(w_{\mathrm{avoid}}=10.0\). The signed distances are computed using hpp-fcl[24]. The motions generated by our method and [17] are shown in Fig. 5. We see the motion generated by [17] is more conservative than using our method. This is due to [17] modeling both the obstacle and the robot as spheres, which is different from the actual geometry and causes the generated motion to be conservative. Although \(d_{\mathrm{obs}}\) can be tuned to generate less aggressive avoidance maneuvers, this does not fundamentally solve the problem of only being able to model spherical geometries. Thus, our method's ability to model geometries using a variety of primitive shapes [19] makes it applicable to a wider range of tasks. On average, the computation time for our proposed method is 0.29 ms, while it is 2.9 ms for the MPC-based method. Since the time complexity of linear MPCs is \(O(n^{3})\)[25], the computation time may get prohibitively large for higher dimensional systems. ### _FR3 Robotic Manipulator_ In this section, we apply our proposed method to dynamic obstacle avoidance tasks on a 7-DOF FR3 robot manipulator. The experimental setup is shown in Fig. 6. For joint velocity control, the system dynamics have the form of an integrator system, i.e., \(\dot{x}=u\). To utilize our method, we encapsulate the robot links with capsules and the end-effector with a sphere. The controller implementation requires prediction of the obstacles motion at current and the next sampling time. The position and orientation of the obstacles are measured using an array of 10 Vicon Valkyrie motion capture cameras. Then, the future position and orientation of the obstacles at the next time step are predicted using a quaternion-based EKF (Q-EKF) with a constant velocity model (as outlined in Section IV-C). The control loop runs at 100 Hz. #### Iv-B1 Avoiding a Moving Board The task is for the robotic manipulator to avoid one moving board with size \(0.9\ \mathrm{m}\times 0.6\ \mathrm{m}\times 0.02\ \mathrm{m}\). The board is modeled as a polygon and uses the same constraint as (18). The board approaches the FR3 robot in a variety of configurations. Define a nominal joint configuration \(\bar{q}\in\mathbb{R}^{7}\). The performance controller is a proportional-derivative (PD) controller that makes the FR3 robot return to \(\bar{q}\) \[u_{\mathrm{ref}}=K_{p}(\bar{q}-q)-K_{d}\dot{q} \tag{19}\] with \(K_{p},K_{d}\in\mathbb{R}^{7\times 7}\) being diagonal matrices with the PD gains on their diagonal. The safe control is computed using (10). The TVCBF constraints are computed between the links and the board. The TVCBF that considers both Fig. 5: This figure illustrates the motion generated by our proposed controller and the controller in [17] on the moving rectangle task described in Section V-A. The color scheme follows Fig. 2. Comparing the figures in the 3rd - 6th columns shows that the MPC-based method (top row) generates a much larger collision avoidance maneuver compared to our proposed approach (bottom row). sensor noise and actuation limits has the form of \[h(x,\psi_{d}(t))=(1+b)a_{v}\alpha^{\star}(x,\psi_{d}(t))-\beta \tag{20}\] with \(\psi_{d}(t)\) defined as in Section IV-C, \(a_{v}\) defined as in Section IV-D, \(k=3.0\), and \(b=2.5\). 
Snapshots of the generated motion are given in Fig. 6, the computed CBF values in Fig. 7, and the corresponding minimum distances in Fig. 8. We can see that as the board moves toward the robot with varying positions, orientations, and velocities, the proposed differentiable optimization based TVCBFQP controller generates an avoidance maneuver that maintains a positive CBF value. On average, our proposed method takes 0.41 \(\mathrm{ms}\) to compute the safe control. #### Iv-A2 Avoiding Two Moving Boxes The task is to avoid two moving boxes with size \(0.23~{}\mathrm{m}\times 0.25~{}\mathrm{m}\times 0.21~{}\mathrm{m}\). The performance controller is in the same form as (19). The Fig. 8: Minimum distance between the board/boxes and the body of the robot. The black dashed line represents zero distance. Fig. 6: Snapshots of the motion generated by the TVCBFQP of the FR3 robot avoiding a moving board and two boxes. The timestamp of the snapshots increases from left to right. (A) The moving board approaches the FR3 robot manipulator from the top. The FR3 robot retracts away from the board to avoid collision. (B) The two boxes approach the FR3 robot manipulator from the bottom two sides. Since its end-effector is closer to the boxes, the FR3 robot moves the end-effector upward and away to avoid collision with the boxes. Besides these snapshots, more scenarios of the obstacles approaching the robot can be found at [https://youtu.be/iMMWf9vqgtPU](https://youtu.be/iMMWf9vqgtPU). Fig. 7: CBF values for the robot experiments. The black dashed line represents zero CBF value. Since the CBF curves are above the dashed lines, it shows the robot avoided collision with the board/boxes. We did not include links 1 and 2 since they did not move relative to the base of the robot. TVCBF is in the form of (20) with \(k=3.0\) and \(b=1.0\). Snapshots of the generated motion are given in Fig. 6, the CBF values w.r.t. the two boxes are shown in Fig. 7, and the corresponding minimum distances are shown in Fig. 8. It can be seen that the proposed TVCBFQP controller is able to maintain safety w.r.t. the two moving boxes. On average, our proposed method takes 3.7 ms to compute the safe control. **Remark 4**.: _We also tested obtaining the position and orientation using April tags [26], which yielded similar performances on the experiments in Section V-B1 and Section V-B2. The main benefit of the Vicon system compared to April tags is that it enables more dynamic motion of the obstacles since the April tags would occasionally get occluded._ ## VI Conclusion In this paper, we extended the usage of differentiable optimization based CBFs to time-varying safe sets and applied it to dynamic obstacle avoidance tasks. We proposed modifications to the TVCBFQP that make its performance robust under sensor noise and actuation limits. In a simulation environment, we have compared our method with an MPC-based state-of-the-art method for dynamic obstacle avoidance. Given our method's ability to handle a wide range of geometric shapes when the obstacle geometry is significantly different from an ellipsoid, it was shown that our proposed method generates less conservative motions. The efficacy of our approach is also tested on a 7-DOF FR3 robotic manipulator in real life for two dynamic obstacle avoidance tasks. Besides the experimental results shown in Section V more scenarios of the obstacles approaching the robot can be found at [https://youtu.be/iMmWIgvqgTU](https://youtu.be/iMmWIgvqgTU). 
In the future, we plan to characterize the types of obstacle motion our method is able to avoid and to explore its integration with MPC-based approaches. ## VII Acknowledgement We thank Pranay Gupta for his assistance with the experiments.
2309.16886
Two-body Coulomb problem and hidden $g^{(2)}$ algebra: superintegrability and cubic polynomial algebra
It is shown that the two-body Coulomb problem in the Sturm representation leads to a new two-dimensional, exactly-solvable, superintegrable quantum system in curved space with a $g^{(2)}$ hidden algebra and a cubic polynomial algebra of integrals. The two integrals are of orders two and four, they are made from two components of the angular momentum and from the modified Laplace-Runge-Lenz vector, respectively. It is demonstrated that the cubic polynomial algebra is an infinite-dimensional subalgebra of the universal enveloping algebra $U_{g^{(2)}}$.
Alexander V. Turbiner, Adrian M. Escobar-Ruiz
2023-09-28T22:47:18Z
http://arxiv.org/abs/2309.16886v2
# Two-body Coulomb problem and hidden \(g^{(2)}\) algebra: superintegrability and cubic polynomial algebra ###### Abstract It is shown that the two-body Coulomb problem in the Sturm representation leads to a new two-dimensional, exactly-solvable, superintegrable quantum system in curved space with a \(g^{(2)}\) hidden algebra and a cubic polynomial algebra of integrals. The two integrals are of orders two and four; they are made from two components of the angular momentum and from the modified Laplace-Runge-Lenz vector, respectively. It is demonstrated that the cubic polynomial algebra is an infinite-dimensional subalgebra of the universal enveloping algebra \(U_{g^{(2)}}\). _Keywords_: Superintegrability, Lie hidden algebra, integrals of motion, polynomial algebra of integrals. ## 1 Introduction In the talk given at QTS12 by the second author (A.M.E.R.), based partly on [1], the two-body Coulomb problem in the three-dimensional space of relative motion is considered, where the Hamiltonian is of the form \[\hat{\cal H}\ =\ \frac{1}{2\,\mu}\,\hat{\bf p}^{2}\ -\ \frac{\alpha}{r}\, \tag{1}\] here \(\alpha>0\) for the case of charges of opposite signs, \(\hat{\bf p}=-i\,\hbar\,\nabla\) is the momentum operator, \(r=\sqrt{x^{2}+y^{2}+z^{2}}\); for simplicity, a unit reduced mass \(\mu=1\) and \(\hbar=1\) are assumed from now on. It is well-known that the angular momentum \[\hat{\bf L}\ =\ {\bf r}\times\hat{\bf p}\, \tag{2}\] and the Laplace-Runge-Lenz vector \[\hat{\bf A}\ =\ \frac{1}{2}(\,\hat{\bf p}\times\hat{\bf L}\,-\,\hat{\bf L}\times\hat{\bf p}\,)\ -\ \frac{\alpha}{r}\,{\bf r}\, \tag{3}\] are integrals of motion: they commute with the Hamiltonian (1), \[[\hat{\bf L},\,\hat{\cal H}]\ =\ [\hat{\bf A},\,\hat{\cal H}]\ =\ 0\.\] Hence, the system (1) is maximally superintegrable. It is easy to verify the existence of the well-known relation \[\hat{\bf A}^{2}\ =\ \alpha^{2}\ +\ 2\,\hat{\cal H}\,(\,\hat{\bf L}^{2}\ +\ 1\,)\, \tag{4}\] and constraint \[\hat{\bf L}\,\cdot\,\hat{\bf A}\ =\ \hat{\bf A}\,\cdot\,\hat{\bf L}\ =\ 0. \tag{5}\] The integrals \(\hat{\bf L}\), \(\hat{\bf A}\) obey the following commutation relations: \[[\hat{L}_{i}\,,\,\hat{L}_{j}]\ =\ i\,\epsilon_{ijk}\,\hat{L}_{k}\quad,\quad[\hat{A}_{i}\,,\,\hat{L}_{j}]\ =\ i\,\epsilon_{ijk}\,\hat{A}_{k}\quad,\quad[\hat{A}_{i}\,,\,\hat{A}_{j}]\ =\ -2\,i\,\epsilon_{ijk}\,\hat{L}_{k}\,\hat{\cal H}\, \tag{6}\] where \((i,j,k=x,y,z)\); due to the presence of the quadratic term \((\hat{L}_{k}\,\hat{\cal H})\) in the rhs of the third commutation relation, they do not form a Lie algebra. Instead of solving the Schrodinger equation \[\hat{\cal H}\,\Psi\ =\ E\,\Psi\,\] we introduce the operator \[r\,(\hat{\cal H}\ -\ E)\ =\ \left(-\frac{r}{2}\Delta^{(3)}\,-\,E\,r\,\right)\ -\ \alpha\ \equiv(\hat{K}-\alpha)\, \tag{7}\] and look for its zero modes, here \(\Delta^{(3)}\) is the three-dimensional Laplacian. Evidently, this is equivalent to the study of the spectrum of the operator \(\hat{K}\), \[\hat{K}\,\Psi\ =\ \alpha\,\Psi\,\quad\hat{K}=\left(-\frac{r}{2}\Delta^{(3)}\,-\,E\,r\,\right)\, \tag{8}\] see e.g. [1], where \(\alpha\) is the spectral parameter, while the energy \(E\) (the original spectral parameter) is considered fixed. This procedure resembles the introduction of the so-called Sturm representation for the Hydrogen atom in quantum mechanics.
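As an added illustration (not part of the original text), the classical counterpart of the statement \([\hat{\bf A},\hat{\cal H}]=0\) can be cross-checked symbolically with Poisson brackets; the variable names below are assumptions of this sketch:

```python
# Symbolic check: the classical Laplace-Runge-Lenz vector
# A = p x L - alpha r/|r| Poisson-commutes with H = p^2/2 - alpha/|r|.
import sympy as sp

x, y, z, px, py, pz, alpha = sp.symbols('x y z p_x p_y p_z alpha')
q, p = [x, y, z], [px, py, pz]
r = sp.sqrt(x**2 + y**2 + z**2)
H = (px**2 + py**2 + pz**2) / 2 - alpha / r

def pb(f, g):  # canonical Poisson bracket {f, g}
    return sum(sp.diff(f, qi) * sp.diff(g, pi)
               - sp.diff(f, pi) * sp.diff(g, qi)
               for qi, pi in zip(q, p))

L = sp.Matrix(q).cross(sp.Matrix(p))            # L = r x p
A = sp.Matrix(p).cross(L) - alpha * sp.Matrix(q) / r
assert all(sp.simplify(pb(Ai, H)) == 0 for Ai in A)
print("classical {A_i, H} = 0 verified")
```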
Note that in spherical coordinates, after separation of angular variables in \(\hat{K}\), the resulting radial operator in the variable \(w=\sqrt{r}\) corresponds to the radial Schrodinger operator of a four-dimensional harmonic oscillator with frequency \(\omega\), \(\omega^{2}=-E\). By using the _coupling constant metamorphosis_ formalism introduced in [2] and further established by Kalnins-Miller-Post [3], it can be checked that for the operator \(\hat{K}\), see (8), the angular momentum \(\hat{\bf L}\) (2) still commutes with it, but the Laplace-Runge-Lenz vector (3) should be _modified_ to \[\hat{\bf B}\ =\ \frac{1}{2}(\,\hat{\bf p}\times\hat{\bf L}\,-\,\hat{\bf L}\times\hat{\bf p}\,)\ -\ \frac{{\bf r}}{r}\,\hat{K}\, \tag{9}\] i.e., (3) with the coupling \(\alpha\) replaced by \(\hat{K}\), in order to remain an integral of motion; this is consistent with (10)-(12) below. Hence, these \(\hat{\bf L}\) and \(\hat{\bf B}\) are the integrals of motion for \(\hat{K}\): \([\hat{\bf L},\,\hat{K}]=[\hat{\bf B},\,\hat{K}]=0\). It can also be verified that \[\hat{\bf B}^{2}\ =\ \hat{K}^{2}\ +\ 2\,E\,(\,\hat{\bf L}^{2}\ +\ 1\,) \tag{10}\] cf. (4), and \[\hat{\bf L}\,\cdot\,\hat{\bf B}\ =\ \hat{\bf B}\,\cdot\,\hat{\bf L}\ =\ 0. \tag{11}\] Thus, the operator \(\hat{K}\) is _maximally superintegrable_. Let us note that the above integrals obey the commutation relations: \[[\hat{B}_{i}\,,\,\hat{L}_{j}]\ =\ i\,\epsilon_{ijk}\,\hat{B}_{k}\quad,\quad[\hat{B}_{i}\,,\,\hat{B}_{j}]\ =\ -2\,i\,\epsilon_{ijk}\,\hat{L}_{k}\,E\, \tag{12}\] By adding the commutation relation \([\hat{L}_{i}\,,\,\hat{L}_{j}]\ =\ i\,\epsilon_{ijk}\,\hat{L}_{k}\), see (6), one finds that for \(E<0\) the operators \(\hat{\bf L},\hat{\bf B}\) span the Lie algebra \(so(4)\). ## 2 Two-dimensional problem Let us consider the expression \[\Gamma\ =\ \rho^{|m|}\,e^{-\beta\,r}\,z^{p}\,e^{i\,m\,\varphi}\, \tag{13}\] as a gauge factor, where \(m=0,\pm 1,\pm 2,\ldots\) is the magnetic quantum number, \(p=0,1\) defines parity in the \(z\)-direction, \(\beta\equiv\sqrt{-2\,E}\), \(\rho=\sqrt{x^{2}+y^{2}}\), \(\tan(\varphi)=y/x\). We introduce the gauge rotated operator \[\hat{h}\ \equiv\ \Gamma^{-1}\,\hat{K}\,\Gamma\, \tag{14}\] with \(\hat{K}\) defined in (8). Thus, the eigenfunctions of \(\hat{K}\) are of the form \(\Psi(r,\rho,\varphi)=\Gamma\,\psi(r,\,\rho)\). In this case, the variable \(\varphi\) is easily separated out and the operator \(\hat{h}\) has \(\psi(r,\,\rho)\) as the eigenfunctions. After straightforward calculations we arrive at \[\hat{h}(r,\,\rho) = -\,\frac{1}{2}\,r\,\partial_{r}^{2}\ -\ \frac{1}{2}\,r\,\partial_{\rho}^{2}\ -\ \rho\,\partial_{r,\rho}^{2}\ -\ \frac{(1+2|m|)\,r\,-\,2\,\beta\,\rho^{2}}{2\,\rho}\,\partial_{\rho} \tag{15}\] \[-(1+p+|m|\,-\,\beta\,r)\,\partial_{r}\ +\ \beta\,(1+p+|m|)\.\] In the variables \((r,\,u\equiv\rho^{2})\), the operator \(\hat{h}\) (15) takes the form of an algebraic operator with polynomial coefficients [1], \[\hat{h}_{a}(r,u) = -\,\frac{1}{2}\,r\,\partial_{r}^{2}\ -\ 2\,r\,u\,\partial_{u}^{2}\ -\ 2\,u\,\partial_{r,u}^{2}\ -\ 2\,[(1+|m|)\,r\,-\,\beta\,u]\,\partial_{u} \tag{16}\] \[-\,(1+p+|m|\,-\,\beta\,r)\,\partial_{r}\ +\ \beta\,(1+p+|m|)\,\] and the eigenvalue problem (8) becomes \[\hat{h}_{a}\,P\ =\ \alpha\,P\, \tag{17}\] here the domain is defined as \(D=(r\geq 0,r\geq\rho\geq 0)\). Its spectrum can be easily found: \[\alpha_{n}=\ \beta(n\ +\ 1+p+|m|)\,\] see [1]. It is evident that the combination of integrals \[L_{a}\equiv\hat{L}_{x}^{2}\,+\,\hat{L}_{y}^{2}\qquad\hbox{and}\qquad B_{a}\equiv\hat{B}_{x}^{2}\,+\,\hat{B}_{y}^{2}\, \tag{18}\] commutes with \(\hat{L}_{z}\).
It is also evident that the gauge-rotated two-dimensional operators \[\hat{l}_{a}\ \equiv\ \Gamma^{-1}\,(\hat{L}_{x}^{2}\,+\,\hat{L}_{y}^{2})\,\Gamma \qquad\hbox{and}\qquad\hat{b}_{a}\ \equiv\ \Gamma^{-1}\,(\hat{B}_{x}^{2}\,+\,\hat{B}_{y}^{2})\,\Gamma \tag{19}\] commute with \(\hat{h}_{a}\) (16): \[[\hat{h}_{a},\,\hat{b}_{a}]=0\quad,\quad[\hat{h}_{a},\,\hat{l}_{a}]=0\.\] In explicit form, \[\hat{l}_{a}\ =\ 2\,u\,(r^{2}\,-\,u)\,\partial_{u}^{2}\ -\ (\,u\,(1+2\,p)\ -\ 2\,(1+|m|)(r^{2}\,-\,u)\,)\, \partial_{u}\, \tag{20}\] is a second order differential operator, while \[2\,\hat{b}_{a}\,=\,{{u\over 4}}\,\partial_{r}^{4}\,+\,4\,u^{3}\, \partial_{u}^{4}\,+\,2\,u\,(2r^{2}+u)\,\partial_{u}^{2}\partial_{r}^{2}\,+\,2 \,r\,u\,\partial_{r}^{3}\,\partial_{u}\,+\,8\,r\,u^{2}\,\partial_{u}^{3}\, \partial_{r}+((m+1)r-\beta\,u)\,\partial_{r}^{3}\] \[\qquad\qquad\qquad+\ 8u^{2}(m+p+3-\beta r)\,\partial_{u}^{3}\ +\ \left(2u(m+p-3\beta r+3)+4(m+1)r^{2}\right)\partial_{r}^{2}\, \partial_{u}\] \[+\,4\,u\left(\,r(3m+2p-2\beta r+7)-\beta u\right)\partial_{u}^{2}\, \partial_{r}\ +\ \left((m+1)(m+p+1)-3\beta(m+1)r+\beta^{2}u\right)\partial_{r}^{2}\] \[+\,\left(4\,r\left((m+1)(m+2p+3)+\beta^{2}u\right)-4\beta u(m+p+3) -8\beta(m+1)r^{2}\right)\partial_{r}\,\partial_{u}\ +\] \[4\,u\left(m^{2}+3m(p-\beta r+2)+p(7-2\beta r)+\beta r(\beta r-7)+7 \right)\partial_{u}^{2}\,-\,2\beta(m+1)(m+p+1-\beta\,r)\,\partial_{r}\] \[+\,\left(\,4(m+1)(p+1)(m+p+1)-4\beta(m+1)r(m+2p+3)+2\beta^{2} \left(2(m+1)r^{2}+u\right)\,\right)\partial_{u}\, \tag{21}\] is a fourth order differential operator. Hence, the operator \(\hat{h}_{a}\) (16) is maximally superintegrable. ### \(2d\) Schrodinger operator By using the gauge factor \[\Gamma_{a}\ \equiv\ e^{\beta\,r}\,(r^{2}-u)^{-\frac{p}{2}}\,u^{-\frac{1+2|m|}{4 }}\, \tag{22}\] the algebraic operator \(\hat{h}_{a}\) (16) can be transformed into a two-dimensional Schrodinger operator \[{\cal H}_{\rm LB}^{(2)}\ \equiv\ \Gamma_{a}^{-1}\ \hat{h}_{a}\ \Gamma_{a}\ =\ -\Delta_{\rm LB}^{(2)}\ +\ V_{\rm eff}(r,u)\, \tag{23}\] where \(\Delta_{\rm LB}^{(2)}\) is the two-dimensional Laplace-Beltrami operator, \[\Delta_{\rm LB}^{(2)}=\frac{1}{\sqrt{g}}\,\partial_{\mu}\sqrt{g}\,g^{\mu\,\nu} \,\partial_{\nu}\,\quad\mu,\nu=r,u\,\] where \(g=\det[g_{\mu\,\nu}]\) is the determinant of metric, with cometric \[g^{\mu\,\nu}\ =\ \left|\ \begin{array}{cc}\frac{1}{2}\,r&u\\ u&2\,r\,u\end{array}\right|\, \tag{24}\] whereas the effective potential is given by \[V_{\rm eff}\ =\ \frac{\left(4|m|^{2}-1\right)\,r}{8\,u}\ +\ \frac{\beta^{2}}{2}\,r. \tag{25}\] The determinant of \(g^{\mu\,\nu}\) reads \(\det[g^{\mu\,\nu}]=u\,(r^{2}-u)\), and the scalar curvature is \(R=\frac{r\,(4u-1)}{2\,u\,(r^{2}-u)^{2}}\). Finally, we arrive at a new form of the Coulomb problem, \[{\cal H}_{\rm LB}^{(2)}\ =\ -\Delta_{\rm LB}^{(2)}\ +\ \frac{\left(4|m|^{2}-1 \right)\,r}{8\,u}\ +\ \frac{\beta^{2}}{2}\,r. \tag{26}\] This defines an exactly-solvable, maximally-superintegrable problem, see below. 
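The geometric data in (24)-(26) are straightforward to cross-check with a computer algebra system. The sketch below (sympy; ours, assuming only the cometric (24)) recomputes \(\det[g^{\mu\nu}]\) and the scalar curvature from scratch; the determinant indeed factors as \(u(r^{2}-u)\), and the printed curvature can be compared with the expression quoted above.

```python
import sympy as sp

r, u = sp.symbols('r u', positive=True)
x = [r, u]
n = 2

g_inv = sp.Matrix([[r/2, u], [u, 2*r*u]])     # cometric (24)
g = g_inv.inv()                                # metric

print(sp.factor(g_inv.det()))                  # u*(r**2 - u), as stated

# Christoffel symbols Gamma^a_{bc} of the metric g
Gamma = [[[sp.simplify(sum(g_inv[a, d] * (sp.diff(g[d, b], x[c])
                                          + sp.diff(g[d, c], x[b])
                                          - sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

def riemann(a, b, c, d):                       # R^a_{bcd}
    expr = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][b][d]
                - Gamma[a][d][e] * Gamma[e][b][c] for e in range(n))
    return expr

ricci = sp.Matrix(n, n, lambda b, d: sum(riemann(a, b, a, d) for a in range(n)))
R = sp.simplify(sum(g_inv[b, d] * ricci[b, d]
                    for b in range(n) for d in range(n)))
print(R)   # scalar curvature, to compare with the value quoted in the text
```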
## 3 Polynomial algebra From (20) and (21), by taking the commutator we construct a fifth order differential operator \[\hat{c}\ \equiv\ [\,\hat{b}_{a},\,\hat{l}_{a}\,]\] \[=\ 8\,u^{3}\,\left(r^{2}-u\right)\,\partial_{u}^{5}\ +\ 8\,r\,u^{2}\,\left(r^{2}-u\right)\, \partial_{u}^{4}\,\partial_{r}\ +\ 2\,r\,u\,\left(u-r^{2}\right)\partial_{u}^{2}\, \partial_{r}^{3}\ +\ \frac{1}{2}u\,\left(u-r^{2}\right)\partial_{u}\,\partial_{r}^{4}\] \[\qquad\qquad\qquad+\ 2\,u^{2}\,\left(2r^{2}(5m+2p+17)-10u(m+p)-4 \beta r^{3}+4\beta ru-37u\right)\,\partial_{u}^{4}\] \[+\ 8\,r\,u\,\left(-2u(m+p)+2(m+2)r^{2}-5u\right)\partial_{u}^{3} \,\partial_{r}\,+\,3u\left(-2(p+1)r^{2}+2\beta r^{3}-2\beta ru+u\right)\partial_ {u}^{2}\,\partial_{r}^{2}\] \[-\ 2\left(r^{2}-u\right)(mr+r-\beta u)\partial_{u}\,\partial_{r}^{ 3}\ +\ \frac{1}{8}\left(2\,u\,(m+p)-2(m+1)r^{2}+3\,u\right)\partial_{r}^{4}\ +\ {\cal O}(\partial^{3})\, \tag{27}\] which commutes with \(\hat{h}_{a}\) (16). Now, let us consider the algebra generated by the four elements (\(\hat{h}_{a}\), \(\hat{l}_{a}\), \(\hat{b}_{a}\), \(\hat{c}\)). The relevant commutators are the following: \[\left[\,\hat{c},\,\hat{l}_{a}\,\right]\ =\ 2\,\hat{l}_{a}\,\hat{h}_{a}^{2}\ -\ 8\,\hat{l}_{a}\,\hat{b}_{a}\ \ -\ 4\,\beta^{2}\,\hat{l}_{a}^{2}\ -\ 8\,\beta(3m+2p+7)\,\hat{l}_{a}\,\hat{h}_{a}\ -\ (m+1)(2m+2p-1)\,\hat{h}_{a}^{2}\] \[+\ 2\,\beta^{2}\biggl{(}11m^{2}+m(20p+38)+(p+1)(9p+26)\biggr{)}\,\hat{l}_{a}\] \[-\ 4\,\hat{c}\ +\ (2m+2p-1)(2m+2p+3)\,\hat{b}_{a}\] \[+\ \beta\,(2m+2p-1)\,(2m+2p+3)\,(3m+2p+7)\,\hat{h}_{a}\] \[-\ \beta^{2}\,(2m+2p-1)\left(\,m\,(m\,(5m+14p+26)+62p+41)+66p+20\right)\,, \tag{28}\] which is a third order polynomial in the generating elements, or equivalently, a sixth order differential operator, and \[\left[\,\hat{c},\,\hat{b}_{a}\,\right]\ =\ -\ 4\,\beta^{2}\,\hat{l}_{a}\, \hat{h}_{a}^{2}\ -\ 2\,\hat{b}_{a}\,\hat{h}_{a}^{2}\ -\ 2\,\beta\,(3m+2p+7)\,\hat{h}_{a}^{3}+\ 4\,\hat{b}_{a}^{2}\ +\ 8\,\beta\,(3m+2p+7)\,\hat{b}_{a}\,\hat{h}_{a}\] \[+\ 8\,\beta^{2}\,\hat{l}_{a}\,\hat{b}_{a}+\ 8\,\beta^{3}\,(3m+2p+7)\, \hat{l}_{a}\,\hat{h}_{a}\ +\ 2\,\beta^{2}\,(m(21m+30p+94)+76p+105)\,\hat{h}_{a}^{2}\] \[-\ 2\,\beta^{2}\,(m(11m+20p+38)+44p+26)\,\hat{b}_{a}\] \[+\ 4\,\beta^{2}\,\hat{c}\ -\ 2\,\beta^{3}\,\biggl{(}m(m(33m+82p+191)+4(97p+86))+ 448p+182\biggr{)}\,\hat{h}_{a}\] \[-\ 4\,\beta^{4}\,\biggl{(}5m^{2}+2m(4p+9)+4(p+1)(p+3)\biggr{)}\, \hat{l}_{a}\] \[+\ 2\,\beta^{4}\,(m+p+1)\biggl{(}15m^{3}+m^{2}(39p+89)+3m(p(74-11p )+54)\] \[+\ p\,(\,293-p(8p+65)\,)+84\biggr{)}\, \tag{29}\] which is a third order polynomial in the generating elements, or an eighth order differential operator. Hence, by definition we arrive at a polynomial algebra [4, 5] generated by (\(\hat{h}_{a}\), \(\hat{l}_{a}\), \(\hat{b}_{a}\), \(\hat{c}\)). In our case, it is a _cubic_ polynomial algebra. **4. Lie-algebraic structure** It can be immediately checked that the operator \(\hat{h}_{a}\) (16) has infinitely-many finite-dimensional invariant subspaces \[{\cal P}_{n}\ =\ \langle\,r^{p}u^{q}\ |\ 0\leq p+2q\leq n\,\rangle\, \tag{30}\] \[\hat{h}_{a}:\,{\cal P}_{n}\ \to{\cal P}_{n}\,\ \ \ \ \ \ n=0,1,2,\ldots\,\] which form the infinite flag. For fixed \(n\) the invariant subspace \({\cal P}_{n}\) coincides with the finite-dimensional representation space of the algebra \(g^{(2)}\): the infinite-dimensional, eleven-generated algebra of differential operators. This algebra was introduced in [6] (see for a discussion [7]) in relation to the \(G_{2}\)-integrable system of the Hamiltonian reduction. 
It is spanned by the Euler-Cartan generator \[\tilde{\cal J}_{0}(n)\ =\ r\,\partial_{r}\ +\ 2\,u\,\partial_{u}\ -\ n\, \tag{31}\] and the generators \[{\cal J}^{1}\ =\ \partial_{r}\,\ {\cal J}_{n}^{2}\ =\ r\,\partial_{r}\ -\ \frac{n}{3}\,\ \ {\cal J}_{n}^{3}\ =\ 2\,u\,\partial_{u}\ -\ \frac{n}{3}\,\] \[{\cal J}_{n}^{4}\ =\ r^{2}\partial_{r}\ +\ 2\,r\,u\,\partial_{u}\ -\ nr\ =\ r \tilde{\cal J}_{0}(n)\, \tag{32}\] \[{\cal R}_{0}\ =\ \partial_{u}\,\ {\cal R}_{1}\ =\ r\,\partial_{u}\, \ {\cal R}_{2}\ =\ r^{2}\,\partial_{u}\,\] \[{\cal T}_{0}\ =\ u\partial_{r}^{2}\,\ {\cal T}_{1}\ =\ u\partial_{r}\tilde{\cal J }_{0}(n)\,\ {\cal T}_{2}\ =\ u\tilde{\cal J}_{0}(n)\ (\tilde{\cal J}_{0}(n)+1)=\ u \tilde{\cal J}_{0}(n)\ \tilde{\cal J}_{0}(n-1)\, \tag{33}\] where \(n\) is a parameter, which has a meaning of the mark of the representation. If \(n\) is a non-negative integer, the space \({\cal P}_{n}\) is the common invariant subspace for the generators (31), (32), (33), where the algebra acts irreducibly. The differential operator \(\hat{h}_{a}(r,u)\) can be written in the \(g^{(2)}\)-Lie-algebraic form, see [1], \[\hat{h}^{(g^{(2)})}_{a}\ =\ -\,\frac{1}{2}\,{\cal J}_{0}^{2}\,{\cal J}^{1}\ -\ {\cal J}_{0}^{3}\,{\cal R}_{1}\ -\ {\cal J}_{0}^{3}\,{\cal J}^{1}\ +\ \beta\tilde{\cal J}_{0}(0)\ - \tag{34}\] \[(1+p+|m|)\ {\cal J}^{1}\ -\ 2\ (1+|m|)\ {\cal R}_{1}\ +\ \beta\,(1+p+|m|)\,\] in terms of the generators (31), (32) (at \(n=0\)) only. In a similar way the integrals \(\hat{l}_{a}\) (20), \[\hat{l}_{a}\ =\ {\cal J}_{0}^{3}\,{\cal R}_{2}\,-\,\frac{1}{2}\,{\cal J}_{0}^{ 3}\,{\cal J}_{0}^{3}\ -\,\frac{1}{2}\,(1+2\,p+2|m|)\,{\cal J}_{0}^{3}\ +\ 2\,(1+|m|){\cal R}_{2}\, \tag{35}\] as well as \(\hat{b}_{a}\) (21) and their commutator \(\hat{c}\ =\ [\,\hat{b}_{a},\,\hat{l}_{a}\,]\) (27) admit a representation in terms of the generators (31), (32). _Remark._ Note that the generators (31), (32) span the subalgebra \(gl(2)\ltimes R^{3}\in g^{(2)}\), discovered by Sophus Lie around 1880 [8] as the algebra of vector fields, see [9] (Cases 24, 28 at \(r=2\)), which was extended to first order differential operators in [10]. In general, this algebra (31), (32) acts on functions of two variables \((r,u)\). However, in the action on functions of the single variable \(r\), this algebra acts as a \(sl(2)\)-algebra, \[J_{n}^{+}=r^{2}\partial_{r}-n\,r\,\ J_{n}^{0}=2\,r\,\partial_{r}-n\,\ J^{-}= \partial_{r}. \tag{36}\] **Conclusions** In this work we showed that the two-body Coulomb problem in the Sturm representation leads to a new two-dimensional, exactly-solvable, superintegrable quantum system in curved space with variable curvature, which has \(g^{(2)}\) hidden algebra and possesses a cubic polynomial algebra of integrals. Two integrals are built from the two components of angular momentum and from the modified Laplace-Runge-Lenz vector, respectively. The Hamiltonian and these integrals can be written as algebraic operators with polynomial coefficients, they admit a \(g^{(2)}\)-Lie-algebraic form: they can be represented as a non-linear combinations of the \(g^{(2)}\)-algebra generators. The cubic polynomial algebra of integrals is an infinite-dimensional subalgebra of the universal enveloping algebra \(U_{g^{(2)}}\). This confirms a suspicion, known in folklore (P. Winternitz, W. Miller Jr, A. V. 
Turbiner), that if a maximally superintegrable system has a hidden algebra, the polynomial algebra of integrals belongs to the universal enveloping algebra of the hidden algebra, usually, this system is exactly-solvable (the so-called Montreal conjecture), see [11]. **Acknowledgments** This work is partially supported by CONACyT grant A1-S-17364 and DGAPA grant IN113022 (Mexico). A.M.E.R. thanks the audience who raised the questions about integrals and possible polynomial algebra during his presentation at the XII International Symposium on Quantum Theory and Symmetries (QTS12) in Prague.
2309.08460
Explaining Search Result Stances to Opinionated People
People use web search engines to find information before forming opinions, which can lead to practical decisions with different levels of impact. The cognitive effort of search can leave opinionated users vulnerable to cognitive biases, e.g., the confirmation bias. In this paper, we investigate whether stance labels and their explanations can help users consume more diverse search results. We automatically classify and label search results on three topics (i.e., intellectual property rights, school uniforms, and atheism) as against, neutral, and in favor, and generate explanations for these labels. In a user study (N =203), we then investigate whether search result stance bias (balanced vs biased) and the level of explanation (plain text, label only, label and explanation) influence the diversity of search results clicked. We find that stance labels and explanations lead to a more diverse search result consumption. However, we do not find evidence for systematic opinion change among users in this context. We believe these results can help designers of search engines to make more informed design decisions.
Z. Wu, T. Draws, F. Cau, F. Barile, A. Rieger, N. Tintarev
2023-09-15T15:08:24Z
http://arxiv.org/abs/2309.08460v1
# Explaining Search Result Stances to Opinionated People ###### Abstract People use web search engines to find information before forming opinions, which can lead to practical decisions with different levels of impact. The cognitive effort of search can leave opinionated users vulnerable to cognitive biases, e.g., the _confirmation bias_. In this paper, we investigate whether stance labels and their explanations can help users consume more diverse search results. We automatically classify and label search results on three topics (i.e., _intellectual property rights_, _school uniforms_, and _atheism_) as _against_, _neutral_, and _in favor_, and generate explanations for these labels. In a user study (\(N\)=203), we then investigate whether search result stance bias (balanced vs biased) and the level of explanation (plain text, label only, label and explanation) influence the diversity of search results clicked. We find that stance labels and explanations lead to a more diverse search result consumption. However, we do not find evidence for systematic opinion change among users in this context. We believe these results can help designers of search engines to make more informed design decisions. Keywords:Explainable Search Confirmation Bias User Study. ## 1 Introduction Web search that can lead to consequential decision-making frequently concerns _debated topics_, topics that different people and groups disagree on, such as _whether to vaccinate a child_ or _whether nuclear energy should be used as a power source_. Prior research has shown that the interplay between search engine biases and users' cognitive biases can lead to noteworthy behavioral patterns. For instance, when search result stance biases interact with cognitive user biases, information seekers may experience the _search engine manipulation effect_ (SEME): the tendency to adopt the stance expressed by the majority of (highly-ranked) search results [3, 9, 16, 35]. However, these results have only been studied and found for users who are undecided, not for users who already have strong opinions, who we refer to as _opinionated_ users. High cognitive demand during complex web searches can increase the risk of cognitive biases [6]. One such bias is the _confirmation bias_, which involves a preference for information that aligns with preexisting beliefs while disregarding contradictory information during the search process [6, 33]. Interventions to mitigate confirmation bias during web search have aimed at decreasing engagement with attitude-confirming and increasing engagement with attitude-opposing information [40], i.e., reducing interaction with search results that confirm a user's attitude and increasing interaction with search results that challenge a user's attitude. Interventions to reduce interaction with particular items have also been investigated in the context of misinformation. One effective method for reducing interaction with misleading content involves _labels_ to flag certain items [10, 25, 31]. The core issue with confirmation bias during web search, similar to the related issues of misinformation and SEME, is that users consume biased content. This motivated us to investigate interventions to increase the diversity of consumption and specifically whether labels indicating stance, and their explanations, are likewise successful for confirmation bias mitigation during search on debated topics. 
Therefore the goal of these interventions is to promote unbiased web search and mitigate the effects of users' confirmation bias and underlying (stance) biases in a _search engine result page_ (SERP). Consequently, this paper aims to address the following question: _Do automatically generated stance labels and explanations of the labels for search results increase the diversity of viewpoints users engage with, even if search results are biased?_ To address this question, we test three hypotheses. Previous work has found that undecided users are likely to change their opinion when exposed to biased search results, since they select more search results reflecting a certain opinion [3, 15, 17, 35]. However, in this study we restrict participants to opinionated users, having _strong_ existing opinions on a topic, and investigate whether _H1a): Users who are exposed to viewpoint-biased search results interact with less diverse results than users who are exposed to balanced search results._ Second, informative labels have been shown to mitigate confirmation bias in search results [40]. Therefore, in this study, we investigate whether simple stance labels (_against_, _neutral_, and _in favor_), and stance labels with explanations (importance of keywords) are effective for mitigating confirmation bias. This leads to _H1b): Users who are exposed to search results with (1) stance labels or (2) stance labels with explanations for each search result interact with more diverse content than users who are exposed to regular search results._ Third, if the labels are effective in reducing confirmation bias, we would expect an interaction effect between the bias in search results and explanation level (plain, label, label plus explanation): _H1c) Users who are exposed to search results with (1) stance labels or (2) stance labels with explanations are less susceptible to the effect of viewpoint biases in search results on clicking diversity._ We investigate these hypotheses in a pre-registered between-subjects user study (\(N\)=203) simulating an open search task.1 Our results show that both stance labels and explanations, led to a more diverse search result consumption compared to plain (unlabeled) search result pages. However, we did not find evidence that the explanations influenced opinion change. We believe these results can help designers of search engines to make more informed design decisions. ## 2 Related Work Explainable Artificial Intelligence (XAI) aims to help people understand the decisions and predictions AI systems make. In this paper, we investigate specifically how XAI can support users in searching for disputed topics. Search for debated topics is highly subjective: when users search the web to seek advice or form opinions on these kinds of topics, not just search result _relevance_ but also the stance of content is influential [3, 15, 16, 35, 40]. To mitigate undesired effects such as biased opinion change, earlier work has measured and increased the fairness [20, 52, 55] and viewpoint diversity in search results [13, 36, 49]. On the user interface side, it could be fruitful to label and explain the stance represented on a search engine results page (or SERP). These labels are related to the task known as _stance detection_, which is predominantly applied in a _target-specific_ fashion. That is, detecting not just a sentiment, but how it is referred to in relation to a specific topic or claim (often referred to as the _target_, e.g., "people should wear _school uniforms_") [2]. 
Stance detection is a multi-class classification task (i.e., typically classifying documents into _against_, _neutral_, and _in favor_), so predictive performances are most commonly reported in terms of macro F1 scores [26]. Furthermore, web search interventions targeting the mitigation of undesired effects, such as SEME, require _cross-target_ stance detection models to quickly respond to the large variety of debated topics users may search for. Here, stance detection models are applied to data sets where each document may be relevant to one of many potential topics [2, 26]. Constructing models that classify documents into stances related to _any_ topic in such a way may lead to weaker predictive accuracy compared to target-specific methods, but makes stance detection more generalizable and scalable. Macro F1 scores for cross-target ternary stance detection in previous work (e.g., on news articles or tweets) have ranged roughly from .450 to .750 [1, 4, 5, 23, 38, 51]. Also comparable are the cross-topic stance detection models evaluated using the _Emergent_ data set (and its follow-up version, the _2017 Fake News Challenge_ data set), which have achieved macro F1 scores of up to .756 [22, 41, 43]. While the main contribution of this paper is not to improve on the state of the art for stance detection, the stance detection method (DistilBERT) used here is comparable to this state of the art (macro F1 of 0.72). DistilBERT is much smaller than other pre-trained models and handles small datasets well [44, 46, 48]. What XAI methods are suitable for explaining stance detection to users? Stance detection can be seen as a text classification task. For text classification, explanations containing _input features_ have been found to be highly adaptable and often meaningful to humans [12, 30]. The way in which explanations can be visualized depends on the data type, purpose, and audience. Many current methods indicate input features as feature importance using a saliency map [39, 47]. When the features are readable texts, saliency information is shown by highlighting the most significant input at word or token level [24]. There have been some instances where researchers used text-based saliency maps to demonstrate their findings [21]. To the best of our knowledge, no previous work has explored whether highlighting salient words in search results would mitigate people's clicking bias. One of the most closely related works found that feature-based explanations could help users simulate model predictions for search results [12]. Another similar study involves a verbal saliency map using a model-agnostic explainer and a human evaluation of explanation representations for a news topic classifier and sentiment analysis [19]. Their finding is that the saliency map makes explanations more understandable and less cognitively challenging for humans than a heatmap visualization. However, our work differs from theirs in several ways: we study explanations in the context of search engines, and we have conducted a full-fledged user study while they only performed a pilot study. **Contribution to Knowledge for XAI.** Previous XAI literature has contributed to explaining information retrieval systems, focusing on the interpretability of document-retrieval mechanisms [27, 28, 54]. For example, the authors of [54] propose a listwise explanation generator, which provides an explanation that covers all the documents contained in the page (e.g., by describing which query aspects were covered by each document).
These explanations were not evaluated by people. Another paper studied how well explanations of individual search results helped people anticipate model predictions [12], but did not consider cognitive or stance bias. In contrast to previous work, this paper examines how explanations affect users, considering the potential mitigation of their cognitive biases on search engine manipulation effects (SEME). In doing so, we see a need to address both the potential (stance) bias within a search result page and the bias of users consuming these results. To the best of our knowledge, this is also the first work to conduct empirical experiments on users' _behavior_ in response to explanations in the context of information retrieval. ## 3 Methodology This section describes the materials we used for organizing the user study (e.g., data set, stance detection model, explanation generation, and search interface). ### Data Preparation To train, test, and explain the stance detection model, we considered a public data set containing search results related to three debated topics (i.e., _atheism_, _intellectual property rights_, and _school uniforms_) [14].1 The authors of [14] motivate the selection of these three topics because they offer valid arguments for both supporting and opposing viewpoints. Additionally, they argue that opinions on these topics have diverse impacts, ranging from concerning mainly the user (atheism) to businesses (intellectual property rights) and society (school uniforms). These data include URLs, titles, snippets, and stance labels for a total of 1475 search results, which had been retrieved via API or web crawling from two popular web search engines. Stance labels had been assigned (by experts) on seven-point Likert scales (i.e., including three degrees of opposing or supporting a topic), which we mapped into the three categories _against_, _neutral_, and _in favor_ (i.e., which is what most current stance detection methods handle). Using the provided URLs, we crawled the full web page text bodies (stripped of any HTML tags) for all search results. Here we dropped 347 search results from the data as their text bodies could not be retrieved (e.g., because of 404 errors), leaving 1125 search results accompanied by their respective text bodies. Finally, we processed the retrieved contents by truncating each document's middle section, retaining only its head and tail, and then concatenating each search result's title, snippet, head section, and tail section, while ensuring that the result is exactly 510 tokens long. We removed all other information from the data aside from the documents' stance labels. Table 1 shows the stance distribution per topic in our final data set. \begin{table} \begin{tabular}{l l l} \hline \hline & \multicolumn{2}{c}{**Stance Distribution**} \\ **Topic** & **N** & Against – Neutral – In Favor \\ \hline Intellectual property rights & 378 & 10.5\% – 17.7\% – 71.7\% \\ School uniforms & 395 & 21.5\% – 36.7\% – 41.8\% \\ Atheism & 352 & 19.8\% – 46.3\% – 33.8\% \\ \hline **Total** & 1125 & 17.3\% – 33.3\% – 49.3\% \\ \hline \hline \end{tabular} \end{table} Table 1: Topic and stance distribution in the used data.
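The preprocessing just described can be summarized in a short sketch before we turn to the model itself (the helper names below are ours, not the authors' code, and word-level splitting stands in for the subword tokenizer introduced in the next section):

```python
def likert_to_stance(score):
    """Map a 7-point Likert stance score in -3..3 to three classes."""
    return "against" if score < 0 else "in favor" if score > 0 else "neutral"

def head_tail(tokens, budget):
    """Drop the middle of a document, keeping its head and tail."""
    if budget <= 0:
        return []
    if len(tokens) <= budget:
        return tokens
    head = budget // 2
    tail = budget - head
    return tokens[:head] + tokens[-tail:]

def build_input(title, snippet, body, max_tokens=510):
    """Concatenate title, snippet, and a truncated body into one document."""
    fixed = f"{title} {snippet}".split()
    return " ".join(fixed + head_tail(body.split(), max_tokens - len(fixed)))
```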
### Stance Detection Model Figure 1: Distributions of Attributes Across the Training, Validation and Test Set. After pre-processing (tokenization), we developed the model for classifying search results into against, neutral, and in favor. The dataset was split into training (75%) and validation/test (25%) sets. We fine-tuned the model for different hyperparameters. After every epoch, the model was evaluated on the validation set to monitor its learning progress. Once the evaluation loss stopped decreasing for 5 epochs, training was terminated and the model was evaluated on the unseen test set; this result is the general performance that we report. The best-performing model and learned parameters were used as the predictor for identifying labels and generating explanations. #### 3.2.2 Tokenization. Before training the stance classification model, we needed further preprocessing of the raw search results. Specifically, we had to _tokenize_ each word before feeding the search results into the model. The tokenization process performs tasks such as handling subwords, adding special tokens to pad a sequence of words to a max length, or indicating the beginning or end of a sequence. In our work, we instantiated the DistilBERT tokenizer using the AutoTokenizer class provided by the transformers library [50]. We set the maximum sequence length of the tokenizer to 512. #### 3.2.3 Training Details. Similar to previous work performing search result stance classification [12], we built one cross-topic model using our entire data set. First, we split the dataset in a stratified manner, which preserves the percentage of the topics and labels in each subset. Figure 1 shows the final split across the training, validation, and test sets for both topics and labels. Then, we classified search results into stance categories (i.e., _in favor_, _against_, or _neutral_), using a pre-trained version of the uncased DistilBERT base model [42] from HuggingFace. Specifically, we fine-tuned DistilBERT using 75% (843) of the documents in our data and split the remaining 25% (282) equally for validation and testing. Due to the relatively small size of our dataset, we tried to avoid over-fitting by using neural network dropout and stopping training when the evaluation loss stops improving for 5 epochs [45, 53]. We trained using the same dataset split and experimented with learning rates ranging from 5\(e\)-6 to 1\(e\)-4. Regarding the remaining hyper-parameters, the models were optimized using the Adam optimizer with a learning rate of 1\(e-5\), the batch size was set to 8, and we set the dropout rate of both attention layers and fully connected layers to 0.4. #### 3.2.4 Metric. In our stance detection task, we have three labels to be classified, and their distribution is uneven. To take performance on all stances into account, we considered the macro F1 score, defined as the average of the F1 scores for each of the three stances: \[macro_{F1}=(F1(stance=favor)+F1(stance=neutral)+F1(stance=against))/3 \tag{1}\] Model performance. We obtained a stance detection model by fine-tuning the DistilBERT model for our downstream classification task. The fine-tuning process was completed through HuggingFace's Trainer interface. The progress of each run was tracked using Weights & Biases.1 We observed that the learning rate of \(1e\)-5 gave the best performance with a macro F1 score of 0.72.2 Footnote 1: [https://wandb.ai](https://wandb.ai) Footnote 2: Due to a minor error in evaluation, a slightly higher macro F1 score was reported in the pre-registration. However, this erroneous score did not influence the training process or affect our user study.
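The fine-tuning setup of Sections 3.2.2-3.2.4 can be sketched as follows with the HuggingFace transformers API (a sketch, not the authors' code, assuming pre-tokenized `train_ds`, `val_ds`, and `test_ds` datasets; the hyper-parameter values follow the text, and Eq. (1) corresponds to scikit-learn's `average="macro"`):

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoConfig, AutoModelForSequenceClassification,
                          AutoTokenizer, EarlyStoppingCallback, Trainer,
                          TrainingArguments)

MODEL = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL, num_labels=3,
                                    dropout=0.4, attention_dropout=0.4)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, config=config)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"macro_f1": f1_score(labels, np.argmax(logits, axis=-1),
                                 average="macro")}

args = TrainingArguments(
    output_dir="stance-distilbert",
    learning_rate=1e-5,                  # best of the 5e-6 .. 1e-4 sweep
    per_device_train_batch_size=8,
    num_train_epochs=100,                # early stopping ends training earlier
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=val_ds,   # assumed
                  compute_metrics=compute_metrics,
                  callbacks=[EarlyStoppingCallback(early_stopping_patience=5)])
trainer.train()
print(trainer.evaluate(test_ds))         # macro F1 on the held-out test set
```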
### Creating Explanations Instance Selection. For the user study, we selected search results correctly predicted by the model, picking them mainly from the test set and some from the validation set. As mentioned before, the train, validation, and test sets were split in a stratified way, which preserves the frequencies of our target classes. To assemble search results for our study, we randomly drew 21 _correctly predicted_ search results per topic from our test and validation data (i.e., seven against, seven neutral toward, and seven in favor of the topic). The SERPs later displayed 10 results per page. #### 3.3.1 LIME Parameter Tuning. After we fine-tuned and evaluated the model on the search result corpus, we used LIME (Local Interpretable Model-Agnostic Explanations) to explain the model's predictions [39].3 For text data, LIME outputs a list of features (tokens) from the input sentence, ranked by their importance w.r.t. a model's specific prediction. We generated the explanations by setting the neighborhood size used to learn the linear model to 5000, kernel_width to 50,4 and showing the top 20 important tokens (or fewer, based on the text length) belonging to the model's predicted class. Footnote 3: [https://github.com/marcotcr/lime](https://github.com/marcotcr/lime) Footnote 4: We tried multiple kernel sizes (10, 25, 50, and 75) and chose a value of 50 since we got a slight increase in the R2 scores for each LIME local prediction on the test set of about 3-4% on average compared to the other sizes.
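A minimal sketch of this LIME setup (ours; the `tokenizer` and `model` objects are those of the training sketch above, while `document` and `predicted_label` are assumed inputs):

```python
import torch
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    """LIME classifier wrapper: list of strings -> (n, 3) probabilities."""
    enc = tokenizer(list(texts), truncation=True, padding=True,
                    max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

model.eval()
explainer = LimeTextExplainer(class_names=["against", "neutral", "in favor"],
                              kernel_width=50)
exp = explainer.explain_instance(document,          # one search-result text
                                 predict_proba,
                                 num_features=20,   # top 20 tokens
                                 num_samples=5000,  # neighbourhood size
                                 labels=[predicted_label])
print(exp.as_list(label=predicted_label))  # (token, weight) pairs to highlight
```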
### Search Engine Architecture. We implemented our web-based search interface using the SearchX platform as a basis [37]. The web application has both a front-end and a back-end. The front-end is built on NodeJS using the React and Flux frameworks and manages users' data by sending logs to the back-end, which mainly handles the retrieval of search results from the database and stores the logs from the front-end. #### 3.3.2 Interface. We designed the search interface as follows. As soon as users opened the homepage of the study, they saw a search bar positioned at the top of the page, where they could enter their queries. The search engine showed results only when the input query included one or more keywords referring to the user's assigned topic (among atheism, intellectual property rights, and school uniforms). Otherwise, it showed a message informing users that no results were found for their query. A sentence below the search bar provided users with the topic and the specific keywords to include in the query. For the _atheism_ topic, the keywords are "atheist" or "atheism"; for the _intellectual property right_ topic, the keywords are "property right", "intellectual right" or "intellectual property right"; while for the _school uniform_ topic, the keywords are "uniform" or "school uniform". After the users entered a query including the above-mentioned keywords, the interface displayed a list of matched results. This set of search results had a different arrangement based on the random condition a user was assigned to. Figure 2: Different conditions for the SERP display. _a)_ Text-only Search Result Page. _b)_ Predicted stance labels. _c)_ Predicted stance labels and Explanations. We set up three SERP display conditions (see Section 4.2): 1) text-only SERP, 2) SERP with labels, and 3) SERP with labels and explanations. The text-only SERP showed ten search results without any extra information. Each search result had two parts: a clickable title redirecting the user to the corresponding webpage and a snippet showing some of its content. Figure 2 a) shows the layout of this type of interface. The SERP with labels introduces two new features: the labels indicating the original content's stance and an overview of all the search result labels on the right side of the webpage. As shown in Figure 2 b), we assigned a different color to represent each viewpoint. We also added extra information to each label, indicating whether the document is _against_, _neutral_, or _in favor_ of the topic. The third type of SERP interface makes use of a mono-chromatic saliency map (see Figure 2 c), highlighting the top 20 words that best contribute to the prediction in the search result snippet. Due to the space limitations of the interface, not all feature words would appear in the snippet. The color of the saliency map aligns with the color of the label. We use the same color for all 20 feature words regardless of their significance, to make it less complicated for users to understand. #### 4.2.2 Biased Versus Balanced Results. We ranked search results depending on the ranking bias condition randomly assigned to each user. Specifically, we created _biased_ and _balanced_ top 10 search result ranking templates, according to which we would later display search results to users. Biased search results were biased either in the _against_ or _in favor_ direction and thus contain only results of one of these two stance classes in the top four ranking spots (the remaining six results were balanced across stance classes). Users who were assigned to the _biased_ condition would see search results biased in the opposite direction compared to their pre-search opinion (e.g., if they were in favor of school uniforms, they would see results biased against school uniforms). Users who were assigned to the balanced condition would see a similar search result page, but with neutral search results in the top four spots. Neutral results either do not contain any arguments on the debated topic or equally many arguments in both directions.1 Table 2 shows an overview of the ranking templates. \begin{table} \begin{tabular}{c l l l} \hline \hline **Rank** & **Biased** & **Biased** & **Balanced** \\ & **Opp. (T1)** & **Supp. (T2)** & **(T3)** \\ \hline 1 & Against & Favor & Neutral \\ 2 & Against & Favor & Neutral \\ 3 & Against & Favor & Neutral \\ 4 & Against & Favor & Neutral \\ 5 & Favor & Against & Against \\ 6 & Neutral & Neutral & Neutral \\ 7 & Against & Favor & Favor \\ 8 & Favor & Against & Against \\ 9 & Neutral & Neutral & Neutral \\ 10 & Against & Favor & Favor \\ \hline \hline \end{tabular} \end{table} Table 2: Templates for the first SERP (i.e., the top 10-ranked search results) in each SERP ranking bias condition. Footnote 1: We chose this setup to make the conditions as comparable as possible, e.g., rather than displaying results in alternating fashion in the balanced condition.
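The template logic of Table 2 amounts to a few lines of code; a sketch (the dictionary and function names are ours):

```python
# Stance sequences for the top-10 ranking templates of Table 2
# (A = against, N = neutral, F = in favor).
TEMPLATES = {
    "T1_biased_opposing":   list("AAAAFNAFNA"),
    "T2_biased_supporting": list("FFFFANFANF"),
    "T3_balanced":          list("NNNNANFANF"),
}

def serp_template(condition, pre_search_attitude):
    """Pick the template for a user with a non-zero attitude in -3..3."""
    if condition == "balanced":
        return TEMPLATES["T3_balanced"]
    # In the biased condition, the bias opposes the user's own stance.
    return (TEMPLATES["T1_biased_opposing"] if pre_search_attitude > 0
            else TEMPLATES["T2_biased_supporting"])
```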
## 4 User Study Setup ### Research Ethics and pre-registration. We deployed the web application on servers owned by the Faculty of Maastricht University (UM server) and secured the connection through the HTTPS protocol with SSL certificates. The study was reviewed and accepted by our review board. The data collected from users, ranging from demographic information to users' clicking behaviours, are all stored anonymously on the server. Prior to the launch of the user study, the research question, hypotheses, methodology, measurements, etc. were pre-registered on the Open Science Framework. Only minor changes were made, including balancing the number of participants in the different conditions, changing the way we measure users' attitude change, and correcting a computation error in the macro F1 score. ### Variables In our study, each subject could look at search results (i.e., 10 search results per page) accompanied by different features. We analyzed the participants' attitudes and interaction behavior (with a focus on the proportion of clicks on attitude-confirming search results). **Independent variables.** * _Topic_ (between-subjects, categorical). Participants were assigned to one topic (i.e., atheism, intellectual property rights, or school uniforms) for which they have a strong pre-search attitude (i.e., strongly opposing or strongly supporting). If a participant had no strong attitude on any topic, they ended the study. If a participant had multiple strong attitudes, they were assigned to the topic that had the fewest participants at that point in the study (i.e., to move toward a balanced topic distribution). * _SERP ranking bias_ (between-subjects, categorical). There were two types of ranking conditions: biased and balanced. For each of these two conditions, we preset a ranking template (see Table 2). Participants would see a search result page with ten items which were ranked in accordance with the template. If a user was assigned to the biased condition, they would see opposing-biased search results if their pre-search attitude was _in favor_ (i.e., 3) and supporting-biased search results if their pre-search attitude was _against_ (i.e., \(-3\)). * _SERP display_ (between-subjects, categorical). Each participant saw search results accompanied by one of these features: (1) plain text results without stance labels, (2) results with predicted stance labels (Figure 2 b), or (3) results with predicted stance labels and highlighted explanations (Figure 2 c). #### 3.2.1 Dependent variable. * _Shannon Index_ (numerical). The Shannon Index was applied to measure the diversity of users' clicks. Let \(N\) be the total number of clicks made in one session, and \(n_{0},n_{1},n_{2}\) be the number of clicks on "against", "neutral" and "favor" items, respectively. The formula for computing clicking diversity is: \(-\sum_{i=0}^{2}\frac{n_{i}}{N}\ln(\frac{n_{i}}{N})\). The convention for no occurrence of a class is to ignore it, as \(\ln(0)\) is undefined [29]. For instance, the Shannon Index for (0, 1, 3) is \(0+0.35+0.22\approx 0.56\). The minimum value of the Shannon Index is 0, which indicates that there is no diversity and only one viewpoint was clicked on. When all classes are equally represented, the Shannon entropy has its highest value (for three classes, this would be \(3\times(-\frac{1}{3})\ln(\frac{1}{3})=\ln 3\approx 1.1\)).
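A direct implementation of this index is short; for example (plain Python, ours):

```python
import math

def shannon_index(clicks):
    """Shannon diversity of clicks over (against, neutral, favor);
    classes with zero clicks are skipped, as ln(0) is undefined."""
    total = sum(clicks)
    return -sum(n / total * math.log(n / total) for n in clicks if n > 0)

print(round(shannon_index((0, 1, 3)), 2))  # 0.56, the example from the text
print(round(shannon_index((2, 2, 2)), 2))  # 1.1 = ln(3), the maximum
```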
#### 3.2.2 Descriptive and exploratory measurements. We used these variables to describe our sample and for exploratory analyses, but we did not conduct any conclusive hypothesis tests on them. * _Demographics_ (categorical). We asked participants to state their gender, age group, and level of education from multiple choices. Each of these items includes a "prefer not to say" option. * _Clicks on Neutral Items_ (numerical). In a balanced SERP, the majority of items were neutral. We were specifically interested in whether participants' engagement with search results with a neutral stance is affected by the SERP display condition. * _Clicking Diversity_ (numerical). We logged the clicking behavior of participants during the survey and computed the ratio of pre-search attitude-confirming vs. attitude-opposing search results among the results a user has clicked on. Clicks on neutral search results were not regarded for this variable. * _Attitude Change_ (numerical). In line with previous research [15, 40, 16], we asked participants to select their attitudes on debated topics before and after the experiments using a seven-point Likert scale ranging from "strongly opposing" to "strongly supporting". The difference between their two answers is then assessed in the analysis. * _Textual feedback_ (free text). We asked participants to provide feedback on the explanations and the task. #### 3.2.3 Procedure. Participants completed the study in three steps, as described below. The survey was conducted on Qualtrics1, while the interaction with search results occurred on our own server. Step 1. After agreeing to an informed consent, participants were asked to report their gender, age group, and level of education. Participants were first asked to imagine the following scenario: _You and your friend were having dinner together. Your friend is very passionate about a debated topic and couldn't help sharing his views and ideas with you. After the dinner, you decide to further inform yourself on the topic by conducting a web search._ Furthermore, participants were asked to state their attitudes concerning each debated topic (see Section 3.1; including one attention check for which we specifically instructed participants on what option to select from a Likert scale). Step 2. We introduced participants to the task and subsequently assigned them to one of the three debated topics (i.e., _atheism_, _intellectual property rights_, and _school uniforms_) depending on their pre-search attitudes, and randomly assigned them to one SERP ranking bias condition and one SERP display condition (see Section 4.2). Participants were then asked to click on a link leading them to our search platform (see Section 3.4). Here, participants could enter as many queries as they wanted, as long as those queries included their assigned topic term (e.g., school uniforms pros and cons for the topic _school uniforms_). Regardless of what or how many queries participants entered, they always received search results from the same pool of 21 available search results relevant to their assigned topic (i.e., seven against, seven neutral, and seven in favor; see Section 3.1). With every query that participants entered, they received those search results ranked according to the ranking template associated with their assigned SERP ranking bias condition (see Section 3.4) for the first SERP, and randomly drawn search results (following the template) for subsequent searches.1 Depending on the SERP display condition participants were assigned to, they could see either plain search results, search results accompanied by stance labels, or search results accompanied by stance labels with additional explanations. Participants were made aware that the search results they were seeing might be biased and that there were limited results. After entering a query, participants were free to explore search results as long as they wished and click on links that led to the presented web pages, or enter new queries. Users were instructed to return to the Qualtrics survey when they were done searching. Footnote 1: Whenever a user enters a new query, the first SERP (i.e., displaying the top 10 results) will always show search results according to the template, whereas pages 2 and 3 will show the 21 search results relevant to the topic in random order.
Step 3. Finally, in the questionnaire, we asked participants to report their post-search attitude (towards their assigned topic). Further, we asked them to provide textual feedback on the explanations and the task. We also included another attention check to filter out low-quality data in this post-interaction questionnaire. The attention checks consisted of one straightforward question with suggested response options. We excluded the data of participants who failed one or more of the attention checks from data analysis. **Recruitment Methods.** In this study, we used Prolific1 and Qualtrics2 to manage the participants and design the survey workflow, respectively. The workflow of our study, including the informed consent, screening questions, and link to the survey, was all completed on Qualtrics. On the recruitment platform Prolific, we only selected participants with a minimum age of 18 (in compliance with research ethics) who were fluent in English, as our dataset only contains English results. Footnote 1: [https://prolific.co](https://prolific.co) Footnote 2: [https://www.qualtrics.com/](https://www.qualtrics.com/) **Sample Details.** We anticipated observing medium effects for _SERP display_ and _SERP ranking bias_ on _clicking diversity_ (Cohen's \(f=0.25\)). Thus, we determined in an a priori power analysis for a between-subjects ANOVA (see Section 4.2) a required sample size of 205 participants, assuming a significance threshold of \(\alpha=\frac{0.05}{3}=0.017\) (testing three hypotheses), a desired power of (1- \(\beta\)) = 0.8, and considering that we tested, depending on the hypothesis, six groups (i.e., three SERP display conditions: _without stance labels, with stance labels, with stance labels and explanation_; and 2 SERP ranking bias conditions: _biased towards the attitude-opposing viewpoint, balanced_) using the software _G*Power_ [18]. We aimed for a balanced distribution across topics and conditions. Participants were required to be older than 18 and to have a high proficiency in English (i.e., as reported by _Prolific_). Participants could only participate in our study once. As mentioned above, we excluded participants from data analysis if they did not pass one or more attention checks. We also excluded participants from data analysis if they did not access our search platform at all or if they did not click on any links during their search. **Statistical Analyses.** To test our three hypotheses, we conducted an Analysis of Variance (ANOVA), looking at the main and interaction effects of the three independent variables (1) the topic, (2) _SERP display_ (without stance labels, with stance labels, with stance labels and explanation) and (3) _SERP ranking bias_ (biased towards the attitude-opposing viewpoint, balanced) on the _Shannon index_ (H1a, H1b, H1c). Aiming at a type 1 error probability of \(\alpha=0.05\) and applying Bonferroni correction to correct for multiple testing, we set the significance threshold to \(\frac{0.05}{3}=0.017\). We added _topic_ as an additional independent variable to this analysis to control for its potential role as a confounding factor. In addition to the analyses described above, we conducted post-hoc tests (i.e., to analyze pairwise differences) to determine the exact differences and effect sizes, Bayesian hypothesis tests (i.e., to quantify evidence in favor of null hypotheses), and exploratory analyses (i.e., to note any unforeseen trends in the data) to better understand our results.
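The pre-registered ANOVA can be reproduced along the following lines with statsmodels (a sketch under the assumption of a per-participant data frame `df` with columns as named in the comments; sequential sums of squares match the R-style tables reported below):

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# df: one row per participant, with columns
#   shannon : float, Shannon index of the participant's clicks
#   bias    : 'biased' / 'balanced'             (SERP ranking bias)
#   display : 'text' / 'label' / 'explanation'  (SERP display)
#   topic   : assigned debated topic
model = smf.ols("shannon ~ C(bias) * C(display) + C(topic)", data=df).fit()
print(anova_lm(model))        # sequential (type I) sums of squares

ALPHA = 0.05 / 3              # Bonferroni-corrected significance threshold
```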
## 5 Results The overarching objective of this research is to investigate the effect of extra visual elements such as stance labels and text explanations on users' interaction behaviours, especially in terms of clicking diversity. In this section, we present the final results of the study with 203 participants and address our research question and hypotheses. #### 4.2.3 Descriptive Statistics. Prior to analyzing the primary statistics, it is necessary to first examine the demographic data. In general, young people made up most of our participants (see Figure 3). Figure 3: Demographic information of participants: Bar chart showing the distribution of users' age, gender, and education level. The educational level data reveal that the majority of participants have completed at least some level of higher education, with a smaller percentage having completed advanced degrees. Participants were roughly equally distributed across the three topics: 70 _atheism_, 66 _intellectual property rights_, and 67 _school uniforms_. Regarding factors such as bias and interface types that were randomized by the Qualtrics workflow, their distributions are also balanced: 102 participants accessed the biased SERPs and 101 accessed balanced SERPs, while 72 users viewed the text-only SERP display, 64 viewed the labelled interface, and 67 viewed the interface with saliency maps. In the pre-survey attitude test, 130 people were granted access to our survey by expressing rather negative (against) viewpoints towards a specific topic, while only 73 people expressed a positive stance. Figure 4 shows the diversity of users' clicks as measured by the Shannon index across conditions. For balanced SERPs, the mean Shannon diversity index starts at 0.63 when there are explanations, then it slightly reduces to 0.55 for the labelled interface, and drastically drops to 0.24 for the text SERP. For biased SERPs, the trend is similar, from 0.64 to roughly 0.52, but the reduction is less drastic. In short, those who interacted with balanced pages overall scored somewhat lower in Shannon diversity. Figure 5 shows the stacked histogram representing users' click history under different conditions. In the left subplot, we see that _neutral_ items attracted more clicks than _favor_ and _against_ combined in the balanced setting. In the biased setting, however, the number of neutral clicks starts to shrink and users start to visit polarized content more. It is notable that users clicked on more _neutral_ items in the text-only SERP display. The other two types of interface seem to generate very similar amounts of clicks for each stance. ### Hypothesis Tests We ran an ANOVA test to examine the relationship between the variable _Shannon index_ and other predictor variables, including _explanation condition_, _bias type_ and _assigned topic_. Table 3 contains the results of the ANOVA test, including the F-statistic, p-value, and degrees of freedom for each variable and interaction. We also included the assigned topic in the test to control for topic as a potential confounding factor. Figure 4: Shannon Index across SERP Display conditions, split by ranking bias conditions. Error bars represent confidence intervals. Figure 5: Number of clicks on items of each stance made by users under different conditions.
(Left: different bias type; Right: different interface.) **H1a: Users who are exposed to viewpoint-biased search results interact with less diverse results than users who are exposed to balanced search results.** To test this hypothesis, we compare the Shannon diversity of clicks made by users who viewed viewpoint-biased search engine result pages (SERPs) with that of users who did not. Figure 4 suggests that users who were exposed to more balanced search results clicked on somewhat less diverse content. However, in the ANOVA summary (Table 3), we see that the influence of bias type (\(F=4.911,df=1,p=.027\), Cohen's \(f=0.16\)) is not statistically significant, given the significance threshold of \(.017\). We thus do not find any conclusive evidence for a difference in the diversity of clicked results between bias conditions. **H1b: Users who are exposed to search results with (1) stance labels or (2) stance labels with explanations for each search result interact with more diverse content than users who are exposed to regular search results.** Regarding H1b, we compare users' clicks across the different SERP displays. The results of the ANOVA show a significant effect of explanation condition on clicking diversity (\(F=7.664,df=2,p<.001\), Cohen's \(f=0.28\)). This suggests that there is a difference in the diversity of content interacted with between users who were exposed to (1) plain search results, (2) search results with stance labels, or (3) search results with stance labels and also explanations. We conducted a pairwise Tukey test to determine whether there are significant differences between SERP display levels (text, label, explanation). We found significant differences between text-label and text-explanation (with adjusted p-values of \(.0008\) and \(.012\), respectively), suggesting that both stance labels and stance labels with explanations led to more diverse clicks compared to the regular SERPs. The adjusted p-value for the labelled vs. explanation comparison is \(.723\), indicating that there may be no difference in Shannon diversity between the _labelled_ and _explanation_ groups. **H1c: Users who are exposed to search results with (1) stance labels or (2) stance labels with explanations are less susceptible to the effect of viewpoint biases in search results on clicking diversity.** This hypothesis concerns the interaction effect between SERP display and bias types on the Shannon index. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Variable** & **Df** & **Sum Sq** & **Mean Sq** & **F** & \(p\) \\ \hline SERP Ranking Bias & 1 & 0.81 & 0.8146 & 4.911 &.027 \\ SERP Display & 2 & 2.54 & 1.2712 & 7.664 & \(<\).001 \\ Topic & 2 & 0.03 & 0.0158 & 0.095 &.909 \\ SERP RB:SERP Display & 2 & 0.65 & 0.3256 & 1.963 &.143 \\ Residuals & 195 & 32.34 & 0.1659 & & \\ \hline \hline \end{tabular} \end{table} Table 3: ANOVA results for the Shannon index and independent variables including the explanation level, bias type, the interaction between explanation and bias type, and the topic the user holds a strong opinion towards. Under our reduced significance threshold, only the _Explanation_ effect is significant. Our ANOVA (Table 3) did not reveal any evidence for such an interaction effect (\(F=1.963,df=2,p=.143\), Cohen's \(f=0.02\)). In other words, when users are looking at different types of SERP displays but are exposed to the same level of viewpoint bias, our results do not contain evidence that users will click on more diverse items because explanations and labels are visible.
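The post-hoc comparison reported under H1b corresponds to a standard Tukey HSD test; a sketch, reusing the assumed `df` from the ANOVA example above:

```python
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Pairwise Tukey HSD over the three SERP display levels.
tukey = pairwise_tukeyhsd(endog=df["shannon"], groups=df["display"], alpha=0.05)
print(tukey.summary())        # adjusted p-values for the three pairs
```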
### Exploratory Analysis To further understand our results, we also conducted exploratory analyses. Note that these analyses were not preregistered. #### 5.2.1 Clicks on Neutral Items. To better understand why the search results were more diverse in the labeled and explanation conditions, we looked closer at the distribution across the three viewpoints. In Figure 5, we can already see a larger number of neutral results for the text-only condition. Furthermore, in every SERP display, users who were exposed to a balanced page clicked on more neutral items on average. This may be due to their tendency to click on highly-ranked neutral items while being unaware of the stance. We conducted an exploratory ANOVA to investigate the effects of biases and interfaces on the number of clicks on neutral items. Table 4 lists the test results. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Variable** & **Df** & **Sum Sq** & **Mean Sq** & **F** & \(p\) \\ \hline SERP Ranking Bias & 1 & 84.3 & 84.30 & 44.308 & \(<\).001 \\ SERP Display & 2 & 23.1 & 11.54 & 6.067 &.003 \\ SERP RB:SERP Display & 2 & 15.6 & 7.79 & 4.096 &.018 \\ Residuals & 197 & 374.8 & 1.90 & & \\ \hline \hline \end{tabular} \end{table} Table 4: Results of the ANOVA analysis for the number of neutral clicks. We can observe that both the explanation condition and the bias condition had main effects on click diversity. Their interaction is also significant. This means that users may click on more neutral results in the balanced condition, and especially so when SERPs do not contain any stance labels or explanations. #### 5.2.2 Attitude Change. Previous research showed that mildly opinionated users' attitude change can differ across levels of ranking bias [15, 17, 35, 3, 8]. However, this effect has, to the best of our knowledge, not yet been shown for strongly opinionated users such as in our study. We intended to measure the attitude change by subtracting the post-search viewpoint \(t_{1}\) from the pre-search viewpoint \(t_{0}\), and thus the difference would range from \(-6\) to \(6\). We summarize users' attitude change in Figure 6 and Table 5. From Figure 6, we observe that only a few participants developed large attitude changes (absolute value \(>3\)) after the survey. In Table 5, the mean absolute attitude change ranges from 1.0 to 1.3.
Other participants perceived the viewpoint labels to be accurate: _"The search engine was very useful, and the classification of the information was accurate."_ At least some participants \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline **Bias Type** & **SERP display** & **Attitude Change** & **Attitude Change** & **Skewness** \\ & & (Mean) & (Median) & \\ \hline \multirow{3}{*}{Balanced} & Text-only & 1.14 & 1.00 & 1.31 \\ & Labelled & 1.30 & 1.00 & 0.15 \\ & Explanation & 1.15 & 1.00 & 1.10 \\ \hline \multirow{3}{*}{Biased} & Text-only & 1.25 & 1.00 & 0.20 \\ & Labelled & 1.19 & 1.00 & 0.35 \\ & Explanation & 1.09 & 1.00 & 0.25 \\ \hline \hline \end{tabular} \end{table} Table 5: Absolute values of attitude change per condition. Figure 6: Histogram visualizing the difference between pre- and post-survey attitudes. Each group of bins indicates the number of people in a bias type setting who changed their viewpoints towards their assigned topic by a certain number of points. The X-axis indicates the normalized difference of attitude between the pre- and post-survey answers measured by a 7-point Likert Scale. were able to perceive the bias in the search result pages: _"the results against atheism were promoted to the top results of the search, it was not an impartial search result"_. While we aimed to mimic a real search engine, some participants may have realized that the results were reshuffled: _"It appears the results of the search were similar or the same in each new search. I'm not sure the words used to determine the stance of the page were appropriate."_ ## 6 Discussion and Limitations We found a significant effect of _SERP display_ on clicking diversity. That is, participants who were exposed to viewpoint labels or viewpoints labels with explanations consumed more diverse results than plain text search. While both non-text SERP displays affect users' click diversity over text-only SERP displays, we cannot observe any additional effect from the SERP display with labels and explanations over the label-only SERP display. Our results suggest that this difference can be explained by a predominance of clicks on the neutral stance in the text-only condition. This, alongside the qualitative comments, suggests that participants trusted and used the labels and explanations to inform themselves diversely. Contrary to our expectations, we did not find evidence for a difference between _biased and balanced search results_, in terms of click diversity. Furthermore, exploratory analyses revealed that users exposed to balanced SERPs clicked on more neutral items. We further found no exploratory evidence that intervention types (bias of search results; explanation level) affect participants' viewpoints. This later result is not necessarily surprising, given that the participants in this study held strong opinions on the topic. We also would not expect a change because the task given to participants was formulated as low-stakes and open-ended (inform yourself about the topic after speaking to an opinionated friend). **Limitations and Future work.** Our study has at least two important limitations. First, each search result in the data set we considered only had one overall viewpoint label, and this was limited to _against_, _neutral_, and _in favor_. This allowed for scalability but does not reflect the full nuances of online search results. For example, an essay or blog post could express highly diverse perspectives but still receive a positive stance label if that is its overall conclusion. 
Secondly, our study carefully aimed to balance a controlled environment with maintaining ecological validity. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Variable** & **Df** & **Sum Sq** & **Mean Sq** & **F** & \(p\) \\ \hline SERP Ranking Bias & 1 & 0.01 & 0.013 & 0.014 &.907 \\ SERP Display & 2 & 0.56 & 0.282 & 0.296 &.744 \\ Topic & 2 & 7.88 & 3.938 & 4.132 &.017 \\ SERP RB:SERP Display & 2 & 0.55 & 0.277 & 0.290 &.748 \\ Residuals & 195 & 185.88 & 0.953 & & \\ \hline \hline \end{tabular} \end{table} Table 6: ANOVA results for the absolute value of users' attitude change. Despite this, it is possible that participants recognized that this was not a true search engine when issuing new queries. Similarly, the selection of templates allowed for some structured reshuffling of results. However, this also meant that strong opinions (contrary to the active user) were always at the top of biased search results. For the balanced condition there were also more neutral results. This likely contributed to the higher diversity of search results selected in the biased condition. In addition, ranking fairness metrics could have been used for the ranked lists, which could have led to slightly different results. Further work is required to disentangle the relationship between position bias and confirmation bias, and to replicate the study with different templates. ## 7 Conclusion In this paper, we studied the impact of stance labels and explanations of these labels for search on disputed topics. We found that stance labels and explanations led to more diverse search result consumption. However, we cannot conclude that explanations have an additional effect beyond labels on users' click diversity. Whether consuming diverse or more neutral results is preferable is in itself a debated topic. Backfire effects, where users become even more invested in their pre-existing beliefs, are possible when users consume strongly opposing views [34]. Greater diversity can further induce wrong perceptions of the available evidence: for example, portraying climate change deniers and believers equally can give the impression that climate change is an open issue, which may be worse than actually weighing the evidence on both sides [11]. How much stance diversity is ideal can thus depend on individual user traits [32] and may lie somewhere between the extremes [7]. While search on disputed topics is a context where confirmation biases are expected to be large, similar cognitive biases are likely to occur in other decision-making tasks where XAI is used. Further work is needed to catalogue different scenarios of opinion formation in different domains where confirmation bias may be present. ## Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860621.
2305.20039
The metallicity dependence of the stellar initial mass function
Dust is important for star formation because it is the crucial component that couples gas to stellar radiation fields, allowing radiation feedback to influence gas fragmentation and thus the stellar initial mass function (IMF). Variations in dust abundance therefore provide a potential avenue by which variation in galaxy metallicity might affect the IMF. In this paper we present a series of radiation-magnetohydrodynamic simulations in which we vary the metallicity and thus the dust abundance from 1% of Solar to 3$\times$ Solar, spanning the range from the lowest metallicity dwarfs to the most metal-rich early-type galaxies found in the local Universe. We design the simulations to keep all dimensionless parameters constant so that the interaction between feedback and star-forming environments of varying surface density and metallicity is the only factor capable of breaking the symmetry between the simulations and modifying the IMF, allowing us to cleanly isolate and understand the effects of each environmental parameter. We find that at a fixed surface density more metal-rich clouds tend to form a slightly more bottom-heavy IMF than metal-poor ones, primarily because in metal-poor gas radiation feedback is able to propagate further, heating somewhat larger volumes of gas. However, shifts in IMF with metallicity at a fixed surface density are much smaller than shifts with surface density at fixed metallicity; metallicity-induced IMF variations are too small to explain the variations in mass-to-light ratio reported in galaxies of different mass and metallicity. We, therefore, conclude that metallicity variations are much less important than variations in surface density in driving changes in the IMF and that the latter rather than the former are most likely responsible for the IMF variations found in early-type galaxies.
Tabassum S. Tanvir, Mark R. Krumholz
2023-05-31T17:12:46Z
http://arxiv.org/abs/2305.20039v1
# The metallicity dependence of the stellar initial mass function ###### Abstract Dust is important for star formation because it is the crucial component that couples gas to stellar radiation fields, allowing radiation feedback to influence gas fragmentation and thus the stellar initial mass function (IMF). Variations in dust abundance therefore provide a potential avenue by which variation in galaxy metallicity might affect the IMF. In this paper we present a series of radiation-magnetohydrodynamic simulations in which we vary the metallicity and thus the dust abundance from 1% of Solar to 3\(\times\) Solar, spanning the range from the lowest metallicity dwarfs to the most metal-rich early-type galaxies found in the local Universe. We design the simulations to keep all dimensionless parameters constant so that the interaction between feedback and star-forming environments of varying surface density and metallicity is the only factor capable of breaking the symmetry between the simulations and modifying the IMF, allowing us to cleanly isolate and understand the effects of each environmental parameter. We find that at a fixed surface density more metal-rich clouds tend to form a slightly more bottom-heavy IMF than metal-poor ones, primarily because in metal-poor gas radiation feedback is able to propagate further, heating somewhat larger volumes of gas. However, shifts in IMF with metallicity at a fixed surface density are much smaller than shifts with surface density at fixed metallicity; metallicity-induced IMF variations are too small to explain the variations in mass-to-light ratio reported in galaxies of different mass and metallicity. We, therefore, conclude that metallicity variations are much less important than variations in surface density in driving changes in the IMF and that the latter rather than the former are most likely responsible for the IMF variations found in early-type galaxies. keywords: magnetic fields -- radiative transfer -- stars: formation -- stars: luminosity function, mass function -- stars: protostars -- turbulence ## 1 Introduction The stellar initial mass function (IMF) is the mass distribution of stars at the point of their birth. The IMF is one of the most important distributions in astrophysics because it at least partly determines everything from chemical evolution to the strength of feedback during galaxy formation. The IMF has been found to be nearly universal in the Milky Way and its closest neighbours, which are the only locations where measurements of the IMF by direct star counting are possible (Offner et al., 2014, and references therein). A number of hypotheses have been proposed to explain this lack of variation, mostly focusing on the universality of turbulence (Padoan et al., 1997; Padoan & Nordlund, 2002; Hennebelle & Chabrier, 2008, 2009; Hopkins, 2012, 2013; Nam et al., 2021) and the stabilising effects of stellar radiation feedback (e.g., Bate, 2009, 2012; Krumholz, 2011; Krumholz et al., 2012; Guszejnov et al., 2016; Cunningham et al., 2018) or the isothermal-adiabatic transition (e.g., Lee & Hennebelle, 2018; Hennebelle et al., 2019). One important outcome of these studies is that, while turbulence alone can explain why the high-mass portion of the IMF always has the same slope, some additional physical process is likely required to explain the existence of an IMF peak at a particular mass (e.g., Krumholz, 2014; Krumholz et al., 2016; Guszejnov et al., 2016, 2020).
This finding is particularly significant because tentative evidence has started to emerge that in the extreme star forming environments found in early-type galaxies (ETGs), the IMF peak shifts slightly towards lower masses than the IMF peak found in the local Universe. There are multiple lines of evidence for such a shift derived from different measurement techniques - spectroscopic (e.g., van Dokkum & Conroy, 2010; Spiniello et al., 2012; La Barbera et al., 2013; Conroy et al., 2017), dynamical (e.g., Cappellari et al., 2012; Newman et al., 2017; Oldham & Auger, 2018), and gravitational lensing (Treu et al., 2010; Spiniello et al., 2015) - all pointing towards at least qualitatively consistent conclusions; see Smith (2020) for a comprehensive review. There is also much more tentative evidence for the possibility that the IMF might be more bottom-light in ultra-faint dwarf galaxies (e.g., Geha et al., 2013; Gennaro et al., 2018; however, see El-Badry et al., 2017 for a contrary perspective). These observations raise the theoretical question of which physical processes might be responsible for inducing the shift in the IMF peak in ETGs, and at least potentially in dwarfs as well. In Tanvir et al. (2022, hereafter Paper I), we performed a series of carefully controlled radiation magnetohydrodynamic (RMHD) simulations to study the role of the environment and its interaction with stellar feedback mechanisms in setting the IMF peak. We control these experiments by keeping all the dimensionless parameters constant (e.g., virial parameter, Mach number, Alfven Mach number) while altering only one other parameter so that it is easier to deduce the role of that parameter along with the feedback mechanism. In Paper I we explored the role of surface density and showed that with increasing surface density the IMF peak shifts towards lower masses than the ones found in the Milky Way, but by less than would be expected purely from a shift in the mean Jeans mass with surface density because of the enhanced effectiveness of stellar radiation feedback in denser environments. The resulting shift in the IMF peak plausibly matches what is required to explain the mass-to-light ratios measured in ETGs. However, in Paper I, we did not alter another potentially important parameter that differs between ETGs and other galaxies: the metallicity, and therefore dust properties. Dust matters since it is the component of the ISM that couples gas to stellar radiation. Some previous authors have conducted numerical simulations to understand how changing metallicity, and therefore changing dust abundance, might alter the IMF. For example, Bate (2019) carries out radiation hydrodynamical simulations to study the metallicity-dependence of the properties of stellar populations formed at metallicities \(Z=0.01-3\)\(\mathrm{Z}_{\odot}\), and finds that the stellar mass distribution is largely insensitive to metallicity. This result is in qualitative agreement with a number of similar, earlier numerical studies (Myers et al., 2011; Bate, 2014; Bate & Keto, 2015). More recently, however, Guszejnov et al. (2022) found that the characteristic mass of a forming stellar population does vary with both metallicity and the strength of the interstellar radiation field (ISRF). They find that with a typical Galactic ISRF, lower metallicities produce a higher characteristic mass. Chen et al. 
(2021) also studied the IMF in metal-poor environments and found that the number of low-mass stars increases with metallicity and that the mass function is top-heavy in low-metallicity environments. Sharda & Krumholz (2022) use analytic models to survey a wide range of parameter space, and find that the characteristic stellar mass is comparatively high at metallicities low enough that metal line cooling dominates, but begins to decrease with metallicity once the metallicity is high enough (\(Z\gtrsim 0.01\mathrm{Z}_{\odot}\)) for dust-gas coupling to dominate gas thermodynamics. Bate (2023) conducts simulations taking into account how the combined effects of increasing cosmic microwave background intensity at high redshift and variation in metallicity impact the IMF. He finds that for the CMB intensity at redshift \(z=5\), increasing metallicity increases the characteristic mass of stars. While all these authors have studied the effects of varying metallicity, none of the previous simulations (as opposed to analytic models) have done so for the very high surface-density environments that likely characterised star formation in ETGs. Moreover, the approach in most of these papers was to consider a wide range of environmental changes - for example not just metallicity, but also CMB intensity, ISRF intensity, total star-forming cloud mass, etc. While this has the advantage of being maximally realistic, since in fact all of these parameters do vary with redshift and galactic environment, it has the disadvantage of making the results extremely difficult to interpret. Since these simulations vary many parameters at once, it is not clear which of the many possible knobs being turned is responsible for any particular aspect of the simulation outcomes, nor is it easy to identify by what mechanism any particular knob operates. In this paper we take a different approach, following on from that in Paper I: we present a series of RMHD simulations where we explore the role of metallicity variation in setting the IMF by keeping the dimensionless parameters that describe our star-forming clouds constant, except for parameters that we vary systematically and one at a time, so that we can create a clean experiment that isolates the effects of environment and its interaction with stellar feedback. A critical aspect of our simulation approach is that our experiments are engineered so that, except for the one change that we make, our simulations are all simply re-scaled versions of one another, such that if we were to turn off stellar feedback and complex thermodynamics, and simply adopt an isothermal equation of state, the results of all simulations would be identical. This allows us to solve the problem of interpretability, since we can then unambiguously connect effects and causes. We describe the numerical method and initial conditions we use to achieve these effects in Section 2. In Section 3 we discuss the results from our simulations and their implications for the physics behind the IMF. We summarise our conclusions in Section 4. ## 2 Numerical Methods and Initial Conditions ### Numerical Methods The numerical methods we employ in this study are identical to those used in Paper I, and we refer interested readers to that paper for more detailed information. Here we provide a brief summary. We carry out our simulations using the orion2 adaptive mesh refinement code (Li et al., 2021). The code uses the approach of Li et al.
(2012) to solve the equations of ideal magnetohydrodynamics with self-gravity (Truelove et al., 1998; Klein et al., 1999) and radiation transfer (Krumholz et al., 2007) in the two-temperature, mixed-frame, grey, flux-limited diffusion approximation. The code includes sink particles (Krumholz et al., 2004) to replace regions where protostars are forming and that are collapsing beyond our ability to resolve. Each sink particle runs a one-zone protostellar evolution model as described in Offner et al. (2009), which provides the instantaneous properties of the star that it represents, such as radius, luminosity, and polytropic index; these depend on the accretion history determined self-consistently from the simulations. The luminosity of each sink particle is then used as a source term in the radiative transfer equations. We also take into account the feedback caused by protostellar outflows through momentum sources around each sink particle. The outflow model we use is described in Cunningham et al. (2011). In this model, whenever mass is accreted onto a sink particle, a fraction \(f_{w}\) of it is ejected back into the simulation in the form of an outflow. This outflow material is launched with a speed of \(v_{w}\). In our simulation, we adopt the same wind model parameters used in Hansen et al. (2012) and Cunningham et al. (2018), with \(f_{w}=0.3\) and \(v_{w}=\min(v_{\mathrm{kep}},60\ \mathrm{km/s})\), where \(v_{\mathrm{kep}}\) is the Keplerian speed at the protostar's surface (also determined self-consistently from the accretion history via the protostellar evolution model). These parameters are based on observations of the momentum budget of protostellar outflows (Richer et al., 2000). A critical component of the simulations is the opacity of the dusty gas, which is responsible for coupling the gas flow to the radiation field generated by the stars and by the thermal radiation from the dusty gas itself. As in Paper I, we take the Rosseland and Planck mean opacity of the dusty gas as a function of density and temperature from the tabulated results of Semenov et al. (2003). In order to study the effects of varying the metallicity, we scale these tabulated opacities by the metallicity relative to Solar, i.e., for runs with metallicity \(Z=0.1\mathrm{Z}_{\odot}\) we take the opacities to be 10% of Semenov et al.'s tabulated values. We should therefore understand the metallicity we quote when describing our simulations as the dust metallicity; the gas-phase metallicity is unimportant as long as the density is high enough for dust and gas to be thermally coupled by collisions since in this case dust heating and cooling completely dominates the gas thermodynamics. There is a limitation to the radiation transfer method: we assume the dust and gas temperatures are equal. This is generally a good assumption for gas and dust temperature at densities above \(10^{4}-10^{5}\)cm\({}^{-3}\)(e.g., Goldsmith, 2001). However, in low-density regions, dust-gas collisions may occur too infrequently to allow efficient gas-dust coupling, allowing gas to be either hotter or cooler than the dust. This is however unlikely to be a factor in determining the IMF since in regions where gas is collapsing and fragmenting, the density is high enough for gas and dust to be well coupled. Studies that have treated gas and dust temperatures separately (Bate & Keto, 2015) find that its effect on fragmentation is minimal (Bate, 2019). 
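The two metallicity-dependent ingredients just described, the linearly scaled dust opacities and the outflow launching prescription, can be summarised in a short sketch. The snippet below is an illustrative Python reimplementation assuming cgs units; it is not code from orion2, and a generic array stands in for the Semenov et al. (2003) opacity table.

```python
import numpy as np

def scaled_opacity(kappa_solar, Z_over_Zsun):
    """Dust opacity at metallicity Z: the Solar-metallicity tabulated values
    (Semenov et al. 2003) scaled linearly with Z/Zsun, as described above."""
    return Z_over_Zsun * np.asarray(kappa_solar)

def outflow_launch(dm_acc, m_star, r_star, G=6.674e-8):
    """Outflow prescription as parameterised in these runs: a fraction
    f_w = 0.3 of each accreted mass increment dm_acc is re-injected at speed
    v_w = min(v_kep, 60 km/s), with v_kep the Keplerian speed at the
    protostellar surface. Inputs in cgs (g, cm); returns (wind mass, speed)."""
    f_w = 0.3
    v_kep = np.sqrt(G * m_star / r_star)  # Keplerian speed at the surface, cm/s
    v_w = min(v_kep, 60.0e5)              # cap at 60 km/s (6.0e6 cm/s)
    return f_w * dm_acc, v_w
```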
### Initial and boundary conditions We construct our initial conditions following Paper I, and we refer readers to that paper for full details. Our initial conditions are designed to produce a carefully controlled experiment, whereby we hold all simulation parameters fixed except for one, which we vary systematically in order to isolate the effects of that parameter. In this study, we consider two different series of runs. The "medium" density case (M runs hereafter) consists of a periodic box1 containing a mass \(M_{\rm box}=1000\) M\({}_{\odot}\) with surface density \(\Sigma=1\) g cm\({}^{-2}\) (corresponding to a mean density \(\rho_{0}=7.0\times 10^{-19}\) g cm\({}^{-3}\), box length \(L=0.46\) pc), an initial temperature \(T_{0}=10\) K, an initial gas velocity dispersion \(\sigma=2.4\) km s\({}^{-1}\), and an initially-uniform magnetic field of strength \(B_{0}=0.73\) mG; for this combination of parameters, the box free-fall time \(t_{\rm ff}=80\) kyr, the crossing time \(t_{\rm cross}=0.2\) Myr, the virial ratio \(\alpha_{\rm vir}=1\), the Mach number \(\mathcal{M}=12.6\), and the plasma \(\beta=0.012\). The "low" density series (L runs hereafter) is a rescaled version of the M runs with a surface density that is lower by a factor of \(f=1/2\), and a box length, volume density, and magnetic field strength that are multiplied by factors of \(f^{-1}\), \(f^{2}\), and \(f\) relative to the M series, respectively; the gas velocity dispersion and temperature are unchanged. These transformations have the property that they leave \(\alpha_{\rm vir}\), \(\mathcal{M}\), and \(\beta\) unchanged between the L and M series, so the _only_ dimensionless number that varies between the L and M series is the optical depth of the gas. Footnote 1: While all MHD quantities use periodic boundaries, the radiation field uses Marshak boundaries, with an inward radiation flux corresponding to that of an isotropic blackbody radiation field with a radiation temperature of 10 K; see Paper I for full details. In addition to varying surface density and thus the optical depth at fixed metallicity, as in Paper I, in this study we also vary the metallicity independently of the surface density. The metallicity range we explore is from 1% of Solar to 3\(\times\) Solar; this covers the range from the lowest metallicity dwarf galaxies in the local Universe (Madden et al., 2013) to the most metal-rich early-types (Gu et al., 2022). We carry out runs at \(Z/Z_{\odot}=1\)%, 10%, 1, and 3 for both the L and M cases; we refer to these runs as L1%, L10%, L1\(\times\), and L3\(\times\), and similarly for the M series, and we summarise their full properties in Table 1. Note that the L1\(\times\) and M1\(\times\) runs are identical to the L1 and M1 runs in Paper I. Metallicity variation will change how the gas interacts with stellar radiation feedback, since metals (in the form of dust grains) are what couple the radiation to the gas. They will also change how the gas interacts with protostellar outflow feedback, since outflows shock the gas, and the rate at which the gas is then able to cool back down to its equilibrium temperature depends on the box optical depth. Since the dimensionless parameters are the only ones that describe the system in the absence of stellar feedback (radiation and protostellar outflow), any differences we find between the simulations can only be a result of the interaction of stellar feedback with the differing surface density and metallicity, and thus the differing optical depth.
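The scaling relations between the two run series can be checked with a few lines. The sketch below applies the stated transformations to the M-series fiducial values and verifies that the dimensionless combinations quoted above are unchanged; the virial-ratio and plasma-beta expressions omit convention-dependent prefactors, so only their ratios between the series are meaningful.

```python
f = 0.5  # surface-density ratio between the L and M series (Sigma_L = f * Sigma_M)

# M-series fiducial values from Table 1 (pc, g cm^-3, mG, km s^-1, K)
M = dict(L=0.46, rho=6.96e-19, B=0.73, sigma=2.4, T=10.0)

# L series: the rescaled copy described above (sigma and T unchanged)
Lrun = dict(L=M["L"] / f, rho=M["rho"] * f**2, B=M["B"] * f,
            sigma=M["sigma"], T=M["T"])

for run in (M, Lrun):
    run["Sigma"] = run["rho"] * run["L"]  # surface density, up to unit conversions
    # dimensionless combinations (prefactors omitted, so only ratios matter):
    run["alpha_vir"] = run["sigma"] ** 2 / (run["rho"] * run["L"] ** 2)  # ~ virial ratio
    run["beta"] = run["rho"] * run["T"] / run["B"] ** 2                  # ~ plasma beta
    # the Mach number sigma / c_s(T) is trivially unchanged, since sigma and T are fixed

print(M["Sigma"] / Lrun["Sigma"])          # -> 2, i.e. Sigma scales by f
print(M["alpha_vir"] / Lrun["alpha_vir"])  # -> 1, invariant
print(M["beta"] / Lrun["beta"])            # -> 1, invariant
```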
By comparing to the Solar metallicity runs reported in Paper I, we can further separate the effects of metallicity and surface density. ### Resolution, refinement, and sink particles In order to ensure that we can compare our current runs to those carried out in Paper I, we use identical resolution, refinement, and sink particle creation criteria; we refer to that paper for a detailed description, which we only summarise briefly here. The AMR hierarchy in these simulations is set on a \(512^{3}\) base grid which we denote \(\mathcal{L}=0\). The simulation takes place in two stages; during the first, "driving" phase, which lasts for two crossing times, we disable self-gravity and radiation transport, and drive the turbulence to a steady velocity dispersion, providing time for the turbulence to achieve a statistically steady state. During the driving phase we disable AMR, so no higher level exists. After the driving phase we start the "collapse" phase, during which we disable driving and re-enable self-gravity and radiation. Once we turn on gravity the grid is allowed to adaptively refine to a maximum level \(\mathcal{L}_{\rm max}=2\), and we refine in any cell where the Jeans number \[J=\sqrt{\frac{G\rho\Delta x^{2}}{\pi c_{s}^{2}}} \tag{1}\] rises above \(J=1/8\); here \(\rho\) is the gas density, \(\Delta x\) is the cell size, and \(c_{s}\) is the gas isothermal sound speed. We report the size of the cells on the finest level in Table 1. Sink particle formation is triggered in any zone on the finest AMR level where the gas is dense enough to reach a local Jeans number \(J>1/4\). Once a sink particle is formed it interacts with the gas via gravity, accretion, and stellar feedback only. ## 3 Results We present the results of our simulations here. First, we give an overview of the simulations in Section 3.1. Next, we discuss the mass distribution of the sink particles and identify how varying the metallicity at a fixed surface density impacts the IMF in Section 3.2. In Section 3.3 and Section 3.4 we interpret these results in terms of the effects of radiation and protostellar outflow feedback, and the interaction of these two feedback mechanisms with gas of varying metallicity. ### Overview of simulations In Figure 1 and Figure 2 we show the column density and the density-weighted temperature of runs L and M for our four different metallicities: 1%, 10%, 1\(\times\), and 3\(\times\) Solar metallicity. As in Paper I we show these plots at matching SFE (star formation efficiency) instead of at matching times, since star formation occurs at different times in these runs. It is clear from Figure 1 that turbulence has created dense filamentary structures and that star formation activity is confined within these structures. Morphologically the runs are very similar to one another, which is not surprising, since by construction in the absence of a feedback mechanism these runs would be identical. There is one small difference visible between the two sets of runs with different column densities: run L produces filamentary structure that is straighter and narrower than the filamentary structures produced in run M. By contrast, at different metallicities but fixed surface density the simulations are almost identical to one another. This indicates that, at least with regard to morphology, metallicity has a relatively minor impact. The temperature structure of the gas shown in Figure 2 behaves quite differently. Here differences with metallicity are much more apparent than in Figure 1.
For both the L and M series, lower metallicity runs are warmer compared to the higher metallicity runs. This indicates that in the lower metallicity runs stellar radiation is able to escape further from the dense regions around individual stellar sources, leading to more widely distributed heating and a warmer mean temperature over most of the volume. This is in contrast to the effects of surface density, in which lower surface density runs are cooler because radiation is trapped less efficiently in the full simulation box. We show the time evolution of the star formation efficiency (SFE) in Figure 3, and the total number of stars present as a function of the SFE in Figure 4; we define the SFE as the ratio of total stellar mass in the simulation to the total initial mass present in the volume. We find that, in all runs, once the star formation activity begins it takes approximately 0.5 free-fall times to reach 5% SFE. There is very little difference between the runs with varying metallicities at a fixed surface density. This indicates that whatever effects the feedback mechanisms have on the star formation rate in the simulations are independent of metallicity and surface density. By contrast, we see larger differences in Figure 4, which shows the number of stars as a function of SFE. At a fixed surface density, the two more metal-poor runs in both the L and M series, corresponding to 1% and 10% of Solar metallicity, produce the same number of stars at a given SFE. On the other hand, the runs with Solar and 3\(\times\) Solar metallicity produce more stars at fixed SFE, indicating that the stars formed in these runs have systematically lower masses; the effect is not large, but is clear, particularly in the L runs. We examine these trends further in the next section. ### Stellar mass distribution To explore the metallicity dependence of the stellar mass function further, we present two sets of plots. The first, Figure 5, shows the evolution of the median of the sink particle mass distributions for all the runs. As in Paper I we measure the median with respect to the stellar mass rather than the number, i.e., the median stellar mass \(m_{*}\) is defined by the condition that half the total stellar mass is found in stars with masses \(<m_{*}\). Looking at Figure 5 we see that the median masses have reached nearly steady values at 5% SFE. When measured on both absolute (top panel) and relative scales (bottom panel) the L runs at varying metallicity show greater variation in median mass than the M runs; however, the direction of variation is the same in both sets of runs, which is that as the metallicity increases the median mass decreases. We further note that the differences between the different metallicities are smaller than the differences between runs L and M, particularly when expressed in terms of absolute rather than relative mass. To investigate these differences further, in Figure 6 we show the cumulative mass functions of the runs at 5% SFE on both absolute and relative mass scales; as in Figure 5, we measure the CDF with respect to mass rather than number, since this is much more numerically stable. Consistent with the trend as a function of time shown in Figure 5, in the L runs as the metallicity increases there is a slight shift in the CDF shape and mass range it covers, with metal-poor runs showing slightly heavier mass distributions than metal-rich runs at the high mass end.
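The mass-weighted median and CDF used in Figures 5 and 6 are defined above in words; a short sketch with made-up sink masses, purely for illustration, makes the construction explicit.

```python
import numpy as np

def mass_weighted_median(masses):
    """Median stellar mass measured with respect to mass rather than number:
    the mass m_star such that half of the total stellar mass is found in
    stars with masses below m_star."""
    m = np.sort(np.asarray(masses, dtype=float))
    cum = np.cumsum(m) / m.sum()          # mass-weighted CDF evaluated at each star
    return m[np.searchsorted(cum, 0.5)]

def mass_weighted_cdf(masses, grid):
    """Fraction of the total stellar mass in stars with mass below each grid
    value -- the quantity plotted in the cumulative mass functions."""
    m = np.asarray(masses, dtype=float)
    return np.array([m[m < g].sum() for g in grid]) / m.sum()

sinks = np.array([0.05, 0.1, 0.2, 0.2, 0.5, 1.0, 3.0])  # made-up masses (Msun)
print(mass_weighted_median(sinks))  # -> 3.0: over half the mass is in the 3 Msun star
```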
The trend is less visible, and may in fact reverse, at the lowest stellar masses, so that the metal-rich distribution is narrowest. The variations in run M are similar to or perhaps slightly smaller than those in run L, and are in qualitatively the same direction. Consistent with Paper I, we also see that, on an absolute scale, the L runs have a slightly heavier IMF than the M runs. ### The effect of radiation feedback at varying metallicity Recall that our simulation series is constructed so that if the gas were isothermal and not subject to protostellar feedback, the different metallicity and surface density runs would all simply be re-scaled versions of the same simulation, and we would therefore expect identical results. Thus the differences in IMF we have observed between the runs must be a result of feedback, and variations in IMF at fixed surface density must be due to the interaction of feedback with the gas of varying metallicity. In this section, we examine how radiation feedback alters the gas temperature distribution in runs of varying metallicity and how this, in turn, influences fragmentation, and in the next section, we perform a similar exercise for protostellar outflow feedback. Figure 7 shows the gas mass distribution with respect to both density and temperature in runs L and M at different metallicity, all at 5% SFE. From the plot, it is clear that there are differences between the metal-poor and the metal-rich runs. The most significant differences are in the high-density parts of the distribution, since this is the gas that is closest to star formation and thus whose fragmentation will most directly affect the IMF. In the metal-poor runs, we see an upturn in the minimum temperature of the dense gas, while in the more metal-rich runs the temperature remains flat at high density. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Name & \(M_{\rm box}\) (M\({}_{\odot}\)) & \(L\) (pc) & \(\rho_{0}\) (g cm\({}^{-3}\)) & \(B_{0}\) (mG) & \(t_{\rm ff}\) (kyr) & \(\Delta x\) (AU) & t\({}_{\rm cross}\) (Myr) & \(\Sigma\) (g cm\({}^{-2}\)) & \(Z/Z_{\odot}\) & \(\tau_{\rm 100K}\) \\ \hline \hline L1\% & 2000 & 0.92 & \(1.74\times 10^{-19}\) & 0.36 & 160 & 46 & 0.4 & 0.5 & 0.01 & \(5.38\times 10^{-3}\) \\ L10\% & 2000 & 0.92 & \(1.74\times 10^{-19}\) & 0.36 & 160 & 46 & 0.4 & 0.5 & 0.1 & 0.054 \\ L1\(\times\) & 2000 & 0.92 & \(1.74\times 10^{-19}\) & 0.36 & 160 & 46 & 0.4 & 0.5 & 1 & 0.54 \\ L3\(\times\) & 2000 & 0.92 & \(1.74\times 10^{-19}\) & 0.36 & 160 & 46 & 0.4 & 0.5 & 3 & 1.6 \\ M1\% & 1000 & 0.46 & \(6.96\times 10^{-19}\) & 0.73 & 80 & 23 & 0.2 & 1 & 0.01 & 0.0108 \\ M10\% & 1000 & 0.46 & \(6.96\times 10^{-19}\) & 0.73 & 80 & 23 & 0.2 & 1 & 0.1 & 0.108 \\ M1\(\times\) & 1000 & 0.46 & \(6.96\times 10^{-19}\) & 0.73 & 80 & 23 & 0.2 & 1 & 1 & 1.08 \\ M3\(\times\) & 1000 & 0.46 & \(6.96\times 10^{-19}\) & 0.73 & 80 & 23 & 0.2 & 1 & 3 & 3.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Simulation parameters, from left to right: run name, mass in the computational box, size of the computational box, mean density in the computational box, initial magnetic field strength, mean-density free-fall time, cell size at the finest AMR level, turbulent crossing time, surface density, metallicity, and optical depth computed using the Planck-mean opacity evaluated at a temperature \(T=100\) K. All simulations use the same initial velocity dispersion \(\sigma_{0}=2.4\) km s\({}^{-1}\) and temperature \(T_{0}=10\) K.
This difference occurs because at low metallicity the gas is less opaque, and therefore is not able to shield itself from heating by the radiation produced by nearby stars, and so becomes warmer as it collapses. The most extreme case here is the M run at 1% of Solar metallicity, where there is essentially no gas at densities \(>10^{3}\rho_{0}\) and temperatures \(\lesssim 15\) K. On the other hand, the temperature distribution at high density extends to noticeably higher temperatures in the metal-rich runs than in the metal-poor ones, most likely for the same reason: because when the gas is more opaque, radiation is more easily "bottled up", leading to hotter warm regions around protostars that have already started radiating. While all of these temperature differences might seem minor, recall that the Jeans mass varies as \(T^{3/2}\), so a factor of 1.5 in temperature corresponds to a factor of 1.8 in mass - comparable to or larger than the IMF shifts we have observed between the runs, and of roughly the size required to explain the observations of ETGs. In order to understand how the opacity affects fragmentation more directly, we show in Figure 8 the 1D mass-weighted cumulative distribution function of mass with respect to \(M_{\rm J}/M_{\rm box}\), where \(M_{\rm J}\) is the Jeans mass computed from the local gas density and temperature. Figure 1: Column densities of simulations L (top) and M (bottom) series at 5% SFE at metallicities 1%, 10%, 1\(\times\), and 3\(\times\) Solar metallicity. The colour scale goes from \(\log(\Sigma/\Sigma_{0})=-1\) to 1, where \(\Sigma_{0}=\rho_{0}L\) and \(\rho_{0}\) is the mean density in the simulation domain. Circles show star particles, and are colour-coded by mass \(m_{\bullet}\) from \(\log(m_{\bullet}/M_{\rm box})=-5\) to \(-3\), where \(M_{\rm box}\) is the total mass of the simulation box. Figure 2: Same as Figure 1, but showing density-weighted projected temperature rather than column density. For a detailed description of how to construct this figure we refer readers to Paper I, but to summarise, note that the horizontal axis in Figure 8 corresponds to the dashed lines of constant \(M_{\rm J}/M_{\rm box}\) in Figure 7. The vertical axis in Figure 8 then shows what fraction of the mass in the simulation lies to the right of the corresponding dashed line in Figure 7, i.e., it shows what fraction of the mass in a given simulation has a Jeans mass smaller than the indicated value, and thus has the potential to fragment to produce stars with mass smaller than that value. From this plot, we see a clear trend between the metal-poor and metal-rich runs. The metal-rich runs tend to have more mass at lower \(M_{\rm J}/M_{\rm box}\) than the metal-poor runs, consistent with our observation that the metal-rich runs produce a more bottom-heavy IMF. The location at which the dashed 1-1 line in the figure intersects the CDF gives a rough estimate of the minimum object mass that it is possible for gravity to produce: to the left of this intersection point, the box contains less than a single Jeans mass of material for which \(M_{J}\) is that small, and is therefore not capable of producing collapsed objects of such small mass. The intersection point is on average shifted to slightly lower mass at higher metallicity, indicating that radiation feedback is more efficient in metal-poor runs at suppressing the formation of lower-mass objects.
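For readers who want to reproduce the Figure 8 construction, the sketch below computes a local Jeans mass from density and temperature and accumulates the mass-weighted CDF. The Jeans-mass prefactor follows one common convention (the mass within a sphere of diameter equal to the Jeans length); other conventions differ by factors of order unity, and the mean molecular weight of 2.33 is an assumed value for molecular gas.

```python
import numpy as np

kB, mH, G = 1.380649e-16, 1.6726e-24, 6.674e-8  # cgs constants
mu = 2.33                                        # assumed mean molecular weight

def jeans_mass(rho, T):
    """Local Jeans mass from density and temperature, taking the mass within
    a sphere of diameter lambda_J = sqrt(pi cs^2 / (G rho)); other prefactor
    conventions differ by factors of order unity."""
    cs2 = kB * T / (mu * mH)                     # isothermal sound speed squared
    lam_J = np.sqrt(np.pi * cs2 / (G * rho))     # Jeans length
    return (4.0 * np.pi / 3.0) * rho * (lam_J / 2.0) ** 3

def cdf_vs_jeans_mass(cell_mass, rho, T, M_box, grid):
    """Fraction of the gas mass whose box-normalised Jeans mass falls below
    each grid value -- the CDF plotted in Figure 8."""
    mj = jeans_mass(rho, T) / M_box
    cell_mass = np.asarray(cell_mass, dtype=float)
    return np.array([cell_mass[mj < g].sum() for g in grid]) / cell_mass.sum()
```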
### The effect of outflow feedback As discussed in Paper I, radiation and outflow feedback play complementary roles in shaping the IMF: the former sets a lower bound on the masses of stars that can form, shaping the low-mass end of the IMF, while the latter inhibits the accumulation of mass by the most massive stars, shaping the high-mass end of the IMF. This difference explains why it is possible for the low-mass and high-mass parts of the CDF to shift in opposite directions, as we observe in Figure 6. In this case, if radiation feedback were the only actor in our simulations, then we would expect a uniform shift of the IMF towards higher masses in the metal-poor runs. The actual pattern of IMF shift with metallicity is more complex, indicating that protostellar outflow feedback is significant as well, and that its dependence on metallicity is not identical to that of radiation feedback. To understand how outflow feedback affects our runs we first look at Figure 9, which shows the evolution of the accretion rate-weighted mean surface escape speed in the runs as a function of SFE. This quantity matters because the outflow prescriptions we use in these simulations link the outflow velocity to the stellar surface escape speed, as is observed to be the case (Richer et al., 2000, and references therein) and as is expected theoretically (Konigl & Pudritz, 2000, and references therein), since outflows are launched from close to the stellar surface. See Cunningham et al. (2011) for a detailed description of our outflow prescription. From the plot, it is clear that \(v_{\rm esc}\) varies with surface density (run L has a higher surface escape speed than run M), but for a fixed surface density there is no systematic difference in the surface escape speed between the different metallicity runs; this is not surprising, given that the primary determinant of escape speed is the ratio of the time stars have had to contract toward the main sequence to the stellar Kelvin-Helmholtz timescale; the former depends on surface density but not on metallicity (since the simulation star formation rates are almost independent of metallicity), and the latter to good approximation depends on neither. Thus we find that at any given metallicity the L series of runs has a higher mean escape speed than the M series, since the L runs take longer to reach a given SFE, but that there is no difference with metallicity. Hence the differences we find in how outflows influence the IMF at different metallicity cannot be a result of differences in the outflow strength. Figure 4: Number of stars formed as a function of star formation efficiency. As in Figure 3, the left panel shows the L run series and the right panel shows the M series. Figure 5: The top panel shows the evolution of the median of the sink particle mass distribution for runs L (left) and M (right) at different metallicities in absolute mass expressed in M\({}_{\odot}\). The lower panel shows the evolution relative to the box mass \(M_{\rm box}\). Here the medians are calculated with respect to mass rather than number. Figure 3: Star formation efficiency as a function of time since the formation of the first star \(t_{\star}\), measured in units of the free-fall time \(t_{\rm ff}\). The left panel shows the L series of runs at different metallicities, and the right panel shows the corresponding results for the M series.
Instead, in order to understand how protostellar outflows are breaking the symmetry in the simulations, we next examine Figure 10, which shows the total kinetic energy, scalar momentum, and volume occupied by outflow material, normalised respectively to the box kinetic energy \(E_{\rm box}=M_{\rm box}\sigma_{\rm v}^{2}/2\) (where \(\sigma_{\rm v}\) is the initial 3D velocity dispersion), the box scalar momentum \(p_{\rm box}=M_{\rm box}\sigma_{\rm v}\), and the box volume \(V_{\rm box}=L^{3}\); we identify outflow material by a passive scalar that we add to the gas launched into outflows, and refer readers to Paper I for details on the procedure. Looking at this plot we see that in the metal-rich runs the outflowing material on average has more kinetic energy and scalar momentum, and occupies more volume, than in the more metal-poor runs at the same surface density. In Paper I we found that greater outflow momentum results in _less_ efficient fragmentation of the gas, because the outflows punch through their environments and escape more efficiently, therefore creating an environment more favourable to the formation of higher-mass stars. However, this does not appear to be the case here: the metal-rich runs have more outflow momentum and kinetic energy, but have slightly _fewer_ massive stars, as can be seen by examining the CDFs shown in Figure 6. One possible explanation is that the outflow momentum and kinetic energy vary between the L and M run series for different reasons than they vary with metallicity at fixed surface density. The outflow energy and momentum are larger in the L runs than the M ones because the L runs have more powerful outflows, due to the higher surface escape speeds achieved by stars that have longer to contract toward the main sequence. By contrast, the differences between the runs at different metallicities are almost certainly driven by the different cooling rates of the outflow gas and the dense cloud material with which it mixes. In the metal-poor runs, cooling times are longer due to the lower metal content of the gas, and as a result, regions of shocked cloud gas may build up that pressure-confine the outflows more effectively, in turn making outflows less efficient at breaking up the gas clumps. This allows slightly more massive stars to form at the high-mass end of the IMF, as we observe in Figure 6. ### Implications for the mass-to-light ratio in early-type galaxies One of the primary motivations of this study is to investigate the role of metallicity in the variations of the IMF that have been observed in ETGs. The mass-to-light ratio is the most direct line of evidence we can collect from our simulations to study these variations. In this section we explore the mass-to-light ratios of our simulations with varying metallicity. We use the slug stellar population synthesis code (da Silva et al., 2012; Krumholz et al., 2015) to generate isochrones at stellar population ages of 5 to 10 Gyr and at the metallicities used in the simulations (see Paper I for a detailed description of the procedure). We use the isochrones to calculate the mass-to-light ratio of the stellar populations formed in our simulations at ages of 5 to 10 Gyr in the SDSS \(r\) band, which is commonly used for measurements of \(M/L\) in ETGs.
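The \(M/L\) measurement just described reduces to interpolating an isochrone over the simulated stellar masses, together with the mismatch parameter \(\alpha\) defined in the next paragraph. The sketch below is an illustrative stand-in: it assumes tabulated (initial mass, \(r\)-band luminosity) pairs and, crudely, assigns zero light to stars beyond the isochrone's upper end; the paper's actual calculation uses the slug code.

```python
import numpy as np

def mass_to_light_r(masses, iso_mass, iso_Lr):
    """M/L of a simulated population in the SDSS r band at a fixed age.
    iso_mass, iso_Lr: isochrone table (initial mass -> r-band luminosity,
    in Msun and Lsun). Stars above the isochrone's upper mass limit are
    crudely treated as contributing no light (an assumption of this sketch)."""
    m = np.asarray(masses, dtype=float)
    Lr = np.where(m <= iso_mass.max(), np.interp(m, iso_mass, iso_Lr), 0.0)
    return m.sum() / Lr.sum()

def imf_mismatch_alpha(ml_sim, ml_chabrier):
    """IMF mismatch parameter: ratio of the simulated M/L to the M/L of a
    Solar-metallicity, Chabrier-IMF population of the same age."""
    return ml_sim / ml_chabrier
```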
The top panel of Figure 11 shows the mass-to-light evolution of the runs from age 5 to 10 Gyr, while the lower panel shows the IMF mismatch parameter \(\alpha\), defined as the ratio of the actual \(M/L\) ratio in the simulations to the \(M/L\) ratio expected for a population with a Solar metallicity and a Chabrier (2005) IMF at the same age; this latter quantity is the index most commonly used in observations to study IMF variations in ETGs, though we emphasise that the absolute value here is less significant than the differences between the runs, due to the systematic uncertainties. For both the absolute \(M/L\) and the IMF mismatch parameter, we see that, at any given metallicity, there is a difference of \(\sim 0.3-0.5\) dex in mass-to-light ratio between the L and M runs; this finding is consistent with that in Paper I for the Solar metallicity case, and extends the results to non-Solar metallicities. At a fixed surface density, by contrast, there is a smaller difference in the mass-to-light ratio between the runs, consistent with our finding that surface density is a more influential factor than metallicity when it comes to determining the IMF. Figure 6: Cumulative distribution function (CDF) of the sink particle masses in the simulations; the top panels show run L at different metallicities, and the bottom panels show run M at different metallicities. The left side is the CDF with respect to absolute stellar mass and the right side is the CDF for mass measured relative to the box mass \(M_{\rm box}\). In both the L and M series, lower metallicity runs produce a slightly more top-heavy IMF than the higher metallicity runs. Moreover, Figure 11 in fact somewhat exaggerates the effect of metallicity, because some of the difference in mass-to-light ratio between the differing metallicity runs occurs because the metallicity itself affects stellar evolution and atmospheres; thus we would not expect identical \(M/L\) values for two populations of different metallicity even if they had exactly the same IMF. To remove this effect, in Figure 12 we show the same quantities as in Figure 11, but where we have calculated the mass-to-light ratio for all runs using the isochrones for Solar metallicity; while this is clearly artificial, it enables us to isolate the effects of metallicity on the IMF from the effects on stellar evolution and atmospheres. In Figure 12 the runs at different metallicity but the same surface density cluster even more tightly, and are further from the corresponding runs at different surface density, than in Figure 11. This reinforces our conclusion that the effects of metallicity on the IMF and the observational diagnostics used to assess it are fairly minor compared to the effects of surface density. Our qualitative conclusion that metallicity effects cannot be responsible for the IMF variations seen in ETGs is the same as that reached in a recent paper by Bate (2023), who explored the role of varying metallicity on the stellar properties at redshift \(z=5\). Bate found that at this redshift, with its higher CMB temperature, increasing metallicity leads to an increase in the characteristic mass of stars, exactly the opposite of what would be required to reproduce observed IMF variations in ETGs.
However, our conclusions differ somewhat in detail; Bate varied both the metallicity and the CMB temperature, whereas we, in keeping with our philosophy of controlled experiments, have varied only the former, a decision that is likely to be significant at \(z\gtrsim 3\), when the CMB temperature begins to exceed our adopted background IR radiation field temperature of 10 K. Thus there is no contradiction between the findings of Bate (2023) and of this work, since the experiments we have carried out are different. On the other hand, our finding that there is a very weak IMF variation with metallicity at fixed surface density is qualitatively consistent with the findings of Bate (2019), though that work explores only a single surface density case, one similar to our L series. ## 4 Conclusions In this paper, we present a set of radiation-magnetohydrodynamic simulations of star formation including radiation and protostellar outflow feedback from young stars. We carry out a systematic exploration of how the mass function of the stars formed in the simulations varies as a function of surface density and metallicity, exploring surface densities from those typical of Milky Way-like galaxies to those typical of starbursts or early-type galaxies, and metallicities that range from those typical of the most metal-poor local dwarfs, \(Z=Z_{\odot}/100\), to those typical of the centres of the most metal-rich early-types, \(Z=3Z_{\odot}\). Figure 7: The joint distribution of normalised density \(\rho/\rho_{0}\) and temperature \(T\) for runs L and M at different metallicity. The top row shows the state of the L runs and the bottom row shows the M runs; metallicity varies from 1% to 3\(\times\) Solar from left to right, as indicated above the columns. All plots show the state of the simulations when they reach 5% SFE. The colour bar shows the mass in each density-temperature bin. Dashed lines indicate loci of constant box-normalised Jeans mass \(M_{\rm J}/M_{\rm box}\); lines are spaced logarithmically at intervals of 0.5 dex, with the left-most line corresponding to \(\log M_{\rm J}/M_{\rm box}=-2\). The setup of the simulations allows us to separate out the effects of metallicity at fixed surface density and of surface density at fixed metallicity, and to identify the specific mechanisms by which these parameters interact with stellar feedback. This extends the results of Tanvir et al. (2022), who used the same methodology to explore the effects of surface density alone, without metallicity variation. As in that paper, the key to our approach is to set up a series of simulations where we keep all dimensionless parameters except the cloud optical depth (which depends on both surface density and metallicity) constant, so that, for isothermal gas in the absence of stellar feedback, all the simulations would just be rescaled versions of one another, and would produce identical results. Overall we see a trend whereby metal-poor cases produce a slightly heavier mass distribution over most of the IMF than metal-rich cases. However, this trend disappears or reverses at the lower mass end of the IMF, so that metal-rich cases also have a slightly narrower IMF overall. We attribute these shifts to the contrasting ways that metallicity variation interacts with different types of stellar feedback.
The high-mass end of the IMF is sensitive primarily to the effects of protostellar outflow feedback inhibiting the most massive objects from accreting, something that appears to happen slightly more efficiently when the metallicity is higher and cooling of outflow-shocked gas is more rapid, which suppresses the formation of the most massive objects. By contrast the remainder of the IMF is shaped primarily by radiation feedback suppressing fragmentation and preventing small objects from forming, something that appears to happen slightly more efficiently at lower metallicity. Figure 8: Cumulative distribution functions of mass with respect to \(M_{\rm J}/M_{\rm box}\) for the L run series (left) and the M run series (right). The dashed straight lines in each panel show the one-to-one relation, and indicate a minimum condition for fragmentation: for any mass \(M_{\rm J}/M_{\rm box}\) for which the CDF falls below the dashed line, there is less than a single Jeans mass of material with \(M_{\rm J}\) that small in the box, and thus it is impossible to create an object of that mass via gravitational collapse. Figure 10: Outflow kinetic energy, momentum, and volume relative to the box kinetic energy, momentum, and volume, all as a function of SFE. The top set of panels shows the L series of runs and the bottom shows the M series. Figure 9: Accretion rate-weighted mean surface escape speed of stars formed in the simulations as a function of SFE. The top panel shows the L series of runs at varying metallicity, and the bottom panel shows the M series. The net effect is that the IMF shifts to somewhat lower masses, and is also somewhat narrower, when the metallicity is higher. While these differences with metallicity provide interesting insight into how feedback interacts with the star-forming environment, they are ultimately of rather minor importance. This is because on an absolute scale the biggest differences we see in the IMF are set by variations in surface density. Our low surface density cases produce a slightly heavier IMF than our higher surface density ones, and the differences are substantially larger than the subtle variations induced by metallicity. When we explore how these IMF variations compare to those required to explain observed variations in the mass-to-light ratio in early-type galaxies compared to spiral galaxies, we find that metallicity-induced IMF changes are too small to explain the observations, whereas surface density-induced ones are at the right level. We, therefore, conclude that surface density is a more important factor than metallicity in determining the stellar IMF, and that the differences observed between the IMFs of spiral and early-type galaxies are most likely an effect of interstellar pressure and therefore surface density, not an effect of metallicity. ## Acknowledgements TST and MRK acknowledge support from the Australian Research Council through Laureate Fellowship award FL220100020. This research was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the Australian Government, and by resources provided by the Pawsey Supercomputing Research Centre with funding from the Australian Government and the Government of Western Australia. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2303.00067
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory
Many important properties of multi-agent systems refer to the participants' ability to achieve a given goal, or to prevent the system from an undesirable event. Among intelligent agents, the goals are often of epistemic nature, i.e., concern the ability to obtain knowledge about an important fact \phi. Such properties can be e.g. expressed in ATLK, that is, alternating-time temporal logic ATL extended with epistemic operators. In many realistic scenarios, however, players do not need to fully learn the truth value of \phi. They may be almost as well off by gaining some knowledge; in other words, by reducing their uncertainty about \phi. Similarly, in order to keep \phi secret, it is often insufficient that the intruder never fully learns its truth value. Instead, one needs to require that his uncertainty about \phi never drops below a reasonable threshold. With this motivation in mind, we introduce the logic ATLH, extending ATL with quantitative modalities based on the Hartley measure of uncertainty. The new logic enables to specify agents' abilities w.r.t. the uncertainty of a given player about a given set of statements. It turns out that ATLH has the same expressivity and model checking complexity as ATLK. However, the new logic is exponentially more succinct than ATLK, which is the main technical result of this paper.
Masoud Tabatabaei, Wojciech Jamroga
2023-02-28T20:09:50Z
http://arxiv.org/abs/2303.00067v3
# Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory ###### Abstract. Many important properties of multi-agent systems refer to the participants' ability to achieve a given goal, or to prevent the system from an undesirable event. Among intelligent agents, the goals are often of epistemic nature, i.e., concern the ability to obtain knowledge about an important fact \(\varphi\). Such properties can be e.g. expressed in **ATLK**, that is, alternating-time temporal logic **ATL** extended with epistemic operators. In many realistic scenarios, however, players do not need to fully learn the truth value of \(\varphi\). They may be almost as well off by gaining _some_ knowledge, in other words, by reducing their uncertainty about \(\varphi\). Similarly, in order to keep \(\varphi\) secret, it is often insufficient that the intruder never fully learns its truth value. Instead, one needs to require that his uncertainty about \(\varphi\) never drops below a reasonable threshold. With this motivation in mind, we introduce the logic **ATLH**, extending **ATL** with quantitative modalities based on the Hartley measure of uncertainty. The new logic makes it possible to specify agents' abilities w.r.t. the uncertainty of a given player about a given set of statements. It turns out that **ATLH** has the same expressivity and model checking complexity as **ATLK**. However, the new logic is exponentially more succinct than **ATLK**, which is the main technical result of this paper. Multiagent Systems, Knowledge Representation, Uncertainty + Footnote †: _Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), A. Ricci, W. Yeoh, N. Agmon, B. An (eds.), May 29 - June 2, 2023, London, United Kingdom._ © 2023 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved. ## 1. Introduction Many important properties of multi-agent systems refer to _strategic abilities_ of agents and their groups (Han et al., 2016; Goyal et al., 2017). They can be formalized in logics of strategic ability, such as alternating-time temporal logic **ATL**(Kal strategic reasoning with information-theoretic properties. The paper (Steintein and Steintein, 2017) comes closest, as it discusses the relation between a variant of resource-bounded temporal-epistemic logic and the Hartley measure. Moreover, our proposal is directly inspired by information-theoretic notions of security, cf. (Steintein and Steintein, 2017) for an introduction. Another strand of related works concerns quantitative specification and verification of MAS due to stochastic interaction (Steintein and Steintein, 2017; Steintein and Steintein, 2017), graded (Steintein and Steinteintein, 2017; Steintein and Steinteintein, 2018) and fuzzy strategic modalities (Steintein and Steintein, 2018; Stein and Steintein, 2018), or probabilistic beliefs about the opponents' response (Steintein and Steintein, 2018). Those papers considered neither knowledge nor information-theoretic properties, though (Steintein and Steintein, 2017) leaned in that direction by including a count over the accessible imperfect worlds. Succinctness of logical representations has been studied since the early 1970s (Steintein and Steintein, 2018). In particular, the relative succinctness of branching-time logics was investigated in (Bauer and Steintein, 2017; Stein and Steintein, 2018), and (Stein and Steintein, 2018) studied the succinctness of the strategic logic **ATL\({}^{*}\)** with past-time operators. The methodology of proving succinctness by means of _formula size games_ was proposed in (Bauer and Stein, 2018), and later generalized in (Steintein and Steintein, 2018). We adapt the latter approach to obtain our main technical result here. ## 2. Logics of Strategic Ability We first recapitulate the logical foundations that we chose for our approach. ### Alternating-Time Logic ATL _Alternating-time temporal logic_ **ATL**(Bauer and Stein, 2017; Stein and Steintein, 2018; Stein and Steintein, 2018) generalizes the branching-time temporal logic **CTL**(Stein and Steintein, 2018) by replacing the path quantifiers \(\mathsf{E}\), \(\mathsf{A}\) with _strategic modalities_ \(\langle\langle A\rangle\rangle\). Informally, \(\langle\langle A\rangle\rangle\gamma\) says that a group of agents \(A\) has a collective strategy to enforce the temporal property \(\gamma\). **ATL** formulas can include the temporal operators "\(X\)" ("in the next state"), "\(G\)" ("always from now on"), "\(F\)" ("now or sometime in the future"), and \(\mathcal{U}\) (strong "until"). **Syntax**. Formally, let \(Agt\) be a finite set of agents, and _Prop_ a countable set of atomic propositions. The language of **ATL** is defined as follows: \[\varphi\coloneqq\mathsf{p}\mid\neg\varphi\mid\varphi\land\varphi\mid\langle\!\langle A\rangle\!\rangle X\varphi\mid\langle\!\langle A\rangle\!\rangle G\varphi\mid\langle\!\langle A\rangle\!\rangle\varphi\,\mathcal{U}\,\varphi.\] where \(A\subseteq Agt\) and \(\mathsf{p}\in\textit{Prop}\). Derived boolean connectives and constants \((\lor,\top,\bot)\) are defined as usual. "Sometime" is defined as \(F\varphi\equiv\top\mathcal{U}\,\varphi\). ### Semantics of ATL **Models**. The semantics of **ATL** is defined over a variant of synchronous multi-agent transition systems. Let \(S=\langle Agt,St,Act,d,out\rangle\) be a concurrent game structure (CGS) such that: \(Agt=\{1,...,k\}\) is a set of agents (or players), \(St\) is the set of states of the system, \(Act\) is a set of actions, \(d:Agt\times St\to 2^{Act}\setminus\{\emptyset\}\) shows what actions are available to each player in each state, and \(out:St\times Act^{1}\times\ldots\times Act^{k}\to St\) is the transition function which, given a state and one action from each player in that state, returns the resulting state. A CGS together with a set of atomic propositions \(P\!v\) and a valuation function \(V:Pv\to 2^{St}\) is called a concurrent game model (CGM). A _pointed CGM_ is a pair \((M,q_{0})\) consisting of a concurrent game model \(M\) and an initial state \(q_{0}\) in \(M\). **Strategies and their outcomes.** Given a CGS, we define the strategies and their outcomes as follows. A _strategy_ for \(a\in Agt\) is a function \(s_{a}:St\to Act\) such that \(s_{a}(q)\in d(a,q)\).1 The set of such strategies is sometimes denoted by \(\Sigma_{a}^{Ir}\), with the capital "\(\mathsf{I}\)" referring to perfect **I**nformation, and the lowercase "\(\mathsf{r}\)" for possibly imperfect **r**ecall. A _collective strategy_ for a group of agents \(A=\{a_{1},\ldots,a_{r}\}\) is a tuple of individual strategies \(s_{A}=\langle s_{a_{1}},\ldots,s_{a_{r}}\rangle\). The set of such strategies is denoted by \(\Sigma_{A}^{Ir}\). Footnote 1: This corresponds to the notion of _memoryless_ or _positional_ strategies. In other words, we assume that the memory of agents is explicitly defined by the states of the model.
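To make the definitions above concrete, here is a minimal sketch of a CGS and a memoryless strategy. The two-agent example, and all state and action names, are illustrative inventions, not structures from the paper.

```python
# A toy concurrent game structure in the sense defined above.
Agt = ("a", "b")                   # Agt = {1, ..., k}, here k = 2
St = {"q0", "q1"}                  # states of the system
d = {("a", "q0"): {"wait", "go"},  # d(agent, state): non-empty set of available actions
     ("b", "q0"): {"wait"},
     ("a", "q1"): {"wait"},
     ("b", "q1"): {"wait"}}

def out(q, joint):
    """Transition function out: St x Act^1 x ... x Act^k -> St; `joint` is
    one action per agent, ordered as in Agt."""
    return "q1" if (q == "q0" and joint == ("go", "wait")) else q

# A memoryless (positional) strategy is a map St -> Act respecting d:
def is_strategy(s, agent):
    return all(s[q] in d[(agent, q)] for q in St)

s_a = {"q0": "go", "q1": "wait"}   # a strategy for agent "a"
assert is_strategy(s_a, "a")
```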
The methodology of proving succinctness by means of _formula size games_ was proposed in (Bauer and Stein, 2018), and later generalized in (Steintein and Steintein, 2018). We adapt the latter approach to obtain our main technical result here. ## 2. Logics of Strategic Ability We first recapitulate the logical foundations that we chose for our approach. ### Alternating-Time Logic ATL _Alternating-time temporal logic_ **ATL** (Bauer and Stein, 2017; Stein and Steintein, 2018; Stein and Steintein, 2018) generalizes the branching-time temporal logic **CTL** (Stein and Steintein, 2018) by replacing the path quantifiers \(\mathsf{E}\), \(\mathsf{A}\) with _strategic modalities_ \(\langle\!\langle A\rangle\!\rangle\). Informally, \(\langle\!\langle A\rangle\!\rangle\gamma\) says that a group of agents \(A\) has a collective strategy to enforce the temporal property \(\gamma\). **ATL** formulas can include the temporal operators \(X\) ("in the next state"), \(G\) ("always from now on"), \(F\) ("now or sometime in the future"), and \(\mathcal{U}\) (strong "until"). **Syntax.** Formally, let \(Agt\) be a finite set of agents, and _Prop_ a countable set of atomic propositions. The language of **ATL** is defined as follows: \[\varphi\coloneqq\mathsf{p}\mid\neg\varphi\mid\varphi\land\varphi\mid\langle \!\langle A\rangle\!\rangle X\varphi\mid\langle\!\langle A\rangle\!\rangle G \varphi\mid\langle\!\langle A\rangle\!\rangle\varphi\,\mathcal{U}\,\varphi,\] where \(A\subseteq Agt\) and \(\mathsf{p}\in\textit{Prop}\). Derived boolean connectives and constants \((\lor,\top,\bot)\) are defined as usual. "Sometime" is defined as \(F\varphi\equiv\top\,\mathcal{U}\,\varphi\). ### Semantics of ATL **Models.** The semantics of **ATL** is defined over a variant of synchronous multi-agent transition systems. Let \(S=\langle Agt,St,Act,d,out\rangle\) be a concurrent game structure (CGS) such that: \(Agt=\{1,...,k\}\) is a set of agents (or players), \(St\) is the set of states of the system, \(Act\) is a set of actions, \(d:Agt\times St\to 2^{Act}\setminus\{\emptyset\}\) shows what actions are available to each player in each state, and \(out:St\times Act^{k}\to St\) is the transition function which, given a state and one action from each player in that state, returns the resulting state. A CGS together with a set of atomic propositions _Prop_ and a valuation function \(V:\textit{Prop}\to 2^{St}\) is called a concurrent game model (CGM). A _pointed CGM_ is a pair \((M,q_{0})\) consisting of a concurrent game model \(M\) and an initial state \(q_{0}\) in \(M\). **Strategies and their outcomes.** Given a CGS, we define the strategies and their outcomes as follows. A _strategy_ for \(a\in Agt\) is a function \(s_{a}:St\to Act\) such that \(s_{a}(q)\in d(a,q)\).1 The set of such strategies is sometimes denoted by \(\Sigma_{a}^{Ir}\), with the capital "\(I\)" referring to perfect **I**nformation, and the lowercase "\(r\)" for possibly imperfect **r**ecall. A _collective strategy_ for a group of agents \(A=\{a_{1},\ldots,a_{r}\}\) is a tuple of individual strategies \(s_{A}=\langle s_{a_{1}},\ldots,s_{a_{r}}\rangle\). The set of such strategies is denoted by \(\Sigma_{A}^{Ir}\). Footnote 1: This corresponds to the notion of _memoryless_ or _positional_ strategies. In other words, we assume that the memory of agents is explicitly encoded in the states of the model. A _path_ \(\lambda=q_{0}q_{1}q_{2}\ldots\) in a CGS is an infinite sequence of states such that there is a transition between each \(q_{i}\) and \(q_{i+1}\). 
\(\lambda[i]\) denotes the \(i\)th position on \(\lambda\) (starting from \(i=0\)) and \(\lambda[i,\infty]\) the suffix of \(\lambda\) starting at \(i\). The "outcome" function \(out(q,s_{A})\) returns the set of all paths that can occur when agents \(A\) execute strategy \(s_{A}\) from state \(q\) onward, defined as follows: \(out(q,s_{A})=\{\lambda=q_{0}q_{1}q_{2}\ldots\mid q_{0}=q\) and for each \(i=0,1,\ldots\) there exists \(\langle a^{i}_{a_{1}},\ldots,a^{i}_{a_{k}}\rangle\) such that \(a^{i}_{a}\in d(a,q_{i})\) for every \(a\in Agt\), and \(a^{i}_{a}=s_{A}[a](q_{i})\) for every \(a\in A\), and \(q_{i+1}=out(q_{i},a^{i}_{a_{1}},\ldots,a^{i}_{a_{k}})\}\). **Semantic clauses.** The semantics of **ATL** is defined by the following clauses: \(M,q\models\mathsf{p}\) iff \(q\in V(\mathsf{p})\), for \(\mathsf{p}\in\textit{Prop}\); \(M,q\models\neg\varphi\) iff \(M,q\not\models\varphi\); \(M,q\models\varphi_{1}\wedge\varphi_{2}\) iff \(M,q\models\varphi_{1}\) and \(M,q\models\varphi_{2}\); \(M,q\models\langle\!\langle A\rangle\!\rangle X\varphi\) iff there is a strategy \(s_{A}\in\Sigma_{A}^{Ir}\) such that, for each path \(\lambda\in\textit{out}(q,s_{A})\), we have \(M,\lambda[1]\models\varphi\); \(M,q\models\langle\!\langle A\rangle\!\rangle G\varphi\) iff there is a strategy \(s_{A}\in\Sigma_{A}^{Ir}\) such that, for each path \(\lambda\in\textit{out}(q,s_{A})\), we have \(M,\lambda[i]\models\varphi\) for all \(i\geq 0\); \(M,q\models\langle\!\langle A\rangle\!\rangle\varphi_{1}\,\mathcal{U}\,\varphi_{2}\) iff there is a strategy \(s_{A}\in\Sigma_{A}^{Ir}\) such that, for each path \(\lambda\in\textit{out}(q,s_{A})\), we have \(M,\lambda[i]\models\varphi_{2}\) for some \(i\geq 0\) and \(M,\lambda[j]\models\varphi_{1}\) for all \(0\leq j<i\). ### Imperfect Information and Knowledge Realistic-agent interaction always includes some degree of limited observability. Here, we use the classical variant of "**ATL** with imperfect information", defined as follows. We extend concurrent game structures with indistinguishability relations \(\sim_{a}\), one for each \(a\in Agt\). The resulting structure \(S=\langle Agt,St,\{\sim_{a}\mid a\in Agt\},Act,d,out\rangle\) is called a concurrent epistemic game structure (CEGS). A CEGS together with a set of atomic propositions _Prop_ and a valuation function \(V:\textit{Prop}\to 2^{St}\) is called a concurrent epistemic game model (CEGM). Strategies of agents must specify identical choices in indistinguishable situations. That is, strategies with imperfect information (_ir_ strategies, for short) are functions \(s_{a}:St\to Act\) such that (1) \(s_{a}(q)\in d(a,q)\), and (2) if \(q\sim_{a}q^{\prime}\) then \(s_{a}(q)=s_{a}(q^{\prime})\).2 As before, collective strategies for \(A\subseteq Agt\) are tuples of individual strategies for \(a\in A\). We denote the set of \(A\)'s imperfect information strategies by \(\Sigma_{A}^{ir}\) (with the lowercase "\(i\)" for imperfect information). Footnote 2: Again, we consider only positional strategies here. The semantics of "**ATL** with imperfect information" (**ATL\({}_{ir}\)**) differs from the one presented in Section 2.1 only in that the strategies are taken from \(\Sigma_{A}^{ir}\) instead of \(\Sigma_{A}^{Ir}\). In other words, the agents in \(A\) should have an executable strategy which enforces \(\varphi\) from all the states that at least one member of the coalition considers possible. 
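Since the models are finite and the strategies positional, the semantic clauses above can be checked by brute force. The following is a minimal sketch for \(\langle\!\langle A\rangle\!\rangle G\varphi\) under perfect information; all names here (`CGM`, `enforces_G`, the dictionary-based action profiles) are our own illustrative conventions, not notation from the paper.

```python
from itertools import product

# Minimal concurrent game model: states, available actions d(a, q),
# deterministic transition out(q, action_profile), and a valuation V.
class CGM:
    def __init__(self, agents, states, d, out, V):
        self.agents, self.states, self.d, self.out, self.V = agents, states, d, out, V

def memoryless_strategies(M, a):
    """All positional strategies s_a: St -> Act with s_a(q) in d(a, q)."""
    choices = [M.d(a, q) for q in M.states]
    for combo in product(*choices):
        yield dict(zip(M.states, combo))

def enforces_G(M, q0, A, phi):
    """Brute-force check of <<A>>G phi (perfect information, memoryless).

    For each joint strategy of A, follow every transition the opponents can
    force; the property holds iff some strategy keeps all reachable states
    inside phi (phi is given as a state predicate).
    """
    others = [a for a in M.agents if a not in A]
    for joint in product(*(memoryless_strategies(M, a) for a in A)):
        s_A = dict(zip(A, joint))
        reached, frontier, ok = set(), {q0}, True
        while frontier and ok:
            q = frontier.pop()
            if q in reached:
                continue
            reached.add(q)
            if not phi(q):
                ok = False
                break
            for opp in product(*(M.d(b, q) for b in others)):
                profile = {**dict(zip(others, opp)), **{a: s_A[a][q] for a in A}}
                frontier.add(M.out(q, profile))
        if ok:
            return True
    return False
```

Here \(\varphi\) would be supplied as a predicate such as `lambda q: q in M.V['p']`. Enumerating all memoryless strategies is exponential in \(|St|\), so practical model checkers avoid it; the sketch only mirrors the semantic clauses literally.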
_Alternating-time temporal epistemic logic_ **ATLK** adds the knowledge modality of _multi-agent epistemic logic_ to **ATL** with imperfect information. In multi-agent epistemic logic, the _knowledge_ of the agents is formalised by epistemic formulae of type \(\mathcal{K}_{a}\varphi\), stating that agent \(a\) _knows that \(\varphi\) holds_, with the following semantics: \(M,q\models\mathcal{K}_{a}\varphi\) iff, for every state \(q^{\prime}\) such that \(q\sim_{a}q^{\prime}\), we have that \(M,q^{\prime}\models\varphi\). The formula stating mutual knowledge \(E_{A}\varphi\) (read "everybody in \(A\) knows that \(\varphi\)") is defined as: \[M,q\models E_{A}\varphi\text{ iff }M,q\models\mathcal{K}_{a}\varphi\text{ for all }a\in A.\] ## 3. Motivating Example In this section we show, using an example, how our proposed logic can express more refined epistemic goals of agents using considerably more concise formulas. As we will see, not only is the new formulation of these epistemic properties significantly shorter; the interpretation of the formulas is also easier to understand in comparison to their analogous formulas in **ATLK**. ### Coercion in Referendums We consider a very simple scenario of an election with a single voter and a single coercer. The election is a referendum, in the sense that each voter has to either vote for an issue in question or to vote against it. We consider two variants. In the first one there is only one issue put for referendum (we call it proposal \(A\)). The model consists of two agents, the voter \(v\) and the coercer \(c\). The set of possible actions for the coercer in the model is \(\{e\}\) and for the voter is \(\{voteA,vote\overline{A},e\}\), where \(e\) represents a _null_ action (the action of doing nothing). \(voteA\) and \(vote\overline{A}\) respectively represent voting for and against the proposal \(A\). The atomic proposition \(V_{A}\) states that the vote is cast in favor of proposal \(A\), while the atomic proposition \(Voted\) shows that the vote is already cast. The game model is depicted in Figure 1. The valuations of atomic propositions are depicted in blue, and the red dashed line between \(s_{1}\) and \(s_{2}\) shows that these two states are indistinguishable for player \(c\). In this simple model, we might express the property of coercion resistance in **ATLK** as follows: \[M,s_{0}\models\langle\!\langle v\rangle\!\rangle F(Voted\wedge V_{A}\wedge G\neg(\mathcal{K}_{c}V_{A}\lor\mathcal{K}_{c}\neg V_{A}))\] \[\wedge\langle\!\langle v\rangle\!\rangle F(Voted\wedge\neg V_{A}\wedge G\neg(\mathcal{K}_{c}V_{A}\lor\mathcal{K}_{c}\neg V_{A}))\] The formula states that the voter has a strategy to vote for the proposal \(A\) or against it, in a way that in either case the coercer does not know the value of the vote. ### Referendum with Multiple Proposals Consider a more sophisticated variant in which the voter participates in a double referendum, i.e., votes on two proposals \(A\) and \(B\) on a single ballot. The set of atomic propositions in this scenario is \(\{Voted,V_{A},V_{B}\}\) and the set of actions for the voter is \(\{voteAB,vote\overline{A}B,voteA\overline{B},vote\overline{A}\overline{B},e\}\). 
For expressing the property of coercion resistance in this scenario, a seemingly reasonable way is to extend the above formulas as below: \[M,s_{0}\models\langle\!\langle v\rangle\!\rangle F(Voted\wedge V_{A}\wedge V_{B}\wedge G\neg(\mathcal{K}_{c}V_{A}\lor\mathcal{K}_{c}\neg V_{A}\lor\mathcal{K}_{c}V_{B}\lor\mathcal{K}_{c}\neg V_{B}))\] \[\wedge\langle\!\langle v\rangle\!\rangle F(Voted\wedge V_{A}\wedge\neg V_{B}\wedge G\neg(\mathcal{K}_{c}V_{A}\lor\mathcal{K}_{c}\neg V_{A}\lor\mathcal{K}_{c}V_{B}\lor\mathcal{K}_{c}\neg V_{B}))\] \[\wedge\langle\!\langle v\rangle\!\rangle F(Voted\wedge\neg V_{A}\wedge V_{B}\wedge G\neg(\mathcal{K}_{c}V_{A}\lor\mathcal{K}_{c}\neg V_{A}\lor\mathcal{K}_{c}V_{B}\lor\mathcal{K}_{c}\neg V_{B}))\] \[\wedge\langle\!\langle v\rangle\!\rangle F(Voted\wedge\neg V_{A}\wedge\neg V_{B}\wedge G\neg(\mathcal{K}_{c}V_{A}\lor\mathcal{K}_{c}\neg V_{A}\lor\mathcal{K}_{c}V_{B}\lor\mathcal{K}_{c}\neg V_{B}))\] The formula states that the voter can vote in any combination, for or against \(A\) or \(B\), without the coercer knowing the value of any single vote. At first glance these security properties seem to be strong enough for capturing the desirable property of coercion resistance. However, if we look at the two models in Figure 2, both of them satisfy the property above. On the other hand, we would consider model \(M_{1}\) less secure than \(M_{2}\). There are 4 possible combinations of the valuations of \(V_{A}\) and \(V_{B}\). In \(M_{2}\), the coercer considers all 4 of them as plausible, but in \(M_{1}\) he could narrow that down to only two possible combinations. In other words, the uncertainty of the coercer about propositions \(V_{A}\) and \(V_{B}\) is higher in \(M_{2}\) than in \(M_{1}\). In fact, as we shall see later, it is possible to write a formula in the language of **ATLK** that keeps the above property and yet distinguishes \(M_{1}\) and \(M_{2}\). But if we want to write a security property in **ATLK** that rejects all the models where the coercer has more distinguishing power over states \(s_{1}\) to \(s_{4}\) than in the model \(M_{2}\), then the length of that formula would be very large - in the worst case, even exponential in the number of distinguishing properties. ### Reasoning about Uncertainty One way of looking at the above situation is that, when reaching any of the states \(s_{1}\) to \(s_{4}\), we want the coercer to have the least amount of information, or in other words the maximum uncertainty about the possible values of \(V_{A}\) and \(V_{B}\). To express this concept, we can use one of the well known quantitative measures of uncertainty. Two measures that come to mind are Shannon entropy and the Hartley measure. Choosing Shannon entropy would be meaningful only if we knew the intrinsic probabilities of each state. Figure 1. Single issue referendum with one voter and one coercer. However, in the models that we are using, and in scenarios similar to the one above, what we are interested in is the uncertainty of the agents about the different possible outcomes of a set of properties (here \(V_{A}\) and \(V_{B}\)). We recall the definition of the Hartley measure below: **Definition 3.1** (Hartley measure of uncertainty [29]).: If \(A\) is a set of possible outcomes, then its Hartley measure is defined by \(H(A)=\log|A|\). The Hartley function coincides with Shannon entropy when ignorance can be modeled by the uniform probability distribution. Using this measure, what we want to specify as a security property in the example above is that the uncertainty of the coercer about the values of \(V_{A}\) and \(V_{B}\) should be maximal. 
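As a minimal illustration of Definition 3.1 (our own snippet, with base-2 logarithms so that the measure is expressed in bits):

```python
import math

def hartley(outcomes):
    """Hartley measure of a set of possible outcomes: H(A) = log2 |A|, in bits."""
    return math.log2(len(outcomes))

# Four admissible valuation combinations of {V_A, V_B} give 2 bits of uncertainty:
print(hartley([(0, 0), (0, 1), (1, 0), (1, 1)]))  # -> 2.0
```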
The set of properties of interest \(\{V_{A},V_{B}\}\) could have \(2^{2}=4\) different combinations of values. Therefore, if we want the coercer to consider all of these combinations as possible, the Hartley measure of uncertainty of the coercer would be \(\log 4=2\) bits. To express this, we add a new operator \(\mathcal{H}\), and write the formula: \[\langle\!\langle v\rangle\!\rangle F(Voted\wedge\mathcal{H}_{c}^{\geq 2}\{V_{A},V_{B}\})\] The formula states that the voter has a strategic ability to eventually cast her vote, while keeping the uncertainty of the coercer about the valuations of \(V_{A}\) and \(V_{B}\) at the level of at least 2 bits. Intuitively, the formula holds in state \(s_{1}\) of model \(M_{2}\), but not \(M_{1}\). In the next section, we use this idea to formalize the syntax and semantics of the logic **ATLH**. ## 4. ATL with Uncertainty In this section we define the syntax and semantics of the logic of strategic abilities with an uncertainty operator, **ATLH**. The logic is based on the idea of using the Hartley measure to quantify the uncertainty of agents about the possible valuations of a set of formulas. Similarly to **ATLK**, the semantics of **ATLH** is defined with respect to concurrent epistemic game models (CEGM). ### Syntax The syntax of **ATLH** is given as follows: \[\varphi:=\mathsf{p}\mid\neg\varphi\mid\varphi\wedge\varphi\mid\langle\!\langle A\rangle\!\rangle X\varphi\mid\langle\!\langle A\rangle\!\rangle G\varphi\mid\langle\!\langle A\rangle\!\rangle\varphi\:\mathcal{U}\:\varphi\mid\mathcal{H}_{a}^{\otimes m}\beta,\] where \(A\in 2^{Agt}\) is a set of players, \(\beta\) is a finite nonempty set of formulas, \(a\in Agt\) is a player, \(m\in\mathbb{R}\), and \(\otimes\in\{<,\leq,>,\geq,=\}\) is a comparison operator. For instance, the formula \(\mathcal{H}_{a}^{>m}\beta\) states that the uncertainty of agent \(a\) about the set of formulas \(\beta\) is higher than \(m\). ### Semantics Let \([q]_{\sim_{a}}=\{q^{\prime}\in St\mid q^{\prime}\sim_{a}q\}\) denote the equivalence class of state \(q\in St\) with respect to the relation \(\sim_{a}\), i.e., the epistemic neighbourhood of \(q\) from the perspective of agent \(a\in Agt\). For a given formula \(\varphi\), we define the relation \(\sim^{\varphi}\subseteq St\times St\) that connects states with the same valuation of \(\varphi\): \[q_{1}\sim^{\varphi}q_{2}\text{ iff }M,q_{1}\models\varphi\Leftrightarrow M,q_{2}\models\varphi.\] If \(\beta=\{\varphi_{1},...,\varphi_{n}\}\) is a set of formulas and \(a\in Agt\), then we define \[\sim^{\beta}_{a}\;=\;\sim_{a}\cap\bigcap^{n}_{i=1}\sim^{\varphi_{i}}\] I.e., \(q_{1}\sim^{\beta}_{a}q_{2}\) iff \(q_{1},q_{2}\) look the same to \(a\) and cannot be discerned by any formula in \(\beta\). Note that \(\sim^{\beta}_{a}\) is an equivalence. We define \[R_{a,q}(\beta)=\{[q^{\prime}]_{\sim^{\beta}_{a}}\mid q^{\prime}\sim_{a}q\}\] for the set of equivalence classes of \(\sim^{\beta}_{a}\) contained in the epistemic neighbourhood of state \(q\). Then, the truth value of the statement "agent \(a\)'s uncertainty about the set of formulas \(\beta\) is in \(\otimes\) relation to value \(m\in\mathbb{R}\)" is defined as follows: \[M,q\models\mathcal{H}_{a}^{\otimes m}\beta\text{ iff }\log|R_{a,q}(\beta)|\otimes m\] 
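The semantics of \(\mathcal{H}_{a}^{\otimes m}\beta\) can be prototyped directly from these definitions. The sketch below is ours: it assumes the epistemic neighbourhoods and a truth oracle for the formulas in \(\beta\) are given (e.g. from a prior labelling), and groups the neighbourhood of \(q\) by the valuation signature of \(\beta\), which yields exactly the classes in \(R_{a,q}(\beta)\) because \(\sim_{a}^{\beta}\) is an equivalence.

```python
import math
import operator

# `neigh[a][q]` is the epistemic neighbourhood [q]_{~a}; `truth(q, phi)`
# decides formula phi at state q. Names are illustrative, not the paper's.
OPS = {'<': operator.lt, '<=': operator.le, '>': operator.gt,
       '>=': operator.ge, '=': operator.eq}

def R(q, a, beta, neigh, truth):
    """Equivalence classes of ~_a^beta that partition the neighbourhood [q]_{~a}."""
    classes = {}
    for q2 in neigh[a][q]:
        sig = tuple(truth(q2, phi) for phi in beta)   # valuation of beta at q2
        classes.setdefault(sig, set()).add(q2)
    return list(classes.values())

def holds_H(q, a, beta, op, m, neigh, truth):
    """M,q |= H_a^{op m} beta  iff  log2 |R_{a,q}(beta)| op m."""
    return OPS[op](math.log2(len(R(q, a, beta, neigh, truth))), m)
```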
Some straightforward validities of **ATLH** are: 1. \(\models\mathcal{H}_{a}^{\geq m}\beta\rightarrow\mathcal{H}_{a}^{\geq m}\beta^{\prime}\), for all \(\beta\subseteq\beta^{\prime}\); 2. \(\models\mathcal{H}_{a}^{<m}\beta^{\prime}\rightarrow\mathcal{H}_{a}^{<m}\beta\), for all \(\beta\subseteq\beta^{\prime}\). Indeed, extending \(\beta\) can only refine the partition \(R_{a,q}(\beta)\), and hence can only increase the uncertainty value. Also, if \(|St|\) is the number of states in the model, then it holds that \(M,q\models\mathcal{H}_{a}^{\leq\min(|\beta|,\log(|St|))}\beta\). ### Model Checking In this section, we discuss model checking for **ATLH**. The following results have long been known in the literature: * Model checking of epistemic logic is in \(\mathbf{P}\) with respect to the size of the model and the length of the formula [28]. * Model checking of **ATLK** for agents with _ir_ strategies is \(\Delta_{2}^{\mathbf{P}}\)-complete with respect to the size of the model and the length of the formula [16]. This is a direct consequence of the fact that model checking of \(\mathbf{ATL}_{ir}\) is \(\Delta_{2}^{\mathbf{P}}\)-complete [33, 50]. Figure 2. Double referendum with one voter and one coercer. Model \(M_{1}\) depicts a scenario which is less secure than \(M_{2}\). In the following, we show that model checking of **ATLH** is also \(\Delta^{\mathbf{P}}_{2}\)-complete. To this end, it suffices to show that model checking of the uncertainty part of the language is in \(\mathbf{P}\). **Proposition 4.1**: _If \(\varphi\) is an **ATLH** formula without strategic and temporal operators and \(M\) is a CEGM, then checking if \(\varphi\) is satisfied in a state \(q\) of \(M\) can be done in polynomial time with respect to \(|\varphi|\) and \(|M|\), where \(|M|\) is the total number of states, transitions, and epistemic relation pairs in \(M\)._ Let \(\varphi_{1},\varphi_{2},...,\varphi_{k}\) be the subformulas of \(\varphi\) (which incrementally generate \(\varphi\)) listed in order of length. We can see that \(k\leq|\varphi|\), as there are at most \(|\varphi|\) subformulas of \(\varphi\). We start labeling each state in \(M\) in increasing order of \(i\), with labels \(\varphi_{i}\) or \(\neg\varphi_{i}\), depending on whether \(\varphi_{i}\) is true in that state or not. It is easy to see that we can do this in at most \(\mathbf{O}(k|M|)\) labeling steps. If the formula \(\varphi_{i}\) is a propositional combination of its subformulas, then it can be labeled in each state in constant time. In cases where \(\varphi_{i}\) is of the form \(\mathcal{H}^{\otimes m}_{a}\beta\) with \(\otimes\in\{<,\leq,>,\geq,=\}\) and \(\beta=\{\alpha_{1},...,\alpha_{k^{\prime}}\}\), we have that each \(\alpha_{j}\) is a subformula of \(\varphi_{i}\). Therefore, for labeling \(\varphi_{i}\) we construct the set of equivalence classes \(R_{a,q}(\beta)\) by checking the \(k^{\prime}\) labels of the formulas in \(\beta\) in all the states \(q^{\prime}\) where \(q^{\prime}\sim_{a}q\). Then we calculate \(\log|R_{a,q}(\beta)|\) and compare it with \(m\) in order to label \(\varphi_{i}\). This procedure can be done in at most \(\mathbf{O}(k^{\prime}|M|)\) steps. Therefore the whole process of checking whether \(\varphi\) is satisfied in a state \(q\) can be done in at most \(\mathbf{O}(|\varphi|^{2}|M|^{2})\) steps. 
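The labelling procedure from the proof can be rendered as a generic bottom-up loop. This is a schematic sketch of ours; `eval_at` stands for the per-state decision step described above (propositional cases in constant time, \(\mathcal{H}\)-cases by building \(R_{a,q}(\beta)\) from the labels of the formulas in \(\beta\), e.g. with the `R` function from the previous snippet):

```python
def label_model(states, subformulas, eval_at):
    """Bottom-up labelling as in the proof of Prop. 4.1 (schematic).

    `subformulas` must be listed in order of increasing length, so deciding
    each formula may consult only labels computed earlier.
    """
    labels = {q: set() for q in states}
    for phi in subformulas:            # at most |phi| subformulas
        for q in states:               # O(|subformulas| * |St|) labelling steps
            if eval_at(q, phi, labels):
                labels[q].add(phi)
    return labels
```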
**Proposition 4.2**: _Model checking of **ATLH** for agents with ir strategies is \(\Delta^{\mathbf{P}}_{2}\)-complete with respect to the size of the model and the length of the formula._ The lower bound follows from the fact that **ATLH** subsumes \(\mathbf{ATL}_{ir}\), and model checking \(\mathbf{ATL}_{ir}\) is \(\Delta^{\mathbf{P}}_{2}\)-hard. The upper bound is straightforward from Proposition 4.1 and the fact that model checking \(\mathbf{ATL}_{ir}\) is in \(\Delta^{\mathbf{P}}_{2}\). ## 5. Expressive power of ATLH In this section we show that **ATLH** and **ATLK** have the same expressive power. We start by recalling the semantic definition of comparative expressivity [57]. **Definition 5.1** (Expressivity).: Let \(L_{1}=\langle\Phi_{1},\models_{1},\mathbb{M}\rangle\) and \(L_{2}=\langle\Phi_{2},\models_{2},\mathbb{M}\rangle\) represent two logics, such that \(\Phi_{1}\) and \(\Phi_{2}\) are the sets of formulas defined in these logics, \(\mathbb{M}\) is a nonempty class of models (or pointed models) over which the logics are defined, and \(\models_{1}\) and \(\models_{2}\) are the truth relations of these logics, such that \(\models_{1}\subseteq\mathbb{M}\times\Phi_{1}\) and \(\models_{2}\subseteq\mathbb{M}\times\Phi_{2}\). We say that \(L_{2}\) is at least as expressive as \(L_{1}\) on the class of models \(\mathbb{M}\) iff for every formula \(\varphi_{1}\in\Phi_{1}\) there exists a formula \(\varphi_{2}\in\Phi_{2}\) such that for every \(M\in\mathbb{M}\) we have \(M\models_{1}\varphi_{1}\) iff \(M\models_{2}\varphi_{2}\). We write this as \(L_{1}\preceq_{\mathbb{M}}L_{2}\). If both \(L_{1}\preceq_{\mathbb{M}}L_{2}\) and \(L_{2}\preceq_{\mathbb{M}}L_{1}\), then we say that \(L_{1}\) and \(L_{2}\) are equally expressive on \(\mathbb{M}\), and write \(L_{1}=_{\mathbb{M}}L_{2}\). In the following, we use \(\models_{\mathcal{K}}\) and \(\models_{\mathcal{H}}\) to denote the semantic relations of **ATLK** and **ATLH**, respectively, whenever it might not be clear from the context. ### Knowledge as Uncertainty **Theorem 5.2**: _ATLH is at least as expressive as **ATLK**._ Because the set of formulas defined in **ATLH** includes all the formulas defined in **ATLK** except the formulas including the \(\mathcal{K}\) operator, and the semantics of the common formulas are the same in both logics, it suffices to prove that for any formula of type \(\varphi_{1}=\mathcal{K}_{a}\varphi\) in **ATLK** there is a formula \(\varphi_{2}\) in **ATLH** such that for every \(M\), \[M,q\models_{\mathcal{K}}\mathcal{K}_{a}\varphi\ \Leftrightarrow\ M,q\models_{\mathcal{H}}\varphi_{2}\] We claim that such a \(\varphi_{2}\) can be constructed from \(\mathcal{K}_{a}\varphi\) as \(\varphi_{2}=\varphi\wedge\mathcal{H}^{=0}_{a}\{\varphi\}\). Therefore we need to prove that: \[M,q\models_{\mathcal{K}}\mathcal{K}_{a}\varphi\ \Leftrightarrow\ M,q\models_{\mathcal{H}}\varphi\wedge\mathcal{H}^{=0}_{a}\{\varphi\}\] We have that \(M,q\models_{\mathcal{K}}\mathcal{K}_{a}\varphi\) if and only if \(\varphi\) holds in all the states indistinguishable from \(q\) for \(a\), which include state \(q\) itself. This means that \(\varphi\) holds in \(q\) and \(|R_{a,q}(\{\varphi\})|=1\), which in **ATLH** is expressed as \(M,q\models_{\mathcal{H}}\varphi\wedge\mathcal{H}^{=0}_{a}\{\varphi\}\). 
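The translation underlying Theorem 5.2 is purely syntactic. A minimal sketch over a tuple encoding of formulas (the encoding is ours, not the paper's):

```python
# Rewrite every K_a phi as phi /\ H_a^{=0}{phi}; formulas are nested tuples,
# e.g. ('K', 'a', 'p'), with ('H', agent, op, m, beta) for the H operator.
def K_to_H(f):
    if isinstance(f, tuple) and f[0] == 'K':
        _, a, phi = f
        phi = K_to_H(phi)
        return ('and', phi, ('H', a, '=', 0, (phi,)))
    if isinstance(f, tuple):
        return (f[0],) + tuple(K_to_H(x) if isinstance(x, tuple) else x
                               for x in f[1:])
    return f

# Example: K_a p becomes p /\ H_a^{=0}{p}
print(K_to_H(('K', 'a', 'p')))  # ('and', 'p', ('H', 'a', '=', 0, ('p',)))
```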
### Uncertainty as Knowledge **Theorem 5.3**: _ATLK is at least as expressive as **ATLH**._ The proof proceeds by translating every occurrence of \(\mathcal{H}^{\otimes m}_{a}\beta\) to a Boolean combination of epistemic formulas that express the knowledge of agent \(a\) with respect to the _indistinguishability classes_ of the formulas in \(\beta\), defined as follows: **Definition 5.4** (Indistinguishability class of a formula).: For a given model \(M\), if \(q\in St\), \(a\in Agt\) and \(\varphi\) is a formula, then we define the indistinguishability class of \(\varphi\) with respect to \(q\) and \(a\) as follows: \[[\varphi]^{q}_{a}=[\varphi]\cap[q]_{\sim_{a}},\] where \([q]_{\sim_{a}}\) denotes the set of states that are indistinguishable from \(q\) for \(a\), and \([\varphi]\) is the set of states \(q^{\prime}\in St\) where \(M,q^{\prime}\models\varphi\). The full proof is technical and rather tedious; it can be found in the appendix. Here, we present how the translation works on an example. Let \(\varphi_{1}\) and \(\varphi_{2}\) be two formulas that do not contain any \(\mathcal{H}\) operators. We would like to find an **ATLK** formula \(P(\varphi_{1},\varphi_{2})\) such that: \[M,q\models_{\mathcal{H}}\mathcal{H}^{=\log 3}_{a}\{\varphi_{1},\varphi_{2}\}\ \Leftrightarrow\ M,q\models_{\mathcal{K}}P(\varphi_{1},\varphi_{2})\] First we define new formulas \(A\), \(B\), \(C\) and \(D\) as follows: \(A=\varphi_{1}\wedge\varphi_{2}\), \(B=\varphi_{1}\wedge\neg\varphi_{2}\), \(C=\neg\varphi_{1}\wedge\varphi_{2}\) and \(D=\neg\varphi_{1}\wedge\neg\varphi_{2}\). It is clear that the sets of states \([A]^{q}_{a}\), \([B]^{q}_{a}\), \([C]^{q}_{a}\) and \([D]^{q}_{a}\) are mutually exclusive, and moreover they partition \([q]_{\sim_{a}}\). Because the truth value of each one of \(A\), \(B\), \(C\), \(D\) corresponds to exactly one of the four possible valuation combinations of \(\varphi_{1}\) and \(\varphi_{2}\), these classes are distinct. If \(M,q\models\mathcal{H}^{=\log 3}_{a}\{\varphi_{1},\varphi_{2}\}\), then exactly one of \([A]^{q}_{a}\), \([B]^{q}_{a}\), \([C]^{q}_{a}\) or \([D]^{q}_{a}\) has to be empty. Because these sets are mutually disjoint, if all were non-empty then we would have at least four different states in \([q]_{\sim_{a}}\) with the four different valuations of the formulas \(\varphi_{1}\) and \(\varphi_{2}\). This would mean that \(M,q\models\mathcal{H}^{=\log 4}_{a}\{\varphi_{1},\varphi_{2}\}\), which contradicts \(M,q\models\mathcal{H}^{=\log 3}_{a}\{\varphi_{1},\varphi_{2}\}\). Similarly, if more than one of \([A]^{q}_{a}\), \([B]^{q}_{a}\), \([C]^{q}_{a}\) or \([D]^{q}_{a}\) were empty, then only two or fewer possible valuation combinations of \(\varphi_{1}\) and \(\varphi_{2}\) would exist in \([q]_{\sim_{a}}\). This entails that \(M,q\not\models\mathcal{H}^{=\log 3}_{a}\{\varphi_{1},\varphi_{2}\}\), which is again a contradiction. The converse is also true: if exactly three of the sets \([A]^{q}_{a}\), \([B]^{q}_{a}\), \([C]^{q}_{a}\) and \([D]^{q}_{a}\) are non-empty, then there are exactly three valuation combinations of \(\varphi_{1}\) and \(\varphi_{2}\) in \([q]_{\sim_{a}}\), from which it follows that \(M,q\models\mathcal{H}^{=\log 3}_{a}\{\varphi_{1},\varphi_{2}\}\). So the formula \(\mathcal{H}^{=\log 3}_{a}\{\varphi_{1},\varphi_{2}\}\) holds at \(M,q\) if and only if exactly one of \([A]_{a}^{q}\), \([B]_{a}^{q}\), \([C]_{a}^{q}\) or \([D]_{a}^{q}\) is empty. 
This can happen in four different ways (one corresponding to each of \([A]_{a}^{q}\), \([B]_{a}^{q}\), \([C]_{a}^{q}\) or \([D]_{a}^{q}\) being empty). First consider the case where \([C]_{a}^{q}\) is empty. Then: \[\begin{split}\nexists q^{\prime}\ s.t.\ q^{\prime}\sim_{a}q&\text{ and }M,q^{\prime}\models C\\ \Leftrightarrow&(\forall q^{\prime},\ q^{\prime}\sim_{a}q\implies M,q^{\prime}\not\models C)\\ \Leftrightarrow&(\forall q^{\prime},\ q^{\prime}\sim_{a}q\implies M,q^{\prime}\models\neg C)\\ \Leftrightarrow& M,q\models\mathcal{K}_{a}\neg C\end{split}\] In a similar way we can show that \([A]_{a}^{q}\) is non-empty iff \(M,q\models\neg\mathcal{K}_{a}\neg A\). The same goes for \([B]_{a}^{q}\) and \([D]_{a}^{q}\). Therefore, among \([A]_{a}^{q}\), \([B]_{a}^{q}\), \([C]_{a}^{q}\) and \([D]_{a}^{q}\), only \([C]_{a}^{q}\) is empty iff: \[M,q\models\neg\mathcal{K}_{a}\neg A\land\neg\mathcal{K}_{a}\neg B\land\mathcal{K}_{a}\neg C\land\neg\mathcal{K}_{a}\neg D\] We got this result by assuming that only \([C]_{a}^{q}\) is empty. Given that \(M,q\models\mathcal{H}_{a}^{=\log 3}\{\varphi_{1},\varphi_{2}\}\) iff exactly one of \([A]_{a}^{q}\), \([B]_{a}^{q}\), \([C]_{a}^{q}\) or \([D]_{a}^{q}\) is empty, and knowing that we have four possible choices for which one is to be empty, we get that: \[M,q\models_{\mathcal{H}}\mathcal{H}_{a}^{=\log 3}\{\varphi_{1},\varphi_{2}\}\Leftrightarrow M,q\models_{\mathcal{K}}P(\varphi_{1},\varphi_{2}),\] where \(P(\varphi_{1},\varphi_{2})\) is defined as: \[\begin{split} P(\varphi_{1},\varphi_{2})=&(\mathcal{K}_{a}\neg(\varphi_{1}\wedge\varphi_{2})\land\neg\mathcal{K}_{a}\neg(\varphi_{1}\land\neg\varphi_{2})\\ &\land\neg\mathcal{K}_{a}\neg(\neg\varphi_{1}\wedge\varphi_{2})\land\neg\mathcal{K}_{a}\neg(\neg\varphi_{1}\land\neg\varphi_{2}))\\ \lor&(\neg\mathcal{K}_{a}\neg(\varphi_{1}\wedge\varphi_{2})\land\mathcal{K}_{a}\neg(\varphi_{1}\land\neg\varphi_{2})\\ &\land\neg\mathcal{K}_{a}\neg(\neg\varphi_{1}\wedge\varphi_{2})\land\neg\mathcal{K}_{a}\neg(\neg\varphi_{1}\land\neg\varphi_{2}))\\ \lor&(\neg\mathcal{K}_{a}\neg(\varphi_{1}\wedge\varphi_{2})\land\neg\mathcal{K}_{a}\neg(\varphi_{1}\land\neg\varphi_{2})\\ &\land\mathcal{K}_{a}\neg(\neg\varphi_{1}\wedge\varphi_{2})\land\neg\mathcal{K}_{a}\neg(\neg\varphi_{1}\land\neg\varphi_{2}))\\ \lor&(\neg\mathcal{K}_{a}\neg(\varphi_{1}\wedge\varphi_{2})\land\neg\mathcal{K}_{a}\neg(\varphi_{1}\land\neg\varphi_{2})\\ &\land\neg\mathcal{K}_{a}\neg(\neg\varphi_{1}\wedge\varphi_{2})\land\mathcal{K}_{a}\neg(\neg\varphi_{1}\land\neg\varphi_{2})).\end{split}\] 
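The general translation behind Theorem 5.3 follows the same pattern: guess which valuation combinations label non-empty classes, and encode the guess with (negated) \(\mathcal{K}_{a}\) modalities. A small generator sketch, in our own encoding and with our own function names:

```python
from itertools import combinations, product

def H_to_K(a, beta, k):
    """ATLK formula equivalent to H_a^{= log k} beta (sketch, after Thm 5.3).

    For each way of choosing exactly k of the 2^|beta| valuation combinations
    to be the non-empty classes, require ~K_a ~C for the chosen cells C and
    K_a ~C for the rest; disjoin over all choices.
    """
    cells = list(product([True, False], repeat=len(beta)))

    def cell(vals):  # conjunction fixing the valuation of every formula in beta
        return ('and',) + tuple(b if v else ('not', b) for b, v in zip(beta, vals))

    disjuncts = []
    for nonempty in combinations(cells, k):
        lits = tuple(('not', ('K', a, ('not', cell(v)))) if v in nonempty
                     else ('K', a, ('not', cell(v))) for v in cells)
        disjuncts.append(('and',) + lits)
    return ('or',) + tuple(disjuncts)

# For beta = ('p1', 'p2') and k = 3 this yields the four-disjunct formula P
# above; in general the output ranges over C(2^n, k) disjuncts of 2^n
# modalities each, hinting at the blow-up that Section 6 proves unavoidable.
```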
## 6. Uncertainty is exponentially more succinct than knowledge The notion of succinctness (Becker, 1995; 1996; 1997) is a refinement of the notion of expressivity. Assume that one particular property can be expressed in both languages \(L_{1}\) and \(L_{2}\), with formulas \(\varphi_{1}\) and \(\varphi_{2}\) respectively. When comparing the representational succinctness of these two languages, we are interested in whether there is a significant difference in the lengths of \(\varphi_{1}\) and \(\varphi_{2}\). Similarly to the analysis of complexity, what we consider _significant_ is at least exponential growth of the size of a formula in one of the languages compared to the equivalent formula in the other language. In this section, we prove that the language of **ATLH** is exponentially more succinct than **ATLK**. We use the so-called _formula size games (FSG)_ from (Krishnan, 2007) to construct the proof. In brief, we will show that for any \(n\in\mathbb{N}\), there is a formula \(\varphi_{n}\) of size linear in \(n\) in **ATLH**, such that for any formula \(\varphi_{n}^{\prime}\) in **ATLK** with the same set of satisfying models as \(\varphi_{n}\), the parse tree of \(\varphi_{n}^{\prime}\) will have at least \(2^{n}\) distinct nodes, and therefore the size of \(\varphi_{n}^{\prime}\) is at least exponential in \(n\). ### Succinctness and Formula Size Games Before showing that **ATL** with uncertainty is exponentially more succinct than **ATL** with knowledge, we summarize the basic terminology. Definition 6.1 (Length of formulas in **ATLH**).: The length of a formula \(\varphi\) is denoted by \(|\varphi|\) and recursively defined as follows: \(|\mathsf{p}|=1\); \(|\varphi_{1}\lor\varphi_{2}|=|\varphi_{1}\;\mathcal{U}\;\varphi_{2}|=|\varphi_{1}|+|\varphi_{2}|+1\); \(|\neg\varphi|=|X\varphi|=|G\varphi|=1+|\varphi|\); \(|\langle\!\langle A\rangle\!\rangle\varphi|=|A|+|\varphi|\); \(|\mathcal{H}_{a}^{\otimes m}\beta|=1+\sum_{\varphi_{i}\in\beta}|\varphi_{i}|\). Definition 6.2 (Succinctness).: Let \(L_{1}=\langle\Phi_{1},\models_{1},\mathbb{M}\rangle\) and \(L_{2}=\langle\Phi_{2},\models_{2},\mathbb{M}\rangle\) be two logics such that \(L_{1}\preceq_{\mathbb{M}}L_{2}\), and let \(f(n)=O(g(n))\) be a strictly increasing function. If for every \(n\in\mathbb{N}\) there are two formulas \(\alpha_{n}\in\Phi_{1}\) and \(\beta_{n}\in\Phi_{2}\) where: * \(|\alpha_{n}|=f(n)\), * \(|\beta_{n}|\geq 2^{g(n)}\), * \(\beta_{n}\) is the shortest formula in \(\Phi_{2}\) that is equivalent to \(\alpha_{n}\) on \(\mathbb{M}\), then we say that \(L_{1}\) is exponentially more succinct than \(L_{2}\) on \(\mathbb{M}\) and write \(L_{1}\ll_{\mathbb{M}}L_{2}\). 
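The length measure from Definition 6.1 translates directly into a recursive function. The sketch below uses the tuple encoding from the earlier snippets; the clause for \(\mathcal{K}_{a}\) (absent from Definition 6.1, which covers **ATLH** only) and the single \(+1\) for n-ary conjunctions are our own simplifying assumptions:

```python
def length(f):
    """Formula length per Definition 6.1, over our tuple encoding."""
    if isinstance(f, str):                 # atomic proposition: |p| = 1
        return 1
    op = f[0]
    if op in ('not', 'X', 'G'):            # |~phi| = |X phi| = |G phi| = 1 + |phi|
        return 1 + length(f[1])
    if op in ('or', 'and', 'U'):           # binary connectives: |phi1| + |phi2| + 1
        return 1 + sum(length(x) for x in f[1:])
    if op == 'coalition':                  # |<<A>> phi| = |A| + |phi|
        return len(f[1]) + length(f[2])
    if op == 'K':                          # our assumption: count K_a as unary
        return 1 + length(f[2])
    if op == 'H':                          # |H_a^{(x) m} beta| = 1 + sum |phi_i|
        return 1 + sum(length(phi) for phi in f[4])
    raise ValueError(f'unknown operator {op}')
```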
The game tree through which the spoiler wins the FSG is the parse tree of a formula \(\varphi\) in the language of _multi-agent epistemic logic_. For any \(\langle A\circ B\rangle\), the set of all closed game trees with root \(\langle A\circ B\rangle\) is denoted by \(T(\langle A\circ B\rangle)\). Consequently, the set of closed trees also represents the set of all formulas \(\varphi\) that could distinguish the set of pointed models \(A\) from the set of pointed models \(B\) via the truth relation \(\models_{\textit{MEL}}\). ### ATLH Is More Succinct than ATLK Theorem 6.4 allows us to use FSG for proving the succinctness of our new logic **ATLH** with respect to **ATLK**. Theorem 6.5 ().: _The logic **ATLH** is exponentially more succinct than the logic **ATLK**._ The full proof is rather technical; it can be found in the appendix. Here we explain the sketch of the proof: Proof sketch.: Let \(Lang(\textbf{ATLK})\) and \(Lang(\textbf{ATLH})\) represent the languages of the logics **ATLK** and **ATLH** respectively. For every \(n\in\mathbb{N}\), we define a formula \(\varphi_{n}\in Lang(\textbf{ATLH})\) where \(f(n)=n\). Then, we define two sets of pointed models \(A_{n}\) and \(B_{n}\), such that \(A_{n}\models_{\mathcal{H}}\varphi_{n}\) and \(B_{n}\models_{\mathcal{H}}\neg\varphi_{n}\). The formula size game played on \(\langle A_{n}\circ B_{n}\rangle\) then shows that every closed game tree, and hence the parse tree of every **ATLK** formula distinguishing \(A_{n}\) from \(B_{n}\), has at least \(2^{n}\) distinct nodes, which yields the claimed exponential lower bound. 
## 7. Case Study: The ThreeBallot Voting Protocol ### Model of the Scenario In our example, the two propositions that determine the vote of the voter are \(V_{A}\) and \(V_{B}\). We encode a vote against an issue by using an _overline_. So, \(\overline{A}B\) indicates a vote for issue \(B\) and against issue \(A\); in other words, it states that \(V_{A}\) is _false_ and \(V_{B}\) is _true_. This way, the set of possible votes from the voter is \(Votes=\{AB,\overline{A}B,A\overline{B},\overline{A}\overline{B}\}\). Similarly, we encode a filled space by \(F\) and a blank (not filled) space by \(B\). For instance, a \(\{Blank,Filled\}\) ballot is denoted by \(BF\). Figure 3 depicts an example ThreeBallot card and the resulting ballots. We consider a scenario with two voters and a two-issue referendum with issues \(A\) and \(B\). Let us call the first voter _the voter_, as she will be the one that we are focusing on in this example. We call the second voter _the other voter_. During the election, the voter has several choices. The first is what vote she is going to cast. Then for each possible vote, there are various ways that the ballots can be filled, which will result in different ballot sets. After that, the voter has the choice of which of the ballots to keep a copy of as the receipt. In our scenario, there exists a coercer who after the election will force the voter to reveal her receipt. The coercer then tries to infer the actual value of the voter's vote, based on the receipt and the published bulletin board. Table 1 shows how the different choices of the voter affect the possible indistinguishability set of the coercer about the value of the vote, after the receipt has been revealed. The different indistinguishability sets in each row result from the various ways in which the other voter might fill his ballot. ### Analysis: Epistemic Security The coercion resistance security property is usually framed following the idea that the coercer cannot _get to know_ how the voter has voted, even if the voter cooperates with the coercer. In (Sen et al., 2016), various nuances of coercion resistance are formulated in the logic **ATLK**. In a similar way, we can use **ATLK** to express coercion resistance in our ThreeBallot example as follows: \[\bigwedge_{V_{i}\in Votes}\neg\langle\!\langle v,c\rangle\!\rangle F\big(V_{1}=V_{i}\wedge((V_{1}\neq V_{2})\rightarrow\mathcal{K}_{c}(V_{1}=V_{i}))\big)\] The formula states that for any vote choice, there exists no common strategy for the voter and the coercer such that the voter selects that vote and, given that the choices of the two voters are different (the reason for this condition is explained below), the coercer would know the value of the vote.3 Footnote 3: Note that we use \((V_{1}=V_{i})\) in the role of an atomic proposition which evaluates to true whenever \(V_{1}\) is indeed equal to \(V_{i}\). Condition \((V_{1}\neq V_{2})\) is treated analogously. 
It is obvious that in the cases where both voters have voted identically, even without revealing the receipt, the coercer will know the value of the vote just by looking at the bulletin board. This is similar to the case when in an election all the voters vote identically, in which case the privacy of their votes will be broken after publishing the tally (unless some sort of obfuscation is used (Sen et al., 2016)). We added the condition \((V_{1}\neq V_{2})\) to the above formula to account for this. Also, in the following we only focus on the cases where the two voters have voted differently. By looking at Table 1 we can see that the model satisfies the coercion resistance property as formulated in the above **ATLK** expression. This is because there is no row in the table that consists of only one indistinguishability set for the coercer which has only one member (the actual value of the vote). However the voter votes and selects the receipt, there is at least one possible indistinguishability set with more than one member, meaning that the coercer might not get to know the actual vote of the voter. ### Information-Theoretic Security in ATL We can alternatively define the coercion resistance property in the information-theoretic sense, namely that the coercer cannot _gain information_ on how the voter has voted, even if the voter cooperates with the coercer. Phrasing this differently, we want that no matter the course of actions of the voter and the coercer, the coercer always has maximum uncertainty about the actual value of the vote. We can express this property in **ATLH** as follows: \[\bigwedge_{V_{i}\in Votes}\neg\langle\!\langle v,c\rangle\!\rangle F\big(V_{1}=V_{i}\wedge((V_{1}\neq V_{2})\rightarrow\mathcal{H}_{c}^{<\log(|Votes|)}\{V_{A},V_{B}\})\big)\] The above formula states that, for any joint strategy of the coercer and the voter, the uncertainty of the coercer will always remain at the maximum. Looking at Table 1, we can see that the ThreeBallot protocol does not satisfy this property. This is because in each row there exists a possible indistinguishability set whose size is less than the number of possible votes. This example shows that, although ThreeBallot could be considered secure with respect to the epistemic notion of coercion resistance expressed in **ATLK**, it is not secure when we define the security requirement as an information-theoretic property and formalize it in **ATLH**. ## 8. Conclusion In this work, we introduce the logic **ATLH**, which extends alternating-time temporal logic **ATL** with quantitative modalities based on the Hartley measure of uncertainty. As the main technical result, we show that **ATLH** has the same expressive power and the same model checking complexity as **ATLK** (i.e., **ATL** with epistemic modalities), but is exponentially more succinct. The succinctness result, together with the model checking complexity, is of major significance. As we have seen in Section 4.3, both **ATLK** and **ATLH** have the same verification complexity with respect to the size of the model and _the length of the formula_. Theorem 6.5 promises that, for some properties, their verification in **ATLH** will be exponentially faster than in **ATLK**. Also, a more succinct language often results in better readability, which in turn helps the designers of a system make fewer mistakes in the specification of desired properties. Last but not least, many properties can be expressed in **ATLH** in a much more intuitive way than in **ATLK**. 
Understanding the information-theoretic intuition behind a corresponding **ATLK** formula can be a real challenge. We suggest the specification of security requirements as an important application area for our proposal. In particular, the framework can be used to expose the logical structure of security claims, for example, the difference between the epistemic and information-theoretic notions of privacy. We demonstrate this on a real-life voting scenario involving the ThreeBallot protocol, which has in the past been claimed both secure and insecure. Indeed, the protocol is secure with respect to an epistemic notion of privacy, but it may fail to satisfy the information-theoretic one. In the future, we plan to implement model checking for **ATLH** as an extension of the STV [42] or MCMAS [43] model checkers. ## Acknowledgments The work has been supported by NCBR Poland and FNR Luxembourg under the POLLUX/FNR-CORE projects STV (POLLUX-VII/1/2019 & C18/IS/12685695/IS/STV/Ryan) and SpaceVote (POLLUX-XI/14/SpaceVote/2023).
2309.15357
Perfect Coulomb drag and exciton transport in an excitonic insulator
Strongly coupled two-dimensional electron-hole bilayers can give rise to novel quantum Bosonic states: electrons and holes in electrically isolated layers can pair into interlayer excitons, which can form a Bose-Einstein condensate below a critical temperature at zero magnetic field. This state is predicted to feature perfect Coulomb drag, where a current in one layer must be accompanied by an equal but opposite current in the other, and counterflow superconductivity, where the excitons form a superfluid with zero viscosity. Electron-hole bilayers in the strong coupling limit with an excitonic insulator ground state have recently been achieved in semiconducting transition metal dichalcogenide heterostructures, but direct electrical transport measurements remain challenging. Here we use a novel optical spectroscopy to probe the electrical transport of correlated electron-hole fluids in MoSe2/hBN/WSe2 heterostructures. We observe perfect Coulomb drag in the excitonic insulator phase up to a temperature as high as ~15K. Strongly correlated electron and hole transport is also observed at unbalanced electron and hole densities, although the Coulomb drag is no longer perfect. Meanwhile, the counterflow resistance of interlayer excitons remains finite. These results indicate the formation of an exciton gas in the excitonic insulator which does not condense into a superfluid at low temperature. Our work also demonstrates that dynamic optical spectroscopy provides a powerful tool for probing novel exciton transport behavior and possible exciton superfluidity in correlated quantum electron-hole fluids.
Ruishi Qi, Andrew Y. Joe, Zuocheng Zhang, Jingxu Xie, Qixin Feng, Zheyu Lu, Ziyu Wang, Takashi Taniguchi, Kenji Watanabe, Sefaattin Tongay, Feng Wang
2023-09-27T01:52:10Z
http://arxiv.org/abs/2309.15357v1
# Perfect Coulomb drag and exciton transport in an excitonic insulator ###### Abstract Strongly coupled two-dimensional electron-hole bilayers can give rise to novel quantum Bosonic states: electrons and holes in electrically isolated layers can pair into interlayer excitons, which can form a Bose-Einstein condensate below a critical temperature at zero magnetic field. This state is predicted to feature perfect Coulomb drag, where a current in one layer must be accompanied by an equal but opposite current in the other, and counterflow superconductivity, where the excitons form a superfluid with zero viscosity. Electron-hole bilayers in the strong coupling limit with an excitonic insulator ground state have recently been achieved in semiconducting transition metal dichalcogenide heterostructures, but direct electrical transport measurements remain challenging. Here we use a novel optical spectroscopy to probe the electrical transport of correlated electron-hole fluids in MoSe\({}_{2}\)/hBN/WSe\({}_{2}\) heterostructures. We observe perfect Coulomb drag in the excitonic insulator phase up to a temperature as high as \(\sim\)15K. Strongly correlated electron and hole transport is also observed at unbalanced electron and hole densities, although the Coulomb drag is no longer perfect. Meanwhile, the counterflow resistance of interlayer excitons remains finite. These results indicate the formation of an exciton gas in the excitonic insulator which does not condense into a superfluid at low temperature. Our work also demonstrates that dynamic optical spectroscopy provides a powerful tool for probing novel exciton transport behavior and possible exciton superfluidity in correlated quantum electron-hole fluids. \({}^{1}\) Department of Physics, University of California, Berkeley, CA 94720, USA \({}^{2}\) Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA \({}^{3}\) School of Physics, Xi'an Jiaotong University, Xi'an 710049, China. \({}^{4}\) Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan \({}^{5}\) Research Center for Functional Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan \({}^{6}\) School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ 85287, USA \({}^{7}\) Kavli Energy NanoSciences Institute, University of California Berkeley and Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA. \(\dagger\) These authors contributed equally. \({}^{*}\) To whom correspondence should be addressed: andrew.joe@ucr.edu, fengwang76@berkeley.edu ## 1 Main An electron-hole bilayer - a two-dimensional electron gas (2DEG) and a two-dimensional hole gas (2DHG) coupled together by Coulomb interactions while remaining electrically isolated - provides a highly tunable platform to study strongly correlated electron-hole fluids. In the strong coupling regime where the interlayer distance is small compared to the intralayer particle spacing, the electrons and holes in adjacent layers pair into indirect excitons, and the system is expected to host novel quantum Bosonic states including correlated excitonic insulators [1, 2], exciton supersolids [3], high-temperature bilayer exciton Bose-Einstein condensates (BEC) [2, 4, 5, 6] and superfluids [7, 8, 9, 10]. 
Such bilayer exciton condensates are characterized by a gapped energy spectrum, spontaneous interlayer phase coherence and dissipationless exciton transport [4, 11]. The interlayer coherence is expected to manifest itself as counterflow superconductivity with perfect Coulomb drag in a counterflow transport measurement - a current in one layer must be accompanied by an equal but opposite current in the other [12, 13], and the longitudinal resistance vanishes. Evidence of exciton condensation and perfect Coulomb drag has been reported in quantum Hall bilayers in semiconductor double quantum wells [12, 14] and graphene systems [15, 16], but the formation of quasi-electrons and quasi-holes in the quantized Landau levels requires a strong external magnetic field, and the excitons only form at very low temperatures [17, 18]. Recently, the research focus has shifted towards searching for an exciton condensate in electron-hole bilayers in the absence of a magnetic field [1, 4, 8, 19]. Electron-hole bilayers in semiconducting transition metal dichalcogenide (TMD) heterostructures have attracted special interest due to the strong Coulomb interaction and large exciton binding energy (hundreds of meV) [20, 21]. Consequently, the strong coupling regime becomes accessible [19, 22]. This system provides a highly tunable platform to study intriguing transport behaviors of quantum exciton fluids. When the bandgap energy is electrically tuned below the exciton binding energy, strongly correlated excitonic insulator states have been experimentally achieved [1, 23]. Theoretical studies of such systems predict an exciton BEC that persists up to a very high Berezinskii-Kosterlitz-Thouless (BKT) transition temperature (a fraction of the exciton binding energy) [8, 11]. Below this temperature, the interlayer excitons are expected to form an exciton superfluid, in which the excitons flow with zero viscosity, similar to the dissipationless transport of Cooper pairs in conventional superconductors. Electrical transport measurements are therefore highly desired to observe these exotic quantum states. However, most semiconducting TMDs have poor electrical contact to common metals due to a large Schottky barrier, which makes electrical transport measurements challenging. Therefore, experimental measurement of Coulomb drag and exciton transport has remained elusive to date. Here we develop a novel optical technique to quantitatively measure exciton transport behavior without the need to pass any current through contacts. This allows Coulomb drag measurements and an accurate determination of the exciton flow resistance. For the first time, we show that a TMD electron-hole bilayer features perfect Coulomb drag at equal electron and hole densities, which becomes imperfect but remains very strong when additional charges are present. Surprisingly, our exciton transport measurements show the absence of an exciton superfluid for temperatures down to 2K, in contradiction to the theoretical predictions. ### Optical measurement of resistance and Coulomb drag Figure 1a schematically shows the device structure and the experimental setup. We choose MoSe\({}_{2}\) as the electron layer and WSe\({}_{2}\) as the hole layer for their type-II band alignment. They are separated by a thin hBN tunnelling barrier to ensure equilibrium electron-hole fluids with an ultralong exciton lifetime on the order of a second [23]. 
The heterostructure is encapsulated by dielectric hBN on both sides and gated by a few-layer graphene top gate (TG) and two back gates (BG1 and BG2). This dual-gated electron-hole bilayer device has two regions controllable by separate BGs, each easily tunable with electrical voltages. We begin by considering region 1. We keep the electron layer grounded (\(V_{e}=0\)) and apply voltages on the gates and the hole layer. The gate voltage \(V_{\text{G}}\equiv V_{\text{TG}}+V_{\text{BG1}}\) tunes the Fermi level and thus the net charge density (electron-hole imbalance). The hole-electron voltage difference \(V_{h}-V_{e}\) and the vertical electric field \(V_{\text{TG}}-V_{\text{BG1}}\) both tune the band alignment. Therefore, the effective bias voltage \(V_{\text{B}}\equiv(V_{h}-V_{e})+\frac{t_{m}}{t_{t}+t_{m}+t_{b}}(V_{\text{TG}}-V_{\text{BG1}})\), where \(t_{t},t_{m},t_{b}\) are the top, middle and bottom hBN thicknesses in region 1 respectively, controls the type-II band gap. Fig. 1b is an optical image of such a device (D1), whose detailed structure is given in Extended Data Fig. 1. While we define all the electrical voltages using region 1, we scale the d.c. voltage on BG2 appropriately such that region 1 and region 2 remain in the same gating and electric field conditions, and thus the electron and hole densities remain homogeneous (details in Methods). Unless otherwise specified, the data shown below are taken from D1 at a temperature \(T=2\)K. Similar data can be reproduced in device D2, as summarized in Extended Data Fig. 2. Using reflectance spectroscopy, we are able to determine the electron and hole densities as a function of the gate and bias voltages [23]. Figure 1c shows the charge doping phase diagram as a function of \(V_{G}\) and \(V_{B}\), where the red and green channels of the false color map encode the density of electrons (\(n_{e}\)) and holes (\(n_{h}\)) respectively. At low bias voltages, the system has a finite type-II band gap, so only one type of charge can enter the system at a time. When the bias voltage exceeds the gap energy of approximately 1.51eV, electrons and holes enter the system at the same time, forming correlated electron-hole fluids that consist of interlayer excitons and possibly extra charges [1, 23]. In particular, when the bias voltage reduces the single-particle band gap to be smaller than the exciton binding energy but still nonzero, an excitonic insulator phase emerges that consists of only interlayer excitons, while no extra unpaired charges are allowed to exist due to the finite single-particle gap [1, 23]. The charge compressibility map [23] in Fig. 1d reveals this charge-incompressible state at finite and equal electron and hole densities (triangle region). In this phase, perfect interlayer correlation is expected. The Coulomb drag is therefore expected to become perfect - any current flow in one layer necessarily needs to be accompanied by an equal but opposite current in the other layer. In standard electrical transport measurements, multiple contacts are made to each layer to pass currents and measure the resulting voltage drop. However, the contact resistance for TMD layers is usually orders of magnitude larger than the sheet resistance, and therefore such transport measurements are very challenging. To overcome this problem, we do not drive currents through contacts. 
Instead, we capacitively drive current between the two heterostructure regions [24] in our device by applying a small voltage modulation \(U=5\)mV\({}_{\text{rms}}\) at angular frequency \(\omega\) between BG1 and BG2, as illustrated in Fig. 1a. This a.c. voltage generates an oscillating potential that drives charges/excitons to flow back and forth, leading to an a.c. particle density change at \(\omega\). We then optically detect the density change. The optical absorption of the TMD layers is known to depend sensitively on the local charge density [25, 26]. For example, Figure 1e shows the density dependence of the device reflectivity spectrum near the MoSe\({}_{2}\) A exciton wavelength. With increasing electron density, the intralayer exciton peak loses its oscillator strength, and an additional absorption peak, commonly known as trions, appears at lower energy [25, 26, 27]. We focus a monochromatic laser probe at region 2 and use an avalanche photodiode (APD) to read out the reflected light intensity, whose a.c. component is proportional to the local electron density oscillation \(\Delta n_{e}\). Similarly, when the laser wavelength is tuned to the WSe\({}_{2}\) absorption peak, \(\Delta n_{h}\) can be measured. Since the density change is directly proportional to the current flow, our technique optically probes the electrical transport behavior.

Figure 1f shows the effective circuit of our measurement. When the gate modulation is slow, the modulation of the electron and hole densities is determined by the geometric and quantum capacitances in the coupled system. With increasing modulation frequency \(\omega\), the charges/excitons need to move faster to respond to the gate modulation, until a characteristic cutoff frequency beyond which the motion of the particles is limited by their mobility and the response becomes ineffective. This cutoff frequency is determined by the \(RC\) constant of the circuit, yielding the longitudinal resistance \(R\).

### Perfect Coulomb drag

Figures 2a-b show the real part of the hole (\(\Delta n_{h}\)) and electron (\(\Delta n_{e}\)) density oscillations averaged over the low-frequency voltage modulation regime (\(\omega=0.82-41\) kHz). The imaginary part is always essentially zero at low frequency (see Extended Data Fig. 3 for a complete data set). At low bias voltages (\(V_{B}\)\(<\)1.51V), the system forms a 2DEG or a 2DHG with only one type of charge present. The gate modulation directly drives the charges in the active layer. The holes (electrons) have a well-defined positive (negative) response, due to their opposite charge. When a high bias voltage closes the gap and both electrons and holes are present, the experiment turns into a Coulomb drag measurement, where the holes are directly capacitively coupled to the back gate modulation and drag the electrons with them. If there were no interlayer coupling and both layers acted as perfect metals, the hole layer would completely screen the gate modulation and the electron layer would have no response. However, the experimental data show that \(\Delta n_{e}\) remains very strong when both layers are doped, indicating strong interlayer interactions in the electron-hole bilayer. Such strong Coulomb drag, where the drag signal is comparable to the drive signal, is unusual, as most coupled bilayer systems have Coulomb drag on the order of a percent [28, 29].
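To make the cutoff-frequency argument above concrete: in the crudest approximation, each layer's density response follows a single-pole low-pass form \(\Delta n(\omega)=\Delta n_{0}/(1+\mathrm{i}\omega\tau)\) with \(\tau\sim RC\). The sketch below is our own illustration of this idea applied to synthetic data; it is not the analysis code of the paper, which uses the full circuit model described later.

```python
import numpy as np
from scipy.optimize import curve_fit

def lowpass_ri(omega, dn0, tau):
    """Real and imaginary parts of dn0 / (1 + i*omega*tau), stacked."""
    resp = dn0 / (1.0 + 1j * omega * tau)
    return np.concatenate([resp.real, resp.imag])

# Synthetic 'measurement' with tau = RC = 1 us, i.e. a cutoff near 1e6 rad/s.
omega = np.logspace(3, 8, 60)
data = 1.0 / (1.0 + 1j * omega * 1e-6)
data += 0.01 * (np.random.randn(60) + 1j * np.random.randn(60))  # noise

popt, _ = curve_fit(lowpass_ri, omega,
                    np.concatenate([data.real, data.imag]), p0=[0.5, 1e-5])
dn0_fit, tau_fit = popt
# Given an independently known capacitance C, the resistance is R = tau / C.
print(f"fitted tau = {tau_fit:.3g} s")
```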
We note that the sign of \(\Delta n_{e}\) becomes positive in this regime, in contrast to the negative response when there are only electrons in the system. This supports the picture of electrons moving together with holes. For the hole response, the sign remains the same regardless of the electron doping, but the magnitude changes for different doping conditions. Figure 2c shows the drag ratio, \(\eta=\Delta n_{e}/\Delta n_{h}\). Noticeably, the drag ratio features a very strong enhancement when the electron and hole densities are equal. In the excitonic insulator phase, we observe a significantly increased drag response and a decreased drive response. The drag ratio approaches 1 in this triangle region, which is consistent with the physical picture of an excitonic insulator phase that does not allow the existence of any unpaired charge. Thus, the motion of a hole must be accompanied by an electron.

We now focus on a horizontal linecut at constant bias \(V_{B}=1.52\)V (Fig. 3a). In this linecut, the constant bias voltage leads to a fixed \(n_{e}+n_{h}\approx 0.3\times 10^{12}\)cm\({}^{-2}\), and the gate voltage tunes the electron-hole imbalance \(n_{e}-n_{h}\). The drag signal peaks at \(n_{e}=n_{h}\) and decreases rapidly when additional charges are present. Meanwhile, the drive layer has a reduced response in the excitonic insulator region, outside which the drive signal varies smoothly. The frequency-dependent density changes of the hole and electron layers are displayed in Fig. 3b-c. The responses from both layers remain constant for low frequencies below 100 kHz, and start to decay once the modulation frequency exceeds the inverse of the characteristic \(RC\) time constant. The gate dependence of the signal shows an abrupt change across the net charge neutrality point, indicating strong excitonic effects in the electron-hole bilayer. Figs. 3d-f display frequency sweeps at three typical doping conditions, with \(n_{e}<n_{h}\), \(n_{e}=n_{h}\), and \(n_{e}>n_{h}\) respectively. We observe qualitatively distinct behavior among them. When \(n_{e}=n_{h}\), the electron and hole responses become identical, with the same amplitude, same phase (coded by the color of the scatter points), and same cutoff frequency. This demonstrates perfect Coulomb drag in dynamic transport, where the drive current and drag current are identical at all frequencies. When either type of additional charge is present, the active and passive layer responses are no longer identical.

### Charge and exciton transport

To quantitatively understand the transport behavior of the correlated electron-hole fluids, we use the effective circuit shown in Fig. 1f to model the charge and exciton transport. Let \(\phi_{ij}\) and \(\mu_{ij}\) (\(i\in\{e,h\},j=1,2\)) denote the electric potential and chemical potential of layer \(i\) in region \(j\). In the case of strong interlayer coupling, the currents in the electron and hole layers (\(I_{e},I_{h}\)) and the electrochemical potential drops (\(\Delta\phi_{i}\equiv\phi_{i1}-\phi_{i2}\), \(\Delta\mu_{i}\equiv\mu_{i1}-\mu_{i2}\)) are related by a \(2\times 2\) conductance matrix \(G\):

\[\begin{bmatrix}I_{e}\\ I_{h}\end{bmatrix}=\begin{bmatrix}G_{11}&G_{12}\\ G_{21}&G_{22}\end{bmatrix}\cdot\begin{bmatrix}\Delta\phi_{e}+\Delta\mu_{e}\\ \Delta\phi_{h}+\Delta\mu_{h}\end{bmatrix} \tag{1}\]

The conductance matrix can potentially take contributions from unpaired electrons, unpaired holes, and interlayer excitons.
Ignoring the weak frictional drag between unpaired charges, unpaired electrons in the top layer give a contribution \(\begin{bmatrix}1/R_{e}&0\\ 0&0\end{bmatrix}\) to the conductance matrix, and similarly unpaired holes give \(\begin{bmatrix}0&0\\ 0&1/R_{h}\end{bmatrix}\), where \(R_{e}\) and \(R_{h}\) are the effective longitudinal resistances for the electrons and the holes. These two terms both take a diagonal form and do not contribute to the drag effect. The interlayer exciton transport, however, gives an additional term of the form \(\frac{1}{R_{x}}\begin{bmatrix}1&-1\\ -1&1\end{bmatrix}\). This is because interlayer exciton motion is driven by the difference in the potential drops \(\Delta\phi_{h}+\Delta\mu_{h}-\Delta\phi_{e}-\Delta\mu_{e}\), and leads to an opposite current in the two layers. The electric potentials \(\phi_{ij}\) and currents \(\{I_{e},I_{h}\}\) are related to the gate voltage modulation by basic circuit laws in this capacitor system. The chemical potential drops are related to the charge density changes by the quantum capacitance matrix \(\mathcal{C}_{Q}=\begin{bmatrix}\frac{\partial\mu_{e}}{\partial n_{e}}&\frac{ \partial\mu_{e}}{\partial n_{h}}\\ \frac{\partial\mu_{h}}{\partial n_{e}}&\frac{\partial\mu_{h}}{\partial n_{h} }\end{bmatrix}\) (details in Methods). Putting these equations together, the effective circuit model is fitted to the experimental \(\Delta n_{e},\Delta n_{h}\) to extract the three resistances \(R_{e}\), \(R_{h}\) and \(R_{x}\). The fit agrees well with the experiment for both the amplitude and the phase of \(\Delta n_{e}\) and \(\Delta n_{h}\) (solid lines in Fig. 3d-f).

Fig. 3g plots the fitted resistances as a function of \(V_{G}\). The exciton resistance is on the order of hundreds of kiloohms and has only a weak gate dependence across charge neutrality. The unpaired electron and hole resistances change dramatically across net charge neutrality, and they do not become conductive at the same time. When \(n_{e}<n_{h}\), \(R_{e}\) is very large and beyond our measurement range, but when \(n_{e}\) exceeds \(n_{h}\) it quickly drops below \(1\,\mathrm{G\Omega}\) and decreases with increasing electron density. Similar behavior is observed for the hole resistance. This observation indicates that in the low-density regime the minority carrier cannot exist in the unpaired form, maximizing the number of interlayer excitons. At net charge neutrality, all the electrons and holes pair into bound states of indirect excitons, and the motion of both free charges becomes frozen, leading to perfect Coulomb drag where the current flow can only come from the exciton motion. The absence of an unpaired electron/hole contribution to the conductance matrix results in the divergence of the drag resistance, as the exciton conductance matrix is not invertible.

Next, we examine the exciton phase diagram at net charge neutrality \(n_{e}=n_{h}=n\). Fig. 4a-b show the hole and electron density changes as a function of pair density \(n\). At low densities, we observe the same response from the two layers, and the drag ratio (Fig. 4c, right axis) is close to 1 for \(n\lesssim 0.3\times 10^{12}\)cm\({}^{-2}\). With increased density, the drag ratio gradually decreases due to the reduced single-particle gap. It remains considerable even above the Mott density \(n_{M}\approx 0.8\times 10^{12}\)cm\({}^{-2}\) [1, 22, 23]. The drag ratio shown in Fig. 4d also decreases with elevated temperature.
At low densities, the Coulomb drag remains almost perfect (\(>\)85%) up to \(\sim\)15K, which is orders of magnitude higher in temperature than semiconductor quantum wells and graphene-based systems at high magnetic field [12, 15]. The drag ratio decreases slowly with higher temperature, indicating the coexistence of excitons, unpaired electrons, and unpaired holes at finite temperatures due to the finite thermal energy. After complete thermal melting of the interlayer excitons at \(\sim\)70K [23], the drag signal at 80K is significantly reduced, to only \(\sim\)20%. The drag signal does not completely disappear after the quantum dissociation at high exciton density or the thermal melting at high temperature, suggesting a considerable drag effect in the electron-hole plasma phase due to the strong Coulomb attraction between electrons and holes.

## Discussion and outlook

Fitting the electron and hole responses to the effective circuit model with the exciton transport term yields the exciton resistance \(R_{x}\) as a function of density \(n\), as shown in Fig. 4e. With increasing density, the exciton resistance decreases slightly faster than a \(1/n\) scaling (dotted lines), suggesting only a small increase in the exciton mobility at higher doping. A superfluid transition is not observed, which can also be seen from the high-frequency decay in Figs. 4a-b. We performed measurements with different driving voltages and hence different exciton current flows (Extended Data Fig. 4) and conclude that the exciton transport behavior is linear.

Extended Data Fig. 5 compares the resistance of the exciton gas, the 2DHG in the WSe\({}_{2}\) layer, and the 2DEG in the MoSe\({}_{2}\) layer as a function of carrier density. At similar carrier density, the excitons here are always several times more resistive than the 2DHG in the WSe\({}_{2}\) layer. The resistance of the excitons is overall closer to the resistance of the 2DEG in the MoSe\({}_{2}\) layer, but they show a different density dependence. The resistance of the 2DEG increases significantly faster than \(1/n_{e}\) at low density, potentially due to Wigner crystallization [30] or unscreened charge defects. At higher density the excitons become more resistive than both the unpaired electrons and the unpaired holes. The different scaling behavior suggests a different scattering mechanism for interlayer excitons compared with unpaired charges. The temperature dependence of the exciton resistance (Fig. 4f) reveals that the resistance decreases with increasing temperature. Contrary to previous theoretical predictions of superfluidity over a broad temperature range in this system [6, 8, 11], we do not observe any signature of an exciton superfluid down to our base temperature \(T=2\)K. This could possibly be due to disorder in the samples, which might destroy the phase coherence between the excitons.

In conclusion, we have demonstrated a powerful technique that can optically measure the Coulomb drag effect. We observe perfect Coulomb drag in the excitonic insulator phase of the electron-hole bilayer system without any external magnetic field. The transport behavior of a stable interlayer exciton fluid is measured for the first time. Our results establish TMD-based electron-hole bilayers as a promising platform for novel exciton-based electronic devices. Despite the negative results in the search for exciton superfluidity in TMD bilayers, our work paves the way for further study of counterflow superconductivity and exciton condensates as the sample quality improves further.
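Before moving on to the Methods, a quick numerical check of the statement in the transport analysis above that an exciton-only conductance matrix is not invertible, and hence forces the drag resistance to diverge. This is our sketch; the value of \(R_x\) is an arbitrary placeholder.

```python
import numpy as np

R_x = 3e5  # placeholder exciton resistance (ohms)
G_exciton = np.array([[1.0, -1.0], [-1.0, 1.0]]) / R_x

print(np.linalg.det(G_exciton))  # ~0: singular up to rounding, no inverse
# Any current this matrix produces satisfies I_e = -I_h (counterflow only),
# so at charge neutrality the drive and drag currents must be equal and
# opposite: perfect Coulomb drag, with a divergent drag resistance.
currents = G_exciton @ np.array([0.2, -0.1])  # arbitrary potential drops
print(currents[0] + currents[1])              # ~0 up to floating point
```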
## Methods

**Device fabrication.** We use a dry-transfer method based on polyethylene terephthalate glycol (PETG) stamps to fabricate the heterostructures. Monolayer MoSe\({}_{2}\), monolayer WSe\({}_{2}\), few-layer graphene and hBN flakes are mechanically exfoliated from bulk crystals onto Si substrates with a 90-nm-thick SiO\({}_{2}\) layer. We use \(\sim\)20 nm hBN as the gate dielectric and 2-3 nm thin hBN as the interlayer spacer. A 0.5 mm thick clear PETG stamp is employed to sequentially pick up the flakes at 65-75 \({}^{\circ}\)C. The whole stack is then released onto a high resistivity Si substrate with a 90 nm SiO\({}_{2}\) layer at 95-100 \({}^{\circ}\)C, followed by dissolving the PETG in chloroform at room temperature for one day. Electrodes (50 nm Au with a 5 nm Cr adhesion layer) are defined using photolithography (Durham Magneto Optics, MicroWriter) and electron beam evaporation (Angstrom Engineering). Extended Data Fig. 1 shows the detailed device structures.

We use two graphite back gates that are vertically stacked with a thin hBN dielectric in between to ensure homogeneous d.c. doping without any ungated region between the two back gates. All the voltages are first defined in region 1. The d.c. voltage on BG2 depends on BG1 such that the densities in the two regions remain uniform. The scaling for device D1 is \(V_{\text{BG2}}-V_{h}=1.44(V_{\text{BG1}}-V_{h})+0.65\)V, where the slope and the intercept are determined experimentally to achieve the best matching of the doping boundaries for both the electron and the hole layers. Extended Data Figs. 1e-f give a comparison of the doping phase diagrams in the two regions after this scaling, which match very well down to the mV level.

To ensure equilibrium electron-hole fluids, we improve the contacts to the TMD layers with specially designed contact regions. For device D1, we follow ref. [1] and insert a thick hBN layer in the contact region (region 0, see Extended Data Fig. 1a). With the larger interlayer spacing in this region, the vertical electric field \(V_{\text{TG}}-V_{\text{BG}}\) easily closes the type-II band gap and makes the contact region heavily doped. For device D2, we misalign the top and back gates with the heterostructure such that the MoSe\({}_{2}\) contact region is only covered by the top gate while the WSe\({}_{2}\) contact region is only gated by the back gate (Extended Data Fig. 1c). This also makes the contact region heavily doped. In both designs, the contact resistance due to the large Schottky barriers is reduced, eliminating any observable doping hysteresis in the voltage ranges of interest. However, the contact resistance is still large, so that for the frequency range relevant in this paper (\(\omega=\)10\({}^{2}\)-10\({}^{9}\)Hz), the electron and hole contacts are both frozen and the TMD layers are effectively floating. The charge and exciton transport is therefore only between heterostructure region 1 and region 2, not from the electrode to the heterostructure.

**Optical measurements.** The optical measurements are performed in an optical cryostat (Quantum Design, OptiCool) with a temperature down to 2 K (nominal). The reflection spectroscopy is performed with a supercontinuum laser (Fianium Femtopower 1060 Supercontinuum Laser) as the light source. In optical transport measurements, the gate voltage modulation is applied by an arbitrary waveform generator (Siglent SDG6022X).
A small voltage modulation with \(U_{\mathrm{rms}}=5\)mV is used to ensure minimal perturbation to the system, unless otherwise specified. Keithley 2400 or 2450 source meters are used for applying the other gate and bias voltages and monitoring the leakage current. The monochromatic laser probe is either the supercontinuum laser with the wavelength selected by a reflective grating and an iris, or a diode laser with its wavelength finely tuned by a thermoelectric cooler. The laser is focused on the sample by a 20\(\times\) Mitutoyo objective with a \(\sim\)1.5\(\upmu\)m beam size. We choose a low incident laser power (\(<\)20 nW) to minimize photodoping effects. The reflected light is collected by an APD (Thorlabs APD 410A) and analyzed using a lock-in amplifier (Stanford Research SR865A). For angular frequencies below 25MHz, the APD voltage output is directly analyzed with the lock-in amplifier, which is locked to the function generator output frequency. For higher frequencies (\(\omega\)\(>\)25MHz), we use another scheme to convert the high-frequency reflectivity oscillation into a low-frequency signal that can be easily collected by the APD and analyzed by the lock-in amplifier. The voltage input of the laser diode is modulated at frequency \(\omega^{\prime}=\omega+\delta\omega\). This leads to an incident laser intensity oscillating at a frequency slightly higher than the gate voltage modulation frequency. The multiplication of the incident laser power oscillating at \(\omega+\delta\omega\) and the device reflectivity oscillating at \(\omega\) generates a low-frequency component at \(\delta\omega\) in the reflected light intensity. The lock-in amplifier is then locked to the frequency difference \(\delta\omega=6.5\)kHz. The lock-in amplifier output gives the a.c. part of the APD voltage. The APD d.c. output voltage is recorded by a data acquisition card (NI USB-6212) analogue input. The ratio between the lock-in output and the APD d.c. voltage gives the relative reflectivity change \(\Delta r/r\), where \(r\) denotes the reflectivity at the laser wavelength and is a function of the charge density \(n\). The density change \(\Delta n\) can then be determined from

\[\frac{\Delta r}{r}=\Delta n\times\frac{\mathrm{d}r(n)}{r\mathrm{d}n}\]

The sensitivity \(\frac{\mathrm{d}r(n)}{r\mathrm{d}n}\) is directly derived from the gate dependence of the APD d.c. output.

**Effective circuit model.** The conductance matrix \(G\) is related to the resistances of the free electrons, free holes, and excitons by

\[G=\frac{1}{R_{e}}\begin{bmatrix}1&0\\ 0&0\end{bmatrix}+\frac{1}{R_{h}}\begin{bmatrix}0&0\\ 0&1\end{bmatrix}+\frac{1}{R_{x}}\begin{bmatrix}1&-1\\ -1&1\end{bmatrix}\]

The current \(\begin{bmatrix}I_{e}\\ I_{h}\end{bmatrix}\) is driven by the electrochemical potential difference between the two regions, \(\begin{bmatrix}\Delta\phi_{e}+\Delta\mu_{e}\\ \Delta\phi_{h}+\Delta\mu_{h}\end{bmatrix}\) (Equation (1) in the main text).
First, the electric potentials \(\phi\) and the applied modulation voltage are related by the Kirchhoff circuit laws

\[\begin{array}{l}\mathrm{i}\omega(C_{t1}+C_{m1})\phi_{e1}-\mathrm{i}\omega C_{m1}\phi_{h1}+I_{e}=0\\ \mathrm{i}\omega(C_{t2}+C_{m2})\phi_{e2}-\mathrm{i}\omega C_{m2}\phi_{h2}-I_{e}=0\\ \mathrm{i}\omega(C_{b1}+C_{m1})\phi_{h1}-\mathrm{i}\omega C_{m1}\phi_{e1}+I_{h}=0\\ \mathrm{i}\omega(C_{b2}+C_{m2})\phi_{h2}-\mathrm{i}\omega C_{m2}\phi_{e2}-I_{h}=\mathrm{i}\omega C_{m2}U\end{array}\]

Here \(C_{ij}\) is the geometric capacitance at the top, middle, or bottom of each heterostructure region, determined by the heterostructure areas \(A_{1},A_{2}\) and the hBN thicknesses \(t_{t},t_{m},t_{b}\). The subscript \(i\in\{t,m,b\}\) denotes top, middle, or bottom and \(j\in\{1,2\}\) denotes region 1 or 2. The dielectric constant of hBN is 4.2. Second, the density changes in region 1 and region 2 are \(\begin{bmatrix}I_{e}/\mathrm{i}\omega eA_{1}\\ -I_{h}/\mathrm{i}\omega eA_{1}\end{bmatrix}\) and \(\begin{bmatrix}-I_{e}/\mathrm{i}\omega eA_{2}\\ I_{h}/\mathrm{i}\omega eA_{2}\end{bmatrix}\) respectively. Here \(e\) is the elementary charge. This leads to a chemical potential difference between them of

\[\begin{bmatrix}\Delta\mu_{e}\\ \Delta\mu_{h}\end{bmatrix}=\begin{bmatrix}\frac{\partial\mu_{e}}{\partial n_{e}}&\frac{\partial\mu_{e}}{\partial n_{h}}\\ \frac{\partial\mu_{h}}{\partial n_{e}}&\frac{\partial\mu_{h}}{\partial n_{h}}\end{bmatrix}\begin{bmatrix}I_{e}/\mathrm{i}\omega e(A_{1}+A_{2})\\ -I_{h}/\mathrm{i}\omega e(A_{1}+A_{2})\end{bmatrix}\]

For a given set of conductance matrix and quantum capacitance matrix, the equations above determine \(\phi_{ij}\), which then gives the charge density changes \((\Delta n_{e},\Delta n_{h})\) in the electron and hole layers in region 2 by

\[\begin{array}{l}-eA_{2}\Delta n_{e}=C_{t2}\phi_{e2}+C_{m2}(\phi_{e2}-\phi_{h2})\\ eA_{2}\Delta n_{h}=C_{b2}(\phi_{h2}-U)+C_{m2}(\phi_{h2}-\phi_{e2})\end{array}\]

In summary, the effective circuit model above determines \((\Delta n_{e},\Delta n_{h})\) from a given conductance matrix and quantum capacitance matrix. By fitting the experimental \((\Delta n_{e},\Delta n_{h})\) to the model, we can extract the electron, hole, and exciton resistances \(R_{e},R_{h},R_{x}\).
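As an illustration of how the model above can be assembled numerically, the following sketch solves the Kirchhoff equations together with Equation (1) for \((\Delta n_{e},\Delta n_{h})\) at a given frequency. All parameter values (areas, thicknesses, quantum capacitance entries, resistances) are placeholders of ours, not the fitted device values.

```python
import numpy as np

e = 1.602e-19                          # elementary charge (C)
eps0, eps_hbn = 8.854e-12, 4.2         # vacuum permittivity, hBN constant
A1 = A2 = 25e-12                       # region areas (m^2), placeholder
t_t, t_m, t_b = 20e-9, 2.5e-9, 20e-9   # hBN thicknesses (m), placeholder
Ct1 = Ct2 = eps0 * eps_hbn * A1 / t_t  # geometric capacitances
Cm1 = Cm2 = eps0 * eps_hbn * A1 / t_m
Cb1 = Cb2 = eps0 * eps_hbn * A1 / t_b
U = 5e-3                               # gate modulation amplitude (V)

def density_response(omega, R_e, R_h, R_x, C_Q):
    """Return (dn_e, dn_h) in region 2 for given resistances and a 2x2
    quantum capacitance matrix C_Q = [[dmu_e/dn_e, dmu_e/dn_h], ...]."""
    iw = 1j * omega
    G = (np.array([[1.0, 0.0], [0.0, 0.0]]) / R_e
         + np.array([[0.0, 0.0], [0.0, 1.0]]) / R_h
         + np.array([[1.0, -1.0], [-1.0, 1.0]]) / R_x)
    # Chemical potential drops: dmu = C_Q @ [I_e, -I_h] / (iw * e * (A1+A2))
    M = C_Q @ np.diag([1.0, -1.0]) / (iw * e * (A1 + A2))
    # Unknowns x = [phi_e1, phi_e2, phi_h1, phi_h2, I_e, I_h]
    K = np.zeros((6, 6), dtype=complex)
    b = np.zeros(6, dtype=complex)
    K[0, [0, 2, 4]] = [iw * (Ct1 + Cm1), -iw * Cm1, 1.0]   # e layer, region 1
    K[1, [1, 3, 4]] = [iw * (Ct2 + Cm2), -iw * Cm2, -1.0]  # e layer, region 2
    K[2, [0, 2, 5]] = [-iw * Cm1, iw * (Cb1 + Cm1), 1.0]   # h layer, region 1
    K[3, [1, 3, 5]] = [-iw * Cm2, iw * (Cb2 + Cm2), -1.0]  # h layer, region 2
    b[3] = iw * Cm2 * U
    # Equation (1): I = G @ (dphi + dmu), with dphi_i = phi_i1 - phi_i2,
    # rearranged as (1 - G @ M) I - G @ dphi = 0.
    GM = G @ M
    for k in range(2):
        K[4 + k, 0], K[4 + k, 1] = -G[k, 0], G[k, 0]
        K[4 + k, 2], K[4 + k, 3] = -G[k, 1], G[k, 1]
        K[4 + k, 4] = float(k == 0) - GM[k, 0]
        K[4 + k, 5] = float(k == 1) - GM[k, 1]
    phi_e1, phi_e2, phi_h1, phi_h2, I_e, I_h = np.linalg.solve(K, b)
    dn_e = -(Ct2 * phi_e2 + Cm2 * (phi_e2 - phi_h2)) / (e * A2)
    dn_h = (Cb2 * (phi_h2 - U) + Cm2 * (phi_h2 - phi_e2)) / (e * A2)
    return dn_e, dn_h

# Placeholder quantum capacitance entries (J*m^2) and resistances (ohm):
C_Q = np.array([[8e-38, 0.0], [0.0, 8e-38]])
print(density_response(2 * np.pi * 1e4, 1e9, 1e6, 3e5, C_Q))
```

Sweeping `omega` in such a model and fitting it to the measured responses is, schematically, how the three resistances can be extracted from the data.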
2309.12438
$\texttt{nectarchain}$, the scientific software for the Cherenkov Telescope Array -- NectarCAM
The NectarCAM is a camera that will be mounted on the Medium-Sized Telescopes of the Cherenkov Telescope Array (CTA) observatory. Along with the hardware integration of the camera, the scientific software, $\texttt{nectarchain}$, is being developed. The software is responsible for transforming the raw data from the camera into analysis-ready calibrated data. In this contribution, we present the structure of the software, which consists of two modules: the calibration pipeline and the data quality check pipeline. The calibration pipeline reduces the data, performs flat fielding, and determines the gain for the analysis. The data quality monitoring pipeline is used to select the data that meets the necessary standards for analysis. Additionally, we discuss the format of the downstream data and the integration of the $\texttt{nectarchain}$ modules in the general software framework of CTA. We also present the necessary tests for validating each part of the code. We conclude by mentioning the prospects for the future of the software.
Guillaume Grolleron, Halim Ashkar, François Brun, Heide Costantini, Denis Dumora, Pierre Jean, Daniel Kerszberg, Jean-Philippe Lenain, Vincent Marandon, Sonal Ramesh Patel, Luigi Tibaldo
2023-09-21T19:14:23Z
http://arxiv.org/abs/2309.12438v1
# nectarchain, the scientific software for the Cherenkov Telescope Array - NectarCAM

###### Abstract:

The NectarCAM is a camera that will be mounted on the Medium-Sized Telescopes of the Cherenkov Telescope Array (CTA) observatory. Along with the hardware integration of the camera, the scientific software, nectarchain, is being developed. The software is responsible for transforming the raw data from the camera into analysis-ready calibrated data. In this contribution, we present the structure of the software, which consists of two modules: the calibration pipeline and the data quality check pipeline. The calibration pipeline reduces the data, performs flat fielding, and determines the gain for the analysis. The data quality monitoring pipeline is used to select the data that meets the necessary standards for analysis. Additionally, we discuss the format of the downstream data and the integration of the nectarchain modules in the general software framework of CTA. We also present the necessary tests for validating each part of the code. We conclude by mentioning the prospects for the future of the software.

## 1 Introduction

The Cherenkov Telescope Array (CTA) will be the next generation ground-based imaging atmospheric Cherenkov telescope (IACT) observatory. In comparison to the current IACT arrays, it will improve the sensitivity by a factor of five to ten depending on the energy, and it will therefore bring a new view of the Universe. CTA will be composed of three types of telescopes: the Small-Sized Telescopes (SST), the Medium-Sized Telescopes (MST) and the Large-Sized Telescopes (LST). They will be sensitive to the lower, median and higher parts of the CTA energy range respectively, which extends from 20 GeV to beyond 100 TeV. NectarCAM [1] will be the camera mounted on the MSTs located in La Palma, the northern site of the Cherenkov Telescope Array Observatory (CTAO). This camera has an 8 degree field of view and consists of 265 modules, each equipped with 7 photomultiplier tubes (PMTs) able to detect the Cherenkov light produced by very high energy (VHE) photons entering the atmosphere. Each module is equipped with a trigger and readout electronic system, which operates with GHz sampling on 60 ns windows. The camera has two gain channels: the high gain to measure signals at the single photo-electron (p.e.) level, and the low gain which aims at reconstructing signals up to 2000 p.e. A full camera prototype is at the CEA Paris-Saclay test bench, where some tests have been performed (timing, temperature, etc.) and where flat-field (FF), pedestal (Ped) and gain calibration data acquisitions are taken to calibrate the camera and to develop the software which will be used on site to generate the calibrated data (R1) from the raw data produced by the camera (R0). In these proceedings, we focus on the software pipeline nectarchain1. In section 2, the main goal of the calibration pipeline is presented. In section 3, the nectarchain workflow is presented and the current development status is explained. The interface with ctapipe is presented in section 4, Monte Carlo simulations are briefly presented in section 5, and we conclude in section 6.
Footnote 1: [https://github.com/cta-observatory/nectarchain](https://github.com/cta-observatory/nectarchain)

## 2 Main goal: R0 to R1 calibration

The main goal is to convert the signal in analog-to-digital conversion (ADC) counts into a number of photo-electrons \(N_{\text{p.e.}}\), defined by:

\[N_{\text{p.e.}}=\frac{ADC-Ped}{gain}\times FF\]

The pedestal has to be estimated so that it can be subtracted from the signal in ADC counts. Then, the gain has to be calibrated to perform the ADC to p.e. conversion, as well as the FF coefficients, which are used to balance the inhomogeneities of the camera response. Ultimately, the on-site raw data streamed from the CTA camera servers to the CTA Array Control and Data Acquisition system (ACADA), dubbed R1 data in the CTA data model, will include pre-calibrated event data. The pre-calibration requires pedestal subtraction, gain linearisation (conversion from ADC to photo-electron number), gain equalisation among the camera pixels, and the selection of the most appropriate of the two gain channels. The current version of the camera server software does not perform the pre-calibration required for R1 data. We implemented in ctapipe_io_nectarcam (cf. section 4) a prototype of the pre-calibration software and methods that can read calibration coefficients produced by nectarchain (cf. section 3) and apply them to the raw data to fill R1 containers. This makes it possible to ingest the data from the camera prototype in the dark room into ctapipe, and thus test the entire analysis chain.

## 3 nectarchain modules

The entire workflow is described in Figure 1. In this part the calibration workflow is presented, and the emphasis is put on the main parts already implemented.

Figure 1: Description of the NectarCAM calibration workflow.

### Pedestal estimation

An accurate pedestal estimate is required to correctly evaluate the charges produced by Cherenkov photons from electromagnetic showers. The pedestal is composed of a deliberately set non-zero offset with electronic noise and, during observations, background light from the night sky. The electronic component can be measured when the camera is in dark conditions (e.g. when the shutter of the camera is closed). For NectarCAM, the average value of the pedestal baseline is set to a value of about 250 ADC counts. However, it may slightly change with time (warm-up of the electronics) and with temperature (see Figure 2). Figure 2 also shows that the pedestal waveform is affected by fluctuations in the 60 ns window due to slight offsets between the memory cells of the NECTAr chip [2] (a part of the NectarCAM front-end board which performs the ADC conversion). During an observation, pedestals are evaluated with periodic pedestal triggers interleaved with regular triggers. A pedestal waveform template is obtained for each pixel and for each channel by averaging waveforms that are not affected by pulses due to noise photons. This template is then used for the analysis of that observation.

### Flat-field and timing

Additional information needed during the calibration of the data is the relative efficiency between pixels and the relative timing between pixels. Both are measured with a device called the flat-fielding unit, which delivers short pulses of uniform blue light across the camera. The relative inhomogeneity is obtained by computing the ratio of a given pixel's measured intensity to the average across the camera. The relative timing is obtained by computing the difference between a given pixel's average waveform maximum time and the average across the camera.
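To make the calibration formula and the pedestal template concrete, here is a minimal Python sketch. It illustrates the formulas above and is not the actual nectarchain implementation; the array names, shapes, and all numerical values are our placeholders.

```python
import numpy as np

def pedestal_template(ped_waveforms):
    """Average interleaved pedestal-trigger waveforms (events x samples)
    into a per-sample pedestal template, as described in section 3.1."""
    return ped_waveforms.mean(axis=0)

def adc_to_pe(charge_adc, ped_adc, gain_adc_per_pe, ff_coeff):
    """R0 -> R1 conversion N_pe = (ADC - Ped) / gain * FF (per pixel)."""
    return (charge_adc - ped_adc) / gain_adc_per_pe * ff_coeff

# Toy example for a 4-pixel camera (all numbers are placeholders):
rng = np.random.default_rng(0)
ped_wf = rng.normal(250.0, 3.0, size=(500, 60))   # ~250 ADC count baseline
template = pedestal_template(ped_wf)

charge = np.array([4580.0, 4950.0, 4400.0, 5100.0])  # integrated ADC counts
ped = np.full(4, template.mean() * 16)   # pedestal over a 16-sample window
gain = np.array([58.0, 61.0, 57.5, 60.0])            # ADC counts per p.e.
ff = np.array([1.02, 0.98, 1.01, 0.99])              # flat-field coefficients
print(adc_to_pe(charge, ped, gain, ff))              # reconstructed N_pe
```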
Figure 2: Example of pedestal waveforms of a pixel of NectarCAM extracted at 5 different periods of an observation. The waveform labelled "Average" corresponds to the template.

### ADC to photo-electron conversion

#### 3.3.1 Description of the method

The single photo-electron (SPE) calibration method is based on low-illumination calibration data acquisitions, where an illuminating device is set to give about 1 p.e. per pixel. After the accumulation of a reasonable number of events, the charge for each event and each pixel is computed by integrating the ADC signal over a fixed-width window centered on the main pulse. Several charge extraction methods can be used, varying for instance the window width, with a global window for the whole camera or a window adapted to each pixel. Charge histograms show two main peaks: one created by the pedestal signal, and another one created by the SPE signal. The separation between the two peaks makes it possible to measure the gain, which is the charge in ADC counts per p.e., with the two-Gaussian model presented in [3]. However, as the illumination is low, this method can only be used to reconstruct the gain in the high gain channel. Derivation of the gain in the low gain channel will then use the High-Low method described in section 3.4.

#### 3.3.2 Single photo-electron calibration devices

Two different types of SPE calibration data acquisitions can be used. The first one is simply a FF acquisition where the flasher is set at low intensity; in this case all pixels are illuminated simultaneously. The gain computed with this method for the entire camera is presented in Figure 3. However, this method can only be performed in the darkroom. On site, the SPE measurement will not be possible with the shutter open (NSB contamination/light pollution), and a special device embedded in the camera was created to be able to make SPE measurements with the shutter closed. This alternative method does not require simultaneous illumination of the entire camera and is described below. Analysis of a few of these data acquisitions will be presented in another contribution in these proceedings [4], as well as the comparison between results from white target runs, FF data acquisitions at low illumination (cf. Figure 3), and another method presented in [4] to cross-check all these procedures.

Figure 3: Gain computed with the SPE spectrum fit method applied to a FF data acquisition at low illumination.

### High-Low ratio

The measurement of the low gain channel amplification cannot be achieved by direct measurement using e.g. the SPE device, as it is expected to be a factor 15 lower in amplification compared to the high gain channel. However, by using the regime in which both gain channels are linear (\(\sim\)30-200 p.e.), we are able to measure the ratio of amplification between the high and low gain channels using events that fall in this range. In NectarCAM, this measurement is done using events from the flat-fielding device set to provide an intensity of \(\sim\)100 p.e. The low-gain channel gain value is then the multiplication of the previously calculated high-gain amplification value by this ratio.

### Data quality monitoring

The data quality monitoring (DQM) pipeline is a semi-automatic pipeline developed for the monitoring of the data taken by the camera, presented in Fig. 4. It runs on data acquisitions independently of the calibration module and is still in development. It monitors the data for the low and high gains and for different types of triggers, such as pedestal and physical triggers.
The main script reads the NectarCAM events and defines processor classes that are used to derive the quantities to be monitored. At this stage, the processor contains the five classes presented below.

**Mean waveforms:** The mean waveform class computes the average waveform for all channels and the overall camera average. The waveforms of different trigger types can be compared. This allows for the verification of the quality of the camera sampling.

**Mean display:** This class averages the signal seen by each channel over all events in a given data acquisition period. It is used to flag defective pixels, abnormal pedestals and anomalous behavior in general.

**Charge integration:** In this class, the charge is integrated around the peak of the signal for each event. An additional option to subtract pedestals, computed from the signal itself, is available. The class computes the charge power spectrum of the camera throughout the entire data acquisition period and computes statistical quantities such as the mean, the median, the RMS and the standard deviation of the charge over the whole period. This module can be used to monitor the gain of the camera (the computation is described in section 3.3.2).

**Camera monitoring:** This class monitors the temperature of the modules of the camera. It uses the temperature measurements from two sensors on the modules, saved daily in the log files, and matches them with the times of the data acquisition. It also computes the average module temperature during these times.

**Trigger statistics:** This class studies the camera trigger rates as a function of trigger ID and time. The triggers are flagged by their nature: pedestal, physical or other.

Figure 4: DQM semi-automatic pipeline.

### Continuous integration

Workflows dedicated to continuous integration (CI) are being developed for the nectarchain package. For instance, the package is published on PyPI on releases, as well as a conda package on the conda-forge channel. Notably, a Singularity/Apptainer [5] container image is also built and deployed on the GitHub Container Registry on releases and merged pull requests, which greatly eases software deployment. When processing NectarCAM data on the EGI grid through DIRAC [6], where the data are stored, such an image can readily be instantiated without the need of embedding the software in DIRAC jobs or of deploying it to distributed file systems prior to the job processing. Ultimately, nectarchain could be deployed on the CTA Observatory CVMFS2 instance once stabilized. Work in progress on the CI side concerns unit tests with good coverage as well as extensive code documentation.

Footnote 2: CernVM File System, [https://cvmfs.readthedocs.io/en/stable/](https://cvmfs.readthedocs.io/en/stable/)

## 4 Interface with ctapipe: ctapipe_io_nectarcam

The interface between the data acquisition system and the CTA data processing pipeline for NectarCAM is ensured by a dedicated plugin, ctapipe_io_nectarcam. The plugin reads the data output from the camera server and fills the containers used by ctapipe, the base library upon which the CTA data processing pipeline is built. The ctapipe_io_nectarcam module is distributed through PyPI and as a conda package using the same kind of CI workflows as nectarchain.
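Returning to the SPE spectrum fit of section 3.3.1, the two-peak idea is easy to prototype. The following is a simplified two-Gaussian stand-in for the full model of [3] (our sketch, not the nectarchain implementation), fitted to a toy charge histogram to read the gain off as the peak separation:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(q, a_ped, mu_ped, s_ped, a_spe, gain, s_spe):
    """Pedestal peak at mu_ped plus an SPE peak shifted by `gain`."""
    ped = a_ped * np.exp(-0.5 * ((q - mu_ped) / s_ped) ** 2)
    spe = a_spe * np.exp(-0.5 * ((q - mu_ped - gain) / s_spe) ** 2)
    return ped + spe

# Toy charge spectrum: pedestal at 0 and an SPE peak at a gain of 58 ADC/p.e.
rng = np.random.default_rng(0)
charges = np.concatenate([rng.normal(0.0, 10.0, 40000),
                          rng.normal(58.0, 20.0, 15000)])
counts, edges = np.histogram(charges, bins=200, range=(-60, 200))
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), 0.0, 10.0, counts.max() / 4, 50.0, 20.0]
popt, _ = curve_fit(two_gaussians, centers, counts, p0=p0)
print(f"fitted gain: {popt[4]:.1f} ADC counts per p.e.")
```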
## 5 Monte Carlo simulations

To characterize the performance of the NectarCAM camera, we have in the past produced a Monte Carlo model of the camera that was compared to data taken with a partially equipped prototype of the camera at the test bench in CEA Paris-Saclay (France) and on a prototype of the MST structure in Adlershof (Germany) [7]. As the atmosphere is an integral part of the detection principle, Monte Carlo simulations are needed to convert the number of photons from the shower to the number of p.e. received in the camera. The NectarCAM camera is now complete, and we are updating the Monte Carlo model by comparing the simulations to the data taken with the fully equipped camera at the CEA Paris-Saclay test bench, taking into account the improved timing resolution analysis [8].

## 6 Conclusion

In these proceedings, the calibration pipeline for NectarCAM has been presented. The most important part is the nectarchain pipeline, which is currently used to extract the calibration quantities needed to produce R1 pre-calibrated data. Moreover, nectarchain makes it possible to study the camera performance (pedestal estimation and its evolution in time or with temperature, flat-fielding, gain estimation). ctapipe_io_nectarcam is the next part of the calibration workflow and uses the output of nectarchain to produce the R1 data in a ctapipe format, in order to be CTAO compliant. The Monte Carlo simulations are still being developed and will be the next piece of the calibration pipeline. The current development status has been presented, and we aim to have an operational workflow within the following year.

## Acknowledgments

This work was conducted in the context of the CTA Consortium. We gratefully acknowledge financial support from the agencies and organizations listed here: [https://www.cta-observatory.org/consortium_acknowledgments/](https://www.cta-observatory.org/consortium_acknowledgments/).
2309.05144
Geometry of entanglement and separability in Hilbert subspaces of dimension up to three
We present a complete classification of the geometry of the mutually complementary sets of entangled and separable states in three-dimensional Hilbert subspaces of bipartite and multipartite quantum systems. Our analysis begins by finding the geometric structure of the pure product states in a given three-dimensional Hilbert subspace, which determines all the possible separable and entangled mixed states over the same subspace. In bipartite systems, we characterise the 14 possible qualitatively different geometric shapes for the set of separable states in any three-dimensional Hilbert subspace (5 classes which also appear in two-dimensional subspaces and were found and analysed by Boyer, Liss and Mor [Phys. Rev. A 95:032308, 2017], and 9 novel classes which appear only in three-dimensional subspaces), describe their geometries, and provide figures illustrating them. We also generalise these results to characterise the sets of fully separable states (and hence the complementary sets of somewhat entangled states) in three-dimensional subspaces of multipartite systems. Our results show which geometrical forms quantum entanglement can and cannot take in low-dimensional subspaces.
Rotem Liss, Tal Mor, Andreas Winter
2023-09-10T21:34:35Z
http://arxiv.org/abs/2309.05144v2
# Geometry of entanglement and separability in Hilbert subspaces of dimension up to three

###### Abstract

We present a complete classification of the geometry of entangled and separable states in three-dimensional Hilbert subspaces of bipartite and multipartite quantum systems. Our analysis begins by finding the geometric structure of the pure product states in a given three-dimensional Hilbert subspace, which determines all the possible separable and entangled mixed states over the same subspace. In bipartite systems, we characterise the \(14\) possible qualitatively different geometric shapes for the set of separable states in any three-dimensional Hilbert subspace (\(5\) classes which also appear in two-dimensional subspaces and were found and analysed by Boyer, Liss and Mor [_Phys. Rev. A_ 95:032308, 2017 [1]], and \(9\) novel classes which appear only in three-dimensional subspaces), describe their geometries, and provide figures illustrating them. We also generalise these results to characterise the sets of fully separable and entangled states in three-dimensional subspaces of multipartite systems. Our results show which geometrical forms quantum entanglement can and cannot take in low-dimensional subspaces.

## I Introduction

In the 88 years since the discovery of quantum entanglement [2] and the realisation that it marks the main departure of quantum physics from any form of classical explanation [3; 4; 5], this quantum effect has been promoted from a counter-intuitive foundational phenomenon [6; 7] to the fuel of emerging quantum technologies [8; 9; 10; 11]. Indeed, quantum entanglement remains to this day the most studied and most crucial feature separating non-classical information processing from its classical counterpart [12]. When classifying quantum entanglement, a major problem is deciding whether a specific quantum state presents quantum entanglement (is "entangled") or does not present quantum entanglement (is "separable" or "non-entangled"). It is well known that this general problem is computationally hard (NP-hard) [13; 14; 15]: its hardness results from the fact that quantum separability is a bilinear condition which, if it were easy to solve, would allow the encoding of quadratic constraints into otherwise convex (semidefinite) problems, which is known to give rise to computationally hard problems [16; 17]. However, the known hardness results only apply to the most general quantum states, and in fact they require the consideration of highly mixed quantum states whose density matrices have maximal rank. For example, any pure state (i.e., a rank-one quantum state) is known to be entangled if and only if its partial trace (reduced state) is not pure, a condition easy to check. Formally, a pure state \(\rho^{\rm AB}=|\psi\rangle\!\langle\psi|^{\rm AB}\) is separable if and only if its partial trace \(\rho^{\rm A}\triangleq{\rm Tr}_{\rm B}\,\rho^{\rm AB}\) is pure, which happens if and only if \({\rm Tr}(\rho^{\rm A})^{2}={\rm Tr}\,\rho^{\rm A}=1\). For quantum states whose rank is higher but still relatively small (compared to the dimension of the full Hilbert space \({\cal H}_{\rm A}\otimes{\cal H}_{\rm B}\)), analysing the reduced state is not enough, but separability can still be efficiently detected in some cases. First of all, if \({\rm rank}\,\rho^{\rm AB}<{\rm max}\,\{{\rm rank}\,\rho^{\rm A},{\rm rank}\, \rho^{\rm B}\}\) then the state \(\rho^{\rm AB}\) is necessarily entangled and in fact has distillable entanglement [18].
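The pure-state criterion just stated is easy to phrase numerically; the following is a minimal numpy sketch (the helper names are ours):

```python
import numpy as np

def reduced_state(psi, dA, dB):
    """rho_A = Tr_B |psi><psi| for a pure state psi in C^dA (x) C^dB."""
    m = psi.reshape(dA, dB)      # psi_{ab} viewed as a dA x dB matrix
    return m @ m.conj().T

def pure_state_is_separable(psi, dA, dB, tol=1e-10):
    """A pure state is separable iff Tr[(rho_A)^2] = 1."""
    rA = reduced_state(psi, dA, dB)
    return abs(np.trace(rA @ rA) - 1.0) < tol

# |0>|0> is a product state; the Bell state (|00>+|11>)/sqrt(2) is entangled.
prod = np.array([1, 0, 0, 0], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(pure_state_is_separable(prod, 2, 2))  # True
print(pure_state_is_separable(bell, 2, 2))  # False: Tr[(rho_A)^2] = 1/2
```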
If equality holds (\({\rm rank}\,\rho^{\rm AB}={\rm max}\,\{{\rm rank}\,\rho^{\rm A},{\rm rank}\, \rho^{\rm B}\}\)), or alternatively if \(({\rm rank}\,\rho^{\rm A})({\rm rank}\,\rho^{\rm B})\leq 6\), then the state \(\rho^{\rm AB}\) is separable if and only if it has a "positive partial transpose" (PPT), namely, if applying the partial transpose operator \((\cdot)^{\rm TB}\) to the density matrix \(\rho^{\rm AB}\) results in a positive semidefinite matrix [19; 20; 21]. These results cover all bipartite density matrices of rank up to three. If \({\rm rank}\,\rho^{\rm AB}=4\), the result of Chen and Djokovic [22] shows that \(\rho^{\rm AB}\) is separable if and only if it is PPT and its support includes a non-zero product state. Furthermore, for all ranks bounded away from the maximum (concretely, for all ranks \(r\leq(\dim{\cal H}_{\rm A}-1)(\dim{\cal H}_{\rm B}-1)\)), the separable density matrices of rank \(r\) form a set of measure zero within the set of _all_ density matrices of rank \(r\)[23]. However, while deciding whether a _specific_ low-rank quantum state is entangled or separable is an easy task, characterising the set of _all_ separable states within a specific Hilbert subspace can be much harder. (This is equivalent to analysing the set of separable states inside the _support_ of a specific mixed state \(\rho^{\rm AB}\)--namely, inside the Hilbert subspace spanned by the eigenstates of \(\rho^{\rm AB}\).) For achieving this goal, the first step is classifying the different Hilbert subspaces (and the corresponding mixed states supported on them) according to the geometric picture generated by the set of all separable states inside each subspace. The analysis for two-dimensional Hilbert subspaces (Hilbert subspaces \(S\subseteq{\cal H}_{\rm A}\otimes{\cal H}_{\rm B}\) such that \(\dim S=2\)), and equivalently for rank-2 mixed states, was completed by Boyer and two of the present authors [1]: they found five (one plus one plus three) classes of two-dimensional subspaces corresponding to five qualitatively different geometries of their sets of separable states. Generally speaking, \(S\) may include _no product states_ at all; may include exactly _one_ product state; or may include _two_ linearly independent product states in one of three qualitatively different ways yielding three geometrically different sets of separable states, detailed in Theorem 1 of this paper. In the present paper we push this line of investigation further by considering the case of three-dimensional Hilbert subspaces (Hilbert subspaces \(S\subseteq{\cal H}_{\rm A}\otimes{\cal H}_{\rm B}\) such that \(\dim S=3\)), and equivalently of rank-3 mixed states. We prove that there are 14 geometrically distinct classes of such subspaces, which are the previous 5 classes of Theorem 1 (corresponding to [1]) in case \(S\) includes at most two linearly independent product states, and 9 new classes (in case \(S\) includes three linearly independent product states) with different associated geometries of the sets of separable states. These classes are most naturally described using the dimensions of the projections (partial traces) of the separable states onto \({\cal H}_{\rm A}\) and \({\cal H}_{\rm B}\), which we respectively denote as \(A_{\rm sep}\) and \(B_{\rm sep}\). The rest of this paper is organised as follows: in Section II we define the mathematical setting. 
In Section III we review the previous results of [1] on two-dimensional subspaces, and in Section IV we state and prove our main result as Theorem 2, where the proof is divided according to the dimensions of the local projections. Section V is dedicated to geometric descriptions of the sets of separable states described in Theorem 2, including visualisations of the possible classes. In Section VI we extend the results to multipartite systems. Finally, in Section VII we conclude and discuss the obtained results. ## II Setting We begin the analysis given a rank-3 quantum mixed state \(\rho^{\rm AB}\) on a bipartite system \({\cal H}_{\rm A}\otimes{\cal H}_{\rm B}\). The _support_ of \(\rho^{\rm AB}\) is the three-dimensional Hilbert subspace \(S={\rm supp}\,\rho^{\rm AB}\) which is spanned by the eigenstates of \(\rho^{\rm AB}\); that is, if we write \(\rho^{\rm AB}=p_{1}|\psi_{1}\rangle\!\langle\psi_{1}|^{\rm AB}+p_{2}|\psi_{2} \rangle\!\langle\psi_{2}|^{\rm AB}+p_{3}|\psi_{3}\rangle\!\langle\psi_{3}|^{ \rm AB}\), then \(S=\operatorname{supp}\rho^{\rm AB}=\operatorname{span}\left\{|\psi_{1}\rangle^{ \rm AB},|\psi_{2}\rangle^{\rm AB},|\psi_{3}\rangle^{\rm AB}\right\}\). We point out that, in general, \(\operatorname{supp}\rho=(\ker\rho)^{\perp}\) and \(\operatorname{rank}\rho=\dim(\operatorname{supp}\rho)\) (which in our case equals 3); moreover, \(S=\operatorname{supp}\rho\) includes all pure states appearing in any of the possible decompositions of the mixed state \(\rho\) (see details and proof in Lemma 3 of [1]), so the definition of \(S\) is independent of the specific decomposition chosen for \(\rho\). Formally, given a bipartite Hilbert space \(\mathcal{H}_{\rm A}\otimes\mathcal{H}_{\rm B}\) and given any Hilbert subspace \(S\subseteq\mathcal{H}_{\rm A}\otimes\mathcal{H}_{\rm B}\), we denote by \(S_{\rm sep}\) the subspace spanned by all product states in \(S\), formally defined as: \[S_{\rm sep} \triangleq \operatorname{span}\left\{|\psi\rangle^{\rm AB}\in S\;:\;|\psi \rangle^{\rm AB}\text{ is a product state}\right\} \tag{1}\] \[= \operatorname{span}\left\{|\psi\rangle^{\rm AB}\in S\;:\;\exists |\phi_{\rm A}\rangle^{\rm A}\in\mathcal{H}_{\rm A}\;,\;|\phi_{\rm B}\rangle^{ \rm B}\in\mathcal{H}_{\rm B}\;:\;|\psi\rangle^{\rm AB}=|\phi_{\rm A}\rangle^{ \rm A}|\phi_{\rm B}\rangle^{\rm B}\right\}.\] In our paper, we characterise the set of all separable states (both pure and mixed) over \(S\), a set we denote by \(\mathcal{D}_{\rm sep}^{S}\). This set includes exactly all mixtures (convex combinations) of product states in \(S\). Formally, we define: \[\mathcal{D}_{\rm sep}^{S}\triangleq\{\rho\in\mathcal{D}(S)\;:\;\rho\text{ is separable}\}, \tag{2}\] where \(\mathcal{D}(S)\) is the set of all positive semidefinite operators that have trace \(1\) on the Hilbert space \(S\). Because \(\dim S=3\), the dimension of its subspace \(S_{\rm sep}\) can be \(0\), \(1\), \(2\), or \(3\)[24]. The analysis of the first three cases (\(\dim S_{\rm sep}\in\{0,1,2\}\)) was carried out in [1][25]. We therefore focus here on the last case \(\dim S_{\rm sep}=3\), in which \(S_{\rm sep}=S\) is spanned by three linearly independent product states: \[S=\operatorname{span}\left\{|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B },|\alpha_{2}\rangle^{\rm A}|\beta_{2}\rangle^{\rm B},|\alpha_{3}\rangle^{\rm A }|\beta_{3}\rangle^{\rm B}\right\}. 
\tag{3}\] Now, the classification will depend on the dimensionality of the two _local_ subspaces \[\begin{split} A_{\rm sep}&\triangleq\operatorname{ span}\left\{|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3} \rangle^{\rm A}\right\},\\ B_{\rm sep}&\triangleq\operatorname{span}\left\{| \beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B} \right\},\end{split} \tag{4}\] each of which can be 1-, 2-, or 3-dimensional. Writing all possible combinations of their dimensions \((\dim A_{\rm sep},\dim B_{\rm sep})\), we obtain the following possible combinations: (1,3) and (3,1), (3,3), (2,3) and (3,2), and finally (2,2). The resulting classes corresponding to each combination are presented in Theorem 2. ## III Previous Work of [1] We begin by presenting the result of [1] on a two-dimensional Hilbert subspace \(S\), which geometrically corresponds to a Bloch sphere. This result is mainly based on distinguishing the three simple cases (1,2), (2,1), and (2,2). **Theorem 1** (Boyer/Liss/Mor [1]): _Given a bipartite Hilbert space \(\mathcal{H}_{\rm A}\otimes\mathcal{H}_{\rm B}\) and any two-dimensional subspace \(S\subseteq\mathcal{H}_{\rm A}\otimes\mathcal{H}_{\rm B}\), one of the 3 following cases holds: (note that each case corresponds to a possible dimension of the Hilbert subspace \(S_{\rm sep}\) spanned by all product states in \(S\))_ * \(S\) _includes no product states (_\(\dim S_{\rm sep}=0\)_), in which case all pure and mixed states over_ \(S\) _are entangled; or_ * \(S\) _includes exactly one (pure) product state (_\(\dim S_{\rm sep}=1\)_), in which case all the other pure and mixed states over_ \(S\) _are entangled; or_ _ 3. \(S\) _is spanned by two (pure) product states (_\(\dim S_{\text{sep}}=2\)_), so:_ \(S=\operatorname{span}\left\{|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{ \text{B}},|\alpha_{2}\rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}}\right\}\)_._ _In the third case, let_ \[A_{\text{sep}} \triangleq\operatorname{span}\left\{|\alpha_{1}\rangle^{\text{A}},|\alpha_{2}\rangle^{\text{A}}\right\}, \tag{5}\] \[B_{\text{sep}} \triangleq\operatorname{span}\left\{|\beta_{1}\rangle^{\text{B}},|\beta_{2}\rangle^{\text{B}}\right\},\] _whose dimensions can be \((\dim A_{\text{sep}},\dim B_{\text{sep}})\in\{(1,2),\,(2,1),\,(2,2)\}\). Then, the separability of the pure and mixed states over \(S\) is as follows, depending on the dimensions \((\dim A_{\text{sep}},\dim B_{\text{sep}})\):_ **(1,2):**: \(S=\{|\alpha_{1}\rangle^{\text{A}}\}\otimes B_{\text{sep}}\) _is simply a local qubit in subsystem_ \(\mathcal{H}_{\text{B}}\)_, and all pure and mixed states over_ \(S\) _are separable. (In fact, all of them are product states of a fixed_ \(|\alpha_{1}\rangle^{\text{A}}\) _and a qubit state over_ \(B_{\text{sep}}\)_.)_ **(2,1):**: \(S=A_{\text{sep}}\otimes\left\{|\beta_{1}\rangle^{\text{B}}\right\}\) _is simply a local qubit in subsystem_ \(\mathcal{H}_{\text{A}}\)_, and all pure and mixed states over_ \(S\) _are separable. (In fact, all of them are product states of a qubit state over_ \(A_{\text{sep}}\) _and a fixed_ \(|\beta_{1}\rangle^{\text{B}}\)_.)_ **(2,2):**: _The separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\text{A}}\otimes|\beta_{1}\rangle\! \langle\beta_{1}|^{\text{B}}\) _and_ \(|\alpha_{2}\rangle\!\langle\alpha_{2}|^{\text{A}}\otimes|\beta_{2}\rangle\! 
\langle\beta_{2}|^{\text{B}}\)_; all the other pure and mixed states over_ \(S\) _are entangled._ Proof.: Proved in [1] as its Theorem 9, which was phrased in different terms: our case (i) corresponds to Class 5 of [1], our case (ii) corresponds to Class 4 of [1], and in our case (iii), options (1,2) and (2,1) correspond to Class 1 of [1], and option (2,2) corresponds to Class 2+3 of [1]. **Remark**.: We point out that [1] combined both options \((\dim A_{\text{sep}},\dim B_{\text{sep}})\in\{(1,2),\,(2,1)\}\) into one class (Class 1), because they give the same final result (all pure and mixed over \(S\) being product states) and are related to one another by the exchange symmetry between \(\mathcal{H}_{\text{A}}\) and \(\mathcal{H}_{\text{B}}\). Here we distinguish between these two options, so that classification becomes easier in the more involved case of a three-dimensional Hilbert subspace (Theorem 2). On the other hand, [1] split option \((\dim A_{\text{sep}},\dim B_{\text{sep}})=(2,2)\) into two classes: Class 2 where the two states are orthogonal \((|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{\text{B}}\perp|\alpha_{2} \rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}})\) and Class 3 where they are non-orthogonal. Here we drop this distinction because it disappears under the symmetry of local invertible operations. As we explain below in Section IV, for invertible \(X\) and \(Y\) acting on \(\mathcal{H}_{\text{A}}\) and \(\mathcal{H}_{\text{B}}\), respectively, \(\rho^{\text{AB}}\) is separable if and only if \((X\otimes Y)\rho^{\text{AB}}(X\otimes Y)^{\dagger}\) is separable. We can thus use \(X\) (or \(Y\)) to map any linearly independent set of states in \(\mathcal{H}_{\text{A}}\) (or \(\mathcal{H}_{\text{B}}\), respectively) to any other linearly independent set of states, including in particular an orthonormal basis. Classes 2 and 3 of [1] are thus the same under this symmetry, so we cannot distinguish between them here. ## IV Main result We can now state our main result, modeled after Theorem 1. The theorem uses the notations defined in Section II. We note that \(n\) vectors in a vector space of dimension \(d\) are said to be "in general position" if any subset of size \(d\) of them is linearly independent. In addition, \(\mathcal{D}(S)\) is the set of all density matrices over the Hilbert space \(S\), and \(\operatorname{conv}(A)\) is the set of all convex combinations (mixtures) of the states in the set \(A\). Moreover, in the proof, quantum states are said to be "equal" even if they differ by a physically-irrelevant global phase \(e^{i\varphi}\), and they are always assumed to be normalised. In the proof we also use the important observation that separability of a state is invariant under local invertible operations. In other words, for any two invertible matrices \(X\in\operatorname{GL}(\mathcal{H}_{\text{A}})\) and \(Y\in\operatorname{GL}(\mathcal{H}_{\text{B}})\), the state \(\rho^{\text{AB}}\) is separable if and only if the state \((X\otimes Y)\rho^{\text{AB}}(X\otimes Y)^{\dagger}\) is separable. Under this symmetry, the support is mapped accordingly: if \(S=\operatorname{supp}\rho^{\text{AB}}\), then \(\operatorname{supp}[(X\otimes Y)\rho^{\text{AB}}(X\otimes Y)^{\dagger}]=(X \otimes Y)S\). We will thus study the different shapes the set of separable states \(\mathcal{D}^{S}_{\text{sep}}\) can take, where any two subspaces related by a local invertible map \(X\otimes Y\) can be treated as equivalent. 
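The dimensions \((\dim A_{\rm sep},\dim B_{\rm sep})\) that organise the classification in the theorem below are simply the ranks of the matrices formed by stacking the local vectors of Eq. (4); a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def sep_dims(alphas, betas, tol=1e-10):
    """Return (dim A_sep, dim B_sep) for lists of local vectors
    alpha_i in H_A and beta_i in H_B spanning the product states."""
    dA = np.linalg.matrix_rank(np.column_stack(alphas), tol=tol)
    dB = np.linalg.matrix_rank(np.column_stack(betas), tol=tol)
    return dA, dB

# Example: three product states sharing the same first factor -> case (1, 3).
a = np.array([1, 0, 0], dtype=complex)
alphas = [a, a, a]
betas = [np.eye(3, dtype=complex)[:, k] for k in range(3)]
print(sep_dims(alphas, betas))  # (1, 3): every state in S is separable
```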
**Theorem 2**: _Given a bipartite Hilbert space \({\cal H}_{\rm A}\otimes{\cal H}_{\rm B}\) and any three-dimensional subspace \(S\subseteq{\cal H}_{\rm A}\otimes{\cal H}_{\rm B}\), let \(S_{\text{sep}}\) be the subspace spanned by all product states in \(S\) (that is, \(S_{\text{sep}}\triangleq{\rm span}\{|\psi\rangle^{\rm AB}\in S\ :\ |\psi\rangle^{\rm AB}\) is a product state\(\}\)). We would like to characterise the set of all separable states (both pure and mixed) over \(S\), a set we denote by \({\cal D}^{S}_{\text{sep}}\) and formally define as follows:_ \[{\cal D}^{S}_{\text{sep}}\triangleq\{\rho\in{\cal D}(S)\ :\ \rho\ \text{is separable}\}. \tag{6}\] _If \(\dim S_{\text{sep}}\leq 2\), then \({\cal D}^{S}_{\text{sep}}\) belongs to one of the 5 classes described in Theorem 1._ _If \(\dim S_{\text{sep}}=3\) (so \(S_{\text{sep}}=S\)), then using the notations of Eqs. (3)-(4):_ \[\begin{split} S&={\rm span}\left\{|\alpha_{1} \rangle^{\rm A}|\beta_{1}\rangle^{\rm B},|\alpha_{2}\rangle^{\rm A}|\beta_{2} \rangle^{\rm B},|\alpha_{3}\rangle^{\rm A}|\beta_{3}\rangle^{\rm B}\right\}, \\ A_{\text{sep}}&\triangleq{\rm span}\left\{|\alpha_{ 1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\right\},\\ B_{\text{sep}}&\triangleq{\rm span}\left\{|\beta_{ 1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\right\}, \end{split} \tag{7}\] _one of the 9 following cases occurs, depending on the dimensions \((\dim A_{\text{sep}},\dim B_{\text{sep}})\):_ **(1,3):**: _All pure and mixed states over_ \(S\) _are separable, because_ \(S=\left\{|\alpha_{1}\rangle^{\rm A}\right\}\otimes B_{\text{sep}}\)_. (In fact, all states over_ \(S\) _are product states of a fixed state_ \(|\alpha_{1}\rangle^{\rm A}\) _and a qutrit state over_ \(B_{\text{sep}}\)_.) Formally,_ \({\cal D}^{S}_{\text{sep}}={\cal D}(S)\)_. This case is illustrated in Fig._ 1_._ **(3,1):**: _This case is symmetric to (1,3): all pure and mixed states over_ \(S\) _are separable, because_ \(S=A_{\text{sep}}\otimes\left\{|\beta_{1}\rangle^{\rm B}\right\}\)_. (In fact, all states over_ \(S\) _are product states of a qutrit state over_ \(A_{\text{sep}}\) _and a fixed state_ \(|\beta_{1}\rangle^{\rm B}\)_.) Formally,_ \({\cal D}^{S}_{\text{sep}}={\cal D}(S)\)_. This case is illustrated in Fig._ 1_._ **(3,3):**: _The separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\! \langle\beta_{1}|^{\rm B}\)_,_ \(|\alpha_{2}\rangle\!\langle\alpha_{2}|^{\rm A}\otimes|\beta_{2}\rangle\! \langle\beta_{2}|^{\rm B}\)_, and_ \(|\alpha_{3}\rangle\!\langle\alpha_{3}|^{\rm A}\otimes|\beta_{3}\rangle\! \langle\beta_{3}|^{\rm B}\)_; all the other pure and mixed states over_ \(S\) _are entangled. Formally:_ \[{\cal D}^{S}_{\text{sep}}={\rm conv}\left\{|\alpha_{1}\rangle\!\langle\alpha_{ 1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\,\ |\alpha_{2}\rangle\!\langle\alpha_{2}|^{\rm A}\otimes|\beta_{2}\rangle\! \langle\beta_{2}|^{\rm B}\,\ |\alpha_{3}\rangle\!\langle\alpha_{3}|^{\rm A} \otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\rm B}\right\}. 
\tag{8}\] _This case is illustrated in Fig._ 2_.

**(2,3):**: _We distinguish two subcases:_ 1. _If the three states_ \(|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\) _are linearly dependent but in general position (i.e., any two of them are linearly independent), then the separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\)_,_ \(|\alpha_{2}\rangle\!\langle\alpha_{2}|^{\rm A}\otimes|\beta_{2}\rangle\!\langle\beta_{2}|^{\rm B}\)_, and_ \(|\alpha_{3}\rangle\!\langle\alpha_{3}|^{\rm A}\otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\rm B}\)_, identically to case (3,3). Formally:_ \[{\cal D}^{S}_{\text{sep}}={\rm conv}\left\{|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\,\ |\alpha_{2}\rangle\!\langle\alpha_{2}|^{\rm A}\otimes|\beta_{2}\rangle\!\langle\beta_{2}|^{\rm B}\,\ |\alpha_{3}\rangle\!\langle\alpha_{3}|^{\rm A}\otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\rm B}\right\}.\] (9) _This case is illustrated in Fig._ 2_. 2. _Otherwise, without loss of generality we can assume_ \(|\alpha_{2}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\)_, and the separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\) _with any pure or mixed state over the space_ \(\left\{|\alpha_{2}\rangle^{\rm A}\right\}\otimes{\rm span}\left\{|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\right\}\)_. Formally, in case_ \(|\alpha_{2}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\)_:_ \[{\cal D}^{S}_{\text{sep}}={\rm conv}\left[\left\{|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\right\}\cup{\cal D}\left(\left\{|\alpha_{2}\rangle^{\rm A}\right\}\otimes{\rm span}\left\{|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\right\}\right)\right],\] (10) _and symmetric results are obtained in case_ \(|\alpha_{1}\rangle^{\rm A}=|\alpha_{2}\rangle^{\rm A}\) _or_ \(|\alpha_{1}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\)_. This case is illustrated in Fig._ 3_.

**(3,2):**: _This case is symmetric to (2,3): we distinguish two subcases:_ 1. _If the three states_ \(|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\) _are linearly dependent but in general position (i.e., any two of them are linearly independent), then the separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\)_,_ \(|\alpha_{2}\rangle\!\langle\alpha_{2}|^{\rm A}\otimes|\beta_{2}\rangle\!\langle\beta_{2}|^{\rm B}\)_, and_ \(|\alpha_{3}\rangle\!\langle\alpha_{3}|^{\rm A}\otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\rm B}\)_, identically to case (3,3). Formally:_ \[{\cal D}^{S}_{\text{sep}}={\rm conv}\left\{|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\,\ |\alpha_{2}\rangle\!\langle\alpha_{2}|^{\rm A}\otimes|\beta_{2}\rangle\!\langle\beta_{2}|^{\rm B}\,\ |\alpha_{3}\rangle\!\langle\alpha_{3}|^{\rm A}\otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\rm B}\right\}.\] (11) _This case is illustrated in Fig._ 2_. 2. _Otherwise, without loss of generality we can assume_ \(|\beta_{2}\rangle^{\rm B}=|\beta_{3}\rangle^{\rm B}\)_, and the separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!
\langle\beta_{1}|^{\rm B}\) _with any pure or mixed state over the space_ \({\rm span}\left\{|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\right\} \otimes\left\{|\beta_{2}\rangle^{\rm B}\right\}\)_. Formally, in case_ \(|\beta_{2}\rangle^{\rm B}=|\beta_{3}\rangle^{\rm B}\)_:_ \[{\cal D}^{S}_{\rm sep}={\rm conv}\left[\left\{|\alpha_{1}\rangle\!\langle \alpha_{1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\right\} \cup{\cal D}\left({\rm span}\left\{|\alpha_{2}\rangle^{\rm A},|\alpha_{3} \rangle^{\rm A}\right\}\otimes\left\{|\beta_{2}\rangle^{\rm B}\right\}\right) \right],\] (12) _and symmetric results are obtained in case_ \(|\beta_{1}\rangle^{\rm B}=|\beta_{2}\rangle^{\rm B}\) _or_ \(|\beta_{1}\rangle^{\rm B}=|\beta_{3}\rangle^{\rm B}\)_. This case is illustrated in Fig._ 3_._ **(2,2):**: _We distinguish two subcases:_ 1. _If the three states_ \(|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\) _are not in general position, or symmetrically if the three states_ \(|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\) _are not in general position, then without loss of generality we can assume_ \(|\alpha_{2}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\) _and_ \(|\beta_{1}\rangle^{\rm B}\neq|\beta_{2}\rangle^{\rm B}\)_, and then the separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of any pure or mixed state over the subspace_ \(S_{1}\triangleq\left\{|\alpha_{2}\rangle^{\rm A}\right\}\otimes{\rm span} \left\{|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B}\right\}\) _with any pure or mixed state over the subspace_ \(S_{2}\triangleq{\rm span}\left\{|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{ \rm A}\right\}\otimes\left\{|\beta_{1}\rangle^{\rm B}\right\}\)_. Formally, in case_ \(|\alpha_{2}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\) _(and_ \(|\beta_{1}\rangle^{\rm B}\neq|\beta_{2}\rangle^{\rm B}\)_):_ \[{\cal D}^{S}_{\rm sep} = {\rm conv}\left[{\cal D}(S_{1})\cup{\cal D}(S_{2})\right]\] (13) \[= {\rm conv}\left[{\cal D}\left(\left\{|\alpha_{2}\rangle^{\rm A} \right\}\otimes{\rm span}\left\{|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{ \rm B}\right\}\right)\cup{\cal D}\left({\rm span}\left\{|\alpha_{1}\rangle^{ \rm A},|\alpha_{2}\rangle^{\rm A}\right\}\otimes\left\{|\beta_{1}\rangle^{\rm B }\right\}\right)\right]\] \[= {\rm conv}\left[{\cal D}\left(\left\{|\alpha_{2}\rangle^{\rm A} \right\}\otimes B_{\rm sep}\right)\cup{\cal D}\left(A_{\rm sep}\otimes\left\{| \beta_{1}\rangle^{\rm B}\right\}\right)\right],\] and symmetric results are obtained in case_ \(|\alpha_{1}\rangle^{\rm A}=|\alpha_{2}\rangle^{\rm A}\)_,_ \(|\alpha_{1}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\)_,_ \(|\beta_{2}\rangle^{\rm B}=|\beta_{3}\rangle^{\rm B}\)_,_ \(|\beta_{1}\rangle^{\rm B}=|\beta_{2}\rangle^{\rm B}\)_, or_ \(|\beta_{1}\rangle^{\rm B}=|\beta_{3}\rangle^{\rm B}\)_. This case is illustrated in Fig._ 4_._ 2. 
_If the three states_ \(|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\) _and the three states_ \(|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\) _are both linearly dependent but in general position (i.e., any two states of_ \(|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\) _are linearly independent, and similarly, any two states of_ \(|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\) _are linearly independent), then there exists an invertible linear map_ \(L:A_{\rm sep}\to B_{\rm sep}\) _satisfying_ \(|\beta_{i}\rangle^{\rm B}\propto L|\alpha_{i}\rangle^{\rm A}\) _for all_ \(i=1,2,3\)_, such that the product states in_ \(S\) _are exactly_ \(\left\{|\psi\rangle^{\rm A}\,(L|\psi\rangle)^{\rm B}\ :\ |\psi\rangle^{\rm A}\in A_{\rm sep}\right\}\) _(up to a normalisation factor), and the separable pure and mixed states over_ \(S\) _are all mixtures of these product states in_ \(S\)_. Formally:_ \[{\cal D}^{S}_{\rm sep}={\rm conv}\left\{|\Psi\rangle\!\langle\Psi|^{\rm AB}\ :\ |\Psi\rangle^{\rm AB}=\gamma|\psi\rangle^{\rm A}\,(L|\psi\rangle)^{\rm B}\,\ |\psi\rangle^{\rm A}\in A_{\rm sep}\,\ \gamma\in\mathbb{C}\,\ \left\||\Psi\rangle^{\rm AB}\right\|=1\right\}. \tag{14}\] _This case is illustrated in Fig._ 5_.

**Proof** If \(\dim S_{\rm sep}\leq 2\), then applying Theorem 1 to a two-dimensional subspace of \(S\) which includes \(S_{\rm sep}\) proves that the set \({\cal D}^{S_{\rm sep}}_{\rm sep}\) of pure and mixed separable states over \(S_{\rm sep}\) belongs to one of the five classes described in Theorem 1. All the other pure and mixed states over \(S\) (that are not states over \(S_{\rm sep}\)) must be entangled. Therefore, \({\cal D}^{S}_{\rm sep}={\cal D}^{S_{\rm sep}}_{\rm sep}\) indeed belongs to one of the five classes described in Theorem 1. If \(\dim S_{\rm sep}=3\), then \(S_{\rm sep}=S\), in which case we divide our proof into cases according to the dimensions \((\dim A_{\rm sep},\dim B_{\rm sep})\):

### (1,3) and (3,1)

The case (1,3) means that \(\dim A_{\text{sep}}=1\), so \(|\alpha_{1}\rangle^{\text{A}}=|\alpha_{2}\rangle^{\text{A}}=|\alpha_{3}\rangle^{\text{A}}\). Therefore, \[\begin{split} S&=\text{span}\left\{|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{\text{B}},|\alpha_{2}\rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}},|\alpha_{3}\rangle^{\text{A}}|\beta_{3}\rangle^{\text{B}}\right\}\\ &=\left\{|\alpha_{1}\rangle^{\text{A}}\right\}\otimes\text{span}\left\{|\beta_{1}\rangle^{\text{B}},|\beta_{2}\rangle^{\text{B}},|\beta_{3}\rangle^{\text{B}}\right\}=\left\{|\alpha_{1}\rangle^{\text{A}}\right\}\otimes B_{\text{sep}},\end{split} \tag{15}\] which means that all pure and mixed states over \(S\) are product states: \(\mathcal{D}_{\text{sep}}^{S}=\mathcal{D}(S)\), as needed. The proof for the case (3,1) is symmetric, so \(S=A_{\text{sep}}\otimes\left\{|\beta_{1}\rangle^{\text{B}}\right\}\) and \(\mathcal{D}_{\text{sep}}^{S}=\mathcal{D}(S)\), as needed.

### (3,3)

In this case, all three states \(|\alpha_{1}\rangle^{\text{A}},|\alpha_{2}\rangle^{\text{A}},|\alpha_{3}\rangle^{\text{A}}\) are linearly independent, and so are \(|\beta_{1}\rangle^{\text{B}},|\beta_{2}\rangle^{\text{B}},|\beta_{3}\rangle^{\text{B}}\).
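Before the formal argument, the claim of this case can be sanity-checked numerically: for generic linearly independent local factors, only the three spanning states (up to scalar multiples) pass the product-state test. The sketch below is our own self-contained illustration with hypothetical names, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_product(psi, tol=1e-10):
    # A pure state on C^3 (x) C^3 is a product state iff the second
    # singular value of its 3x3 coefficient matrix vanishes.
    s = np.linalg.svd(psi.reshape(3, 3), compute_uv=False)
    return s[1] <= tol * s[0]

# Rows are three generic vectors; linearly independent with probability 1.
alphas = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
betas = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
spanning = [np.kron(a, b) for a, b in zip(alphas, betas)]

print(is_product(spanning[0]))                # True: a spanning state
print(is_product(spanning[0] + spanning[1]))  # False: entangled
print(is_product(sum(spanning)))              # False: entangled
```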
Let \(\left\{|1\rangle^{\text{A}},|2\rangle^{\text{A}},|3\rangle^{\text{A}}\right\}\) and \(\left\{|1\rangle^{\text{B}},|2\rangle^{\text{B}},|3\rangle^{\text{B}}\right\}\) be orthonormal bases of \(A_{\text{sep}}\) and \(B_{\text{sep}}\), respectively. We can thus apply a local invertible map \(X\otimes Y\), where \(X\) is an invertible matrix over \(A_{\text{sep}}\) mapping \(|\alpha_{1}\rangle^{\text{A}}\mapsto|1\rangle^{\text{A}}\), \(|\alpha_{2}\rangle^{\text{A}}\mapsto|2\rangle^{\text{A}}\), and \(|\alpha_{3}\rangle^{\text{A}}\mapsto|3\rangle^{\text{A}}\), while \(Y\) is an invertible matrix over \(B_{\text{sep}}\) mapping \(|\beta_{1}\rangle^{\text{B}}\mapsto|1\rangle^{\text{B}}\), \(|\beta_{2}\rangle^{\text{B}}\mapsto|2\rangle^{\text{B}}\), and \(|\beta_{3}\rangle^{\text{B}}\mapsto|3\rangle^{\text{B}}\) (both maps exist, because there always exists an invertible matrix mapping a given set of linearly independent states into another given set of linearly independent states). Since \(S=\text{span}\left\{|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{\text{B}},|\alpha_{2}\rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}},|\alpha_{3}\rangle^{\text{A}}|\beta_{3}\rangle^{\text{B}}\right\}\), all elements of \(S\) are superpositions \(u|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{\text{B}}+v|\alpha_{2}\rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}}+w|\alpha_{3}\rangle^{\text{A}}|\beta_{3}\rangle^{\text{B}}\). Under the above map, they are mapped to the respective superpositions \(u|1\rangle^{\text{A}}|1\rangle^{\text{B}}+v|2\rangle^{\text{A}}|2\rangle^{\text{B}}+w|3\rangle^{\text{A}}|3\rangle^{\text{B}}\). Since any local invertible map preserves separability, we notice that each state in \(S\) is a product state if and only if the state to which it is mapped is a product state. However, since \(u|1\rangle^{\text{A}}|1\rangle^{\text{B}}+v|2\rangle^{\text{A}}|2\rangle^{\text{B}}+w|3\rangle^{\text{A}}|3\rangle^{\text{B}}\) is in its Schmidt form and has a Schmidt number of \(3\) or less, it is a product state if and only if two of its coefficients are zero (otherwise, it is entangled). Therefore, the only product states in \(S\) are the spanning states \(|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{\text{B}},|\alpha_{2}\rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}},|\alpha_{3}\rangle^{\text{A}}|\beta_{3}\rangle^{\text{B}}\). This finding implies that the set of separable pure and mixed states over \(S\) includes exactly all mixtures of these three product states, so: \[\mathcal{D}_{\text{sep}}^{S}=\text{conv}\left\{|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\text{A}}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\text{B}}\,\ |\alpha_{2}\rangle\!\langle\alpha_{2}|^{\text{A}}\otimes|\beta_{2}\rangle\!\langle\beta_{2}|^{\text{B}}\,\ |\alpha_{3}\rangle\!\langle\alpha_{3}|^{\text{A}}\otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\text{B}}\right\}, \tag{16}\] as needed.

### (2,3) and (3,2)

In the case (2,3) we can assume, without loss of generality, that \(|\alpha_{1}\rangle^{\text{A}}\neq|\alpha_{2}\rangle^{\text{A}}\), so \(|\alpha_{3}\rangle^{\text{A}}=a|\alpha_{1}\rangle^{\text{A}}+b|\alpha_{2}\rangle^{\text{A}}\). On the other hand, \(|\beta_{1}\rangle^{\text{B}},|\beta_{2}\rangle^{\text{B}},|\beta_{3}\rangle^{\text{B}}\) are linearly independent.
Using again local invertible linear maps, we can map \(|\alpha_{1}\rangle^{\text{A}}\mapsto|1\rangle^{\text{A}}\quad,\quad|\alpha_{2} \rangle^{\text{A}}\mapsto|2\rangle^{\text{A}}\quad,\quad|\beta_{1}\rangle^{ \text{B}}\mapsto|1\rangle^{\text{B}}\quad,\)\(|\beta_{2}\rangle^{\text{B}}\mapsto|2\rangle^{\text{B}}\), and \(|\beta_{3}\rangle^{\text{B}}\mapsto|3\rangle^{\text{B}}\), which also maps \(|\alpha_{3}\rangle^{\text{A}}\mapsto a|1\rangle^{\text{A}}+b|2\rangle^{\text{A}}\). A general state in \(S\) is then a superposition of the following form: \[\begin{split} u|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{ \text{B}}+v|\alpha_{2}\rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}}+w|\alpha_{3 }\rangle^{\text{A}}|\beta_{3}\rangle^{\text{B}}&\mapsto u|1\rangle^{ \text{A}}|1\rangle^{\text{B}}+v|2\rangle^{\text{A}}|2\rangle^{\text{B}}+wa|1 \rangle^{\text{A}}|3\rangle^{\text{B}}+wb|2\rangle^{\text{A}}|3\rangle^{\text{B}} \\ &=|1\rangle^{\text{A}}\left(u|1\rangle^{\text{B}}+wa|3\rangle^{ \text{B}}\right)+|2\rangle^{\text{A}}\left(v|2\rangle^{\text{B}}+wb|3\rangle^{ \text{B}}\right).\end{split} \tag{17}\] This is a product state if and only if \(u|1\rangle^{\text{B}}+wa|3\rangle^{\text{B}}\) and \(v|2\rangle^{\text{B}}+wb|3\rangle^{\text{B}}\) are linearly dependent (including the possibility that one of them is \(0\)). There are thus two subcases: 1. If the three states \(|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\) are in general position (that is, any two of them are linearly independent), then in particular \(a,b\neq 0\). This means that for \(u|1\rangle^{\rm B}+wa|3\rangle^{\rm B}\) and \(v|2\rangle^{\rm B}+wb|3\rangle^{\rm B}\) to be linearly dependent, there are only three possibilities: \(u=v=0\), \(u=w=0\), and \(v=w=0\). Therefore, the only product states in \(S\) are the spanning states \(|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B},|\alpha_{2}\rangle^{\rm A} |\beta_{2}\rangle^{\rm B},|\alpha_{3}\rangle^{\rm A}|\beta_{3}\rangle^{\rm B}\). This implies, identically to case (3,3), that the only separable states over \(S\) are the mixtures of these three product states, so: \[\mathcal{D}^{S}_{\rm sep}={\rm conv}\left\{|\alpha_{1}\rangle\!\langle\alpha_ {1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\,\,|\alpha_{2}\rangle\!\langle\alpha_{2}|^{\rm A} \otimes|\beta_{2}\rangle\!\langle\beta_{2}|^{\rm B}\,\,|\alpha_{3}\rangle\! \langle\alpha_{3}|^{\rm A}\otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\rm B} \right\},\] (18) as needed. 2. Otherwise, without loss of generality we can assume \(|\alpha_{2}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\), so \(a=0\quad,\quad b=1\). In this case, the resulting states \(u|1\rangle^{\rm B}\) and \(v|2\rangle^{\rm B}+w|3\rangle^{\rm B}\) are linearly dependent if and only if \(u=0\) or \(v=w=0\). Therefore, the product states in \(S\) are either of the form \(|\alpha_{2}\rangle^{\rm A}\left(v|\beta_{2}\rangle^{\rm B}+w|\beta_{3}\rangle^{ \rm B}\right)\) (corresponding to case \(u=0\)) which are exactly all states in the Hilbert subspace \(\left\{|\alpha_{2}\rangle^{\rm A}\right\}\otimes{\rm span}\left\{|\beta_{2} \rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\right\}\), or the single state \(|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B}\) (corresponding to case \(v=w=0\)). 
This implies that the separable pure and mixed states over \(S\) are all mixtures of \(|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B}\) with any pure or mixed state from \(\left\{|\alpha_{2}\rangle^{\rm A}\right\}\otimes{\rm span}\left\{|\beta_{2} \rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\right\}\), so: \[\mathcal{D}^{S}_{\rm sep}={\rm conv}\left[\{|\alpha_{1}\rangle\!\langle\alpha_ {1}|^{\rm A}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\rm B}\}\cup\mathcal{ D}\left(\left\{|\alpha_{2}\rangle^{\rm A}\right\}\otimes{\rm span}\left\{|\beta_{2} \rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\right\}\right)\right],\] (19) as needed. The proof for the case (3,2) is symmetric. ### (2,2) We can assume, without loss of generality, that \(|\alpha_{1}\rangle^{\rm A}\neq|\alpha_{2}\rangle^{\rm A}\) and \(|\beta_{1}\rangle^{\rm B}\neq|\beta_{2}\rangle^{\rm B}\), so \(|\alpha_{3}\rangle^{\rm A}=a|\alpha_{1}\rangle^{\rm A}+b|\alpha_{2}\rangle^{ \rm A}\) and \(|\beta_{3}\rangle^{\rm B}=c|\beta_{1}\rangle^{\rm B}+d|\beta_{2}\rangle^{\rm B}\). As before, using local invertible linear maps, we can map \(|\alpha_{1}\rangle^{\rm A}\mapsto|1\rangle^{\rm A}\quad,\quad|\alpha_{2} \rangle^{\rm A}\mapsto|2\rangle^{\rm A}\quad,\quad|\beta_{1}\rangle^{\rm B} \mapsto|1\rangle^{\rm B}\), and \(|\beta_{2}\rangle^{\rm B}\mapsto|2\rangle^{\rm B}\), which also maps \(|\alpha_{3}\rangle^{\rm A}\mapsto a|1\rangle^{\rm A}+b|2\rangle^{\rm A}\) and \(|\beta_{3}\rangle^{\rm B}\mapsto c|1\rangle^{\rm B}+d|2\rangle^{\rm B}\). We can thus write a general state in \(S\) as a superposition: \[\begin{split} u|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B }+v|\alpha_{2}\rangle^{\rm A}|\beta_{2}\rangle^{\rm B}+w|\alpha_{3}\rangle^{ \rm A}|\beta_{3}\rangle^{\rm B}&\mapsto u|1\rangle^{\rm A}|1 \rangle^{\rm B}+v|2\rangle^{\rm A}|2\rangle^{\rm B}\\ &+wac|1\rangle^{\rm A}|1\rangle^{\rm B}+wbd|2\rangle^{\rm A}|2 \rangle^{\rm B}+wad|1\rangle^{\rm A}|2\rangle^{\rm B}+wbc|2\rangle^{\rm A}|1 \rangle^{\rm B}\\ &=(u+wac)|1\rangle^{\rm A}|1\rangle^{\rm B}+(v+wbd)|2\rangle^{\rm A }|2\rangle^{\rm B}\\ &+wad|1\rangle^{\rm A}|2\rangle^{\rm B}+wbc|2\rangle^{\rm A}|1 \rangle^{\rm B}.\end{split} \tag{20}\] This is a product state if and only if the determinant of the coefficients is \(0\), i.e. \[(u+wac)(v+wbd)=w^{2}abcd. \tag{21}\] There are thus two subcases: 1. If either the three states \(|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\) or the three states \(|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\) are not in general position (that is, if not every two of these states are linearly independent), then one of \(a\), \(b\), \(c\), or \(d\) is \(0\). Without loss of generality, assume \(a=0\), so \(|\alpha_{2}\rangle^{\rm A}=|\alpha_{3}\rangle^{\rm A}\). This means in particular that the following three states are in \(S\): \(|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B}\), \(|\alpha_{2}\rangle^{\rm A}|\beta_{2}\rangle^{\rm B}\), and \(|\alpha_{3}\rangle^{\rm A}|\beta_{3}\rangle^{\rm B}=|\alpha_{2}\rangle^{\rm A} \left(c|\beta_{1}\rangle^{\rm B}+d|\beta_{2}\rangle^{\rm B}\right)\), so by linearity we deduce that \(|\alpha_{2}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B}\) is also in (note that \(c\neq 0\), because otherwise the equality \(|\alpha_{2}\rangle^{\rm A}|\beta_{2}\rangle^{\rm B}=|\alpha_{3}\rangle^{\rm A}| \beta_{3}\rangle^{\rm B}\) would hold, which would contradict the fact \(\dim S=3\)). 
Thus: \[S={\rm span}\left\{|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B},|\alpha_ {2}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B},|\alpha_{2}\rangle^{\rm A}|\beta_{2 }\rangle^{\rm B}\right\}.\] (22) \(S\) is thus the span of a union of the two local qubit spaces \(S_{1}\triangleq\left\{|\alpha_{2}\rangle^{\rm A}\right\}\otimes{\rm span} \left\{|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B}\right\}\) and \(S_{2}\triangleq{\rm span}\left\{|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^ {\rm A}\right\}\otimes\left\{|\beta_{1}\rangle^{\rm B}\right\}\), which intersect in \(|\alpha_{2}\rangle^{\rm A}|\beta_{1}\rangle^{\rm B}\). Thus, the separable pure and mixed states over \(S\) are exactly all mixtures (convex combinations) of any pure or mixed state over the space \(S_{1}\) with any pure or mixed state over the space \(S_{2}\). Formally: \[\mathcal{D}^{S}_{\rm sep} = {\rm conv}\left[\mathcal{D}(S_{1})\cup\mathcal{D}(S_{2})\right]\] (23) \[= {\rm conv}\left[\mathcal{D}\left(\left\{|\alpha_{2}\rangle^{\rm A }\right\}\otimes{\rm span}\left\{|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^ {\rm B}\right\}\right)\cup\mathcal{D}\left({\rm span}\left\{|\alpha_{1}\rangle^ {\rm A},|\alpha_{2}\rangle^{\rm A}\right\}\otimes\left\{|\beta_{1}\rangle^{\rm B }\right\}\right)\right]\] \[= {\rm conv}\left[\mathcal{D}\left(\left\{|\alpha_{2}\rangle^{\rm A }\right\}\otimes B_{\rm sep}\right)\cup\mathcal{D}\left(A_{\rm sep}\otimes \left\{|\beta_{1}\rangle^{\rm B}\right\}\right)\right],\] as needed. 2. Otherwise (if both \(|\alpha_{1}\rangle^{\rm A},|\alpha_{2}\rangle^{\rm A},|\alpha_{3}\rangle^{\rm A}\) and \(|\beta_{1}\rangle^{\rm B},|\beta_{2}\rangle^{\rm B},|\beta_{3}\rangle^{\rm B}\) are in general position), this means that \(a,b,c,d\neq 0\). We can thus choose new variables \(x\triangleq u+wac\quad,\quad y\triangleq v+wbd\), and \(z\triangleq wbc\), so that Eq. (21) becomes \[xy=w^{2}abcd=z^{2}\frac{ad}{bc},\] (24) which means that all product states in \(S\) are of the form: \[\begin{split} u|\alpha_{1}\rangle^{\rm A}|\beta_{1}\rangle^{ \rm B}+v|\alpha_{2}\rangle^{\rm A}|\beta_{2}\rangle^{\rm B}+w|\alpha_{3} \rangle^{\rm A}|\beta_{3}\rangle^{\rm B}&\mapsto x|1\rangle^{ \rm A}|1\rangle^{\rm B}+y|2\rangle^{\rm A}|2\rangle^{\rm B}\\ &+z\frac{ad}{bc}|1\rangle^{\rm A}|2\rangle^{\rm B}+z|2\rangle^{ \rm A}|1\rangle^{\rm B}\\ &\propto x^{2}|1\rangle^{\rm A}|1\rangle^{\rm B}+xy|2\rangle^{\rm A }|2\rangle^{\rm B}\\ &+xz\frac{ad}{bc}|1\rangle^{\rm A}|2\rangle^{\rm B}+xz|2\rangle^{ \rm A}|1\rangle^{\rm B}\\ &=x^{2}|1\rangle^{\rm A}|1\rangle^{\rm B}+z^{2}\frac{ad}{bc}|2 \rangle^{\rm A}|2\rangle^{\rm B}\\ &+xz\frac{ad}{bc}|1\rangle^{\rm A}|2\rangle^{\rm B}+xz|2\rangle^{ \rm A}|1\rangle^{\rm B}\\ &=\left(x|1\rangle^{\rm A}+z|2\rangle^{\rm A}\right)\left(x|1 \rangle^{\rm B}+z\frac{ad}{bc}|2\rangle^{\rm B}\right).\end{split}\] (25) We can thus see that \(S\) includes an infinite number of product states, all of the form \(\gamma\left(x|\alpha_{1}\rangle^{\rm A}+z|\alpha_{2}\rangle^{\rm A}\right) \left(x|\beta_{1}\rangle^{\rm B}+z\frac{ad}{bc}|\beta_{2}\rangle^{\rm B}\right)\) for some \(x,z\in\mathbb{C}\) and a normalisation factor \(\gamma\). 
Accordingly, we can define an invertible linear map \(L:A_{\rm sep}\to B_{\rm sep}\) mapping: \[L|\alpha_{1}\rangle^{\rm A} = |\beta_{1}\rangle^{\rm B}, \tag{26}\] \[L|\alpha_{2}\rangle^{\rm A} = \frac{ad}{bc}|\beta_{2}\rangle^{\rm B}, \tag{27}\] and we conclude that the set of separable states in \(S\) is exactly: \[\left\{|\Psi\rangle^{\rm AB}\triangleq\gamma|\psi\rangle^{\rm A}\left(L|\psi\rangle\right)^{\rm B}\ :\ |\psi\rangle^{\rm A}\in A_{\rm sep}\,\ \gamma\in\mathbb{C}\,\ \left\||\Psi\rangle^{\rm AB}\right\|=1\right\}. \tag{28}\] Thus, the separable pure and mixed states over \(S\) are exactly all mixtures (convex combinations) of the states in Eq. (28). Formally: \[\mathcal{D}^{S}_{\rm sep}={\rm conv}\left\{|\Psi\rangle\!\langle\Psi|^{\rm AB}\ :\ |\Psi\rangle^{\rm AB}=\gamma|\psi\rangle^{\rm A}\left(L|\psi\rangle\right)^{\rm B}\,\ |\psi\rangle^{\rm A}\in A_{\rm sep}\,\ \gamma\in\mathbb{C}\,\ \left\||\Psi\rangle^{\rm AB}\right\|=1\right\}, \tag{29}\] as needed.

## V Geometric descriptions and figures

In Theorem 2 we gave an algebraic description of the possible sets \(\mathcal{D}^{S}_{\text{sep}}\) of separable pure and mixed states over \(S\). Now we can also give a geometric description of the sets \(\mathcal{D}^{S}_{\text{sep}}\) as closed convex subsets of the qutrit space \(\mathcal{D}(S)\), which is shown schematically in Fig. 1. The possible sets \(\mathcal{D}^{S}_{\text{sep}}\) are denoted according to the relevant case in Theorem 2--that is, \(\mathcal{D}^{S_{(1,3)}}_{\text{sep}}\), \(\mathcal{D}^{S_{(3,3)}}_{\text{sep}}\), \(\mathcal{D}^{S_{(2,3)-i}}_{\text{sep}}\), etc. In this section we develop three-dimensional stereographic illustrations of the possible classes from Theorem 2; because the sets \(\mathcal{D}^{S}_{\text{sep}}\) typically have dimension higher than \(3\), we use lower-dimensional sections and projections to present these schematic illustrations. Fig. 1 is courtesy of Tadeusz Dorozinski [26], and the remaining figures (Figs. 2-5) were produced using _CalcPlot3D_ [27]. Recall that \(\mathcal{D}(S)\) is the set of all density matrices acting on the Hilbert space \(S\), and \(\operatorname{conv}(A)\) is the convex hull of \(A\)--that is, the set of all convex combinations (probabilistic mixtures) of the states in the set \(A\subseteq\mathcal{D}(S)\). We now present all possible classes from Theorem 2 and their geometric descriptions:

**(1,3)/(3,1):** The set of separable states \(\mathcal{D}^{S_{(1,3)}}_{\text{sep}}=\mathcal{D}^{S_{(3,1)}}_{\text{sep}}=\mathcal{D}(S)\) is the entire set of pure and mixed states over the qutrit Hilbert space \(S\equiv\mathcal{H}_{3}\). Unlike a qubit Hilbert space (whose corresponding Bloch sphere, which represents all pure and mixed states over the qubit, is three-dimensional and can thus be geometrically represented), the set of all pure and mixed states over a qutrit space is \(8\)-dimensional and cannot be easily represented in three dimensions.
Formally: \[\mathcal{D}^{S_{(1,3)}}_{\text{sep}}=\mathcal{D}^{S_{(3,1)}}_{\text{sep}}=\mathcal{D}(S)\equiv\mathcal{D}(\mathcal{H}_{3}), \tag{30}\] the dimension of the set of separable states is indeed \(\dim\mathcal{D}^{S_{(1,3)}}_{\text{sep}}=\dim\mathcal{D}^{S_{(3,1)}}_{\text{sep}}=8\), and the set \(S\) is the product of a fixed state with a local qutrit space: in case (1,3), \(S=\left\{|\alpha_{1}\rangle^{\text{A}}\right\}\otimes\operatorname{span}\left\{|\beta_{1}\rangle^{\text{B}},|\beta_{2}\rangle^{\text{B}},|\beta_{3}\rangle^{\text{B}}\right\}\), and in case (3,1), \(S=\operatorname{span}\left\{|\alpha_{1}\rangle^{\text{A}},|\alpha_{2}\rangle^{\text{A}},|\alpha_{3}\rangle^{\text{A}}\right\}\otimes\left\{|\beta_{1}\rangle^{\text{B}}\right\}\), so all states over \(S\) are product states.

Figure 1: **The qutrit Hilbert space and cases (1,3) and (3,1):** \(\mathcal{D}(S)\equiv\mathcal{D}(\mathcal{H}_{3})\) is an \(8\)-dimensional convex set whose extreme points form a \(4\)-dimensional smooth manifold, and whose other faces all have dimension \(3\). It is a highly symmetric body with an \(SU(3)\) symmetry group that acts transitively on the extreme points and also on the \(3\)-dimensional faces. It is also its own polar. There is no three-dimensional convex body presenting all these features, but other researchers have tried to construct visualisations [28; 29; 30]. Here we propose to consider the so-called sphericyl (left) and elliptic sphericyl (right) as decent schematic illustrations: they are convex hulls of continuous curves that are each a union of four circular arcs. These pictures are courtesy of Tadeusz Dorozinski [26].

A decent schematic illustration of the qutrit space can be the convex hull of the seam on a tennis ball. For concreteness, we choose the sphericyl and elliptic sphericyl as described in Fig. 1.

**(3,3) and (2,3)/(3,2)-i:** The set of separable states \(\mathcal{D}_{\text{sep}}^{S_{(3,3)}}=\mathcal{D}_{\text{sep}}^{S_{(2,3)-i}}=\mathcal{D}_{\text{sep}}^{S_{(3,2)-i}}\) simply consists of all mixtures of the three product states \(|\alpha_{1}\rangle^{\text{A}}|\beta_{1}\rangle^{\text{B}},|\alpha_{2}\rangle^{\text{A}}|\beta_{2}\rangle^{\text{B}},|\alpha_{3}\rangle^{\text{A}}|\beta_{3}\rangle^{\text{B}}\). Formally: \[\begin{split}\mathcal{D}_{\text{sep}}^{S_{(3,3)}}&=\mathcal{D}_{\text{sep}}^{S_{(2,3)-i}}=\mathcal{D}_{\text{sep}}^{S_{(3,2)-i}}\\ &=\mathrm{conv}\left\{|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\text{A}}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\text{B}}\;,\;|\alpha_{2}\rangle\!\langle\alpha_{2}|^{\text{A}}\otimes|\beta_{2}\rangle\!\langle\beta_{2}|^{\text{B}}\;,\;|\alpha_{3}\rangle\!\langle\alpha_{3}|^{\text{A}}\otimes|\beta_{3}\rangle\!\langle\beta_{3}|^{\text{B}}\right\},\end{split} \tag{31}\] and the dimension is \(\dim\mathcal{D}_{\text{sep}}^{S_{(3,3)}}=\dim\mathcal{D}_{\text{sep}}^{S_{(2,3)-i}}=\dim\mathcal{D}_{\text{sep}}^{S_{(3,2)-i}}=2\). Geometrically, this set simply forms a triangle with these three product states as corners (which is exactly a classical two-dimensional probability simplex of ternary probability distributions), as shown in Fig. 2.

**(2,3)/(3,2)-ii:** In case (2,3)-ii, the set of separable states \(\mathcal{D}_{\text{sep}}^{S_{(2,3)-ii}}\) consists of all mixtures of the state \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\text{A}}\otimes|\beta_{1}\rangle\!
\langle\beta_{1}|^{\text{B}}\) with any pure or mixed state over the local qubit space \(\left\{|\alpha_{2}\rangle^{\text{A}}\right\}\otimes\mathrm{span}\left\{|\beta _{2}\rangle^{\text{B}},|\beta_{3}\rangle^{\text{B}}\right\}\); and similarly, in case (3,2)-ii, the set of separable states \(\mathcal{D}_{\text{sep}}^{S_{(3,2)-ii}}\) consists of all mixtures of the state \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\text{A}}\otimes|\beta_{1}\rangle\! \langle\beta_{1}|^{\text{B}}\) with any pure or mixed state over the local qubit space \(\mathrm{span}\left\{|\alpha_{2}\rangle^{\text{A}},|\alpha_{3}\rangle^{\text{A} }\right\}\otimes\left\{|\beta_{2}\rangle^{\text{B}}\right\}\). Formally: \[\mathcal{D}_{\text{sep}}^{S_{(2,3)-ii}} = \mathrm{conv}\left[\left\{|\alpha_{1}\rangle\!\langle\alpha_{1}|^ {\text{A}}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\text{B}}\right\}\cup \mathcal{D}\left(\left\{|\alpha_{2}\rangle^{\text{A}}\right\}\otimes\mathrm{ span}\left\{|\beta_{2}\rangle^{\text{B}},|\beta_{3}\rangle^{\text{B}}\right\}\right)\right], \tag{32}\] \[\mathcal{D}_{\text{sep}}^{S_{(3,2)-ii}} = \mathrm{conv}\left[\left\{|\alpha_{1}\rangle\!\langle\alpha_{1}|^ {\text{A}}\otimes|\beta_{1}\rangle\!\langle\beta_{1}|^{\text{B}}\right\}\cup \mathcal{D}\left(\mathrm{span}\left\{|\alpha_{2}\rangle^{\text{A}},|\alpha_{3 }\rangle^{\text{A}}\right\}\otimes\left\{|\beta_{2}\rangle^{\text{B}}\right\} \right)\right], \tag{33}\] and the dimension is \(\dim\mathcal{D}_{\text{sep}}^{S_{(2,3)-ii}}=\dim\mathcal{D}_{\text{sep}}^{S_{( 3,2)-ii}}=4\). Thus, in both cases, the set of separable states forms a spherical cone connecting a single point (representing the state \(|\alpha_{1}\rangle\!\langle\alpha_{1}|^{\text{A}}\otimes|\beta_{1}\rangle\! \langle\beta_{1}|^{\text{B}}\)) with a Bloch sphere (representing the local qubit space) in an overall four-dimensional space. The extreme points of this set form two connected components: the single point and the two-dimensional surface of the Bloch sphere. This body has a two-dimensional family of faces of dimension 1 and a single face of dimension three. To illustrate this in a three-dimensional figure, we present the convex hull (that is, the set of all convex combinations) of the equatorial qubits of the Bloch sphere (a full disc) with the single point, and we get a three-dimensional cone over a circular disc, shown in Fig. 3. **(2,2)-i:** The set of separable states \(\mathcal{D}_{\text{sep}}^{S_{(2,2)-i}}\) consists of all mixtures of any pure or mixed state over the local qubit space \(S_{1}\triangleq\left\{|\alpha_{2}\rangle^{\text{A}}\right\}\otimes\text{span} \left\{|\beta_{1}\rangle^{\text{B}},|\beta_{2}\rangle^{\text{B}}\right\}\) with any pure or mixed state over the local qubit space \(S_{2}\triangleq\text{span}\left\{|\alpha_{1}\rangle^{\text{A}},|\alpha_{2} \rangle^{\text{A}}\right\}\otimes\left\{|\beta_{1}\rangle^{\text{B}}\right\}\); note that the intersection of the two spaces is \(S_{1}\cap S_{2}=\left\{|\alpha_{2}\rangle^{\text{A}}\otimes|\beta_{1}\rangle^{ \text{B}}\right\}\). 
Formally: \[\begin{split}\mathcal{D}_{\text{sep}}^{S_{(2,2)-i}}&=\text{conv}\left[\mathcal{D}(S_{1})\cup\mathcal{D}(S_{2})\right]\\ &=\text{conv}\left[\mathcal{D}\left(\left\{|\alpha_{2}\rangle^{\text{A}}\right\}\otimes\text{span}\left\{|\beta_{1}\rangle^{\text{B}},|\beta_{2}\rangle^{\text{B}}\right\}\right)\cup\mathcal{D}\left(\text{span}\left\{|\alpha_{1}\rangle^{\text{A}},|\alpha_{2}\rangle^{\text{A}}\right\}\otimes\left\{|\beta_{1}\rangle^{\text{B}}\right\}\right)\right]\\ &=\text{conv}\left[\mathcal{D}\left(\left\{|\alpha_{2}\rangle^{\text{A}}\right\}\otimes B_{\text{sep}}\right)\cup\mathcal{D}\left(A_{\text{sep}}\otimes\left\{|\beta_{1}\rangle^{\text{B}}\right\}\right)\right],\end{split} \tag{34}\] and the dimension is \(\dim\mathcal{D}_{\text{sep}}^{S_{(2,2)-i}}=6\). Thus, the set of separable states is the convex hull (that is, the set of all convex combinations) of two Bloch spheres intersecting at a single point, where the single point represents \(|\alpha_{2}\rangle^{\text{A}}\otimes|\beta_{1}\rangle^{\text{B}}\). The two Bloch spheres are transversal to each other: they are two three-dimensional spaces intersecting at a single point. We can thus repeat the same idea as in case (2,3)/(3,2)-ii and take the equatorial qubits of both Bloch spheres, which are two full discs transversal to each other; however, the convex hull of the two discs is still \(4\)-dimensional. We thus project this figure into three dimensions, obtaining the convex hull of two circles intersecting at one point, where their respective two-dimensional planes intersect at right angles, as illustrated in Fig. 4.

Figure 4: **Case (2,2)-i:** a three-dimensional section of the actual 6-dimensional body: two Bloch spheres intersecting at one point.

**(2,2)-ii:** The set of separable states \(\mathcal{D}_{\text{sep}}^{S_{(2,2)-ii}}\) consists of all mixtures of states of the form \(\left|\psi\right\rangle^{\text{A}}\left(L|\psi\rangle\right)^{\text{B}}\), up to normalisation, for a specific (known) invertible linear map \(L\). Formally: \[\mathcal{D}_{\text{sep}}^{S}=\text{conv}\left\{|\Psi\rangle\!\langle\Psi|^{\text{AB}}\ :\ |\Psi\rangle^{\text{AB}}=\gamma|\psi\rangle^{\text{A}}\left(L|\psi\rangle\right)^{\text{B}}\,\ |\psi\rangle^{\text{A}}\in\text{span}\left\{|\alpha_{1}\rangle^{\text{A}},|\alpha_{2}\rangle^{\text{A}}\right\}\,\ \gamma\in\mathbb{C}\,\ \left\||\Psi\rangle^{\text{AB}}\right\|=1\right\}, \tag{35}\] where, given that \(|\alpha_{3}\rangle^{\text{A}}=a|\alpha_{1}\rangle^{\text{A}}+b|\alpha_{2}\rangle^{\text{A}}\) and \(|\beta_{3}\rangle^{\text{B}}=c|\beta_{1}\rangle^{\text{B}}+d|\beta_{2}\rangle^{\text{B}}\) for the \(a,b,c,d\in\mathbb{C}\setminus\{0\}\) that were defined in the proof of Theorem 2 (case (2,2)) in Subsection IV.4, the map \(L\) is defined as: \[L|\alpha_{1}\rangle^{\rm A} = |\beta_{1}\rangle^{\rm B}, \tag{36}\] \[L|\alpha_{2}\rangle^{\rm A} = \frac{ad}{bc}|\beta_{2}\rangle^{\rm B}, \tag{37}\] \[L|\alpha_{3}\rangle^{\rm A} = a|\beta_{1}\rangle^{\rm B}+\frac{ad}{c}|\beta_{2}\rangle^{\rm B}=\frac{a}{c}|\beta_{3}\rangle^{\rm B}. \tag{38}\]

Figure 5: **Case (2,2)-ii:** a three-dimensional section of the full eight-dimensional convex set; it is the convex hull of \(\left(x=\cos\varphi\;,\;y=\sin\varphi\;,\;z=\cos^{2}\varphi-\sin^{2}\varphi\right)\) for \(0\leq\varphi\leq 2\pi\), which is the intersection of two isomorphic degenerate paraboloids and represents the possible product states \(|\psi\rangle^{\rm A}|\psi\rangle^{\rm B}\) (up to local invertible maps). Except for its one-dimensional manifold of extreme points, its only other faces are two disjoint one-dimensional families of lines.
The dimension is \(\dim\mathcal{D}_{\text{sep}}^{S_{(2,2)-ii}}=8\). Thus, the set of separable states is an \(8\)-dimensional convex subset of the full qutrit space \(\mathcal{D}(S)\equiv\mathcal{D}(\mathcal{H}_{3})\), and its extreme points form the two-dimensional smooth manifold of pure product states \(|\psi\rangle\!\langle\psi|^{\text{A}}\otimes\big(L|\psi\rangle\!\langle\psi|L^{\dagger}\big)^{\text{B}}\). We notice that this is the only case (except the trivial case (1,3)/(3,1)) where the set of separable states has the same dimension as the full space \(\mathcal{D}(S)\); in all other cases the set of separable states has a lower dimension and is thus of measure zero. An interesting special case is \(ad=bc\), because it satisfies \((L|\psi\rangle)^{\text{B}}\equiv|\psi\rangle^{\text{A}}\) if we apply the local invertible operations mapping \(|\alpha_{1}\rangle^{\text{A}}\mapsto|1\rangle^{\text{A}}\), \(|\alpha_{2}\rangle^{\text{A}}\mapsto|2\rangle^{\text{A}}\), \(|\beta_{1}\rangle^{\text{B}}\mapsto|1\rangle^{\text{B}}\), and \(|\beta_{2}\rangle^{\text{B}}\mapsto|2\rangle^{\text{B}}\) and take the equivalences \(|1\rangle^{\text{A}}\equiv|1\rangle^{\text{B}}\) and \(|2\rangle^{\text{A}}\equiv|2\rangle^{\text{B}}\). In this case, the set of product states in \(S\) is equivalent to the set of all states of form \(|\psi\rangle^{\text{A}}|\psi\rangle^{\text{B}}\), which is the set of all product states in the _symmetric subspace_ (also known as the "triplet subspace") of the two-qubit Hilbert space \(\mathcal{H}_{2}\otimes\mathcal{H}_{2}\) (that is, the three-dimensional subspace spanned by \(\left\{|1\rangle^{\text{A}}|1\rangle^{\text{B}}\,,\,\,|2\rangle^{\text{A}}|2\rangle^{\text{B}}\,,\,\,|\Psi^{+}\rangle^{\text{AB}}\triangleq\frac{|1\rangle^{\text{A}}|2\rangle^{\text{B}}+|2\rangle^{\text{A}}|1\rangle^{\text{B}}}{\sqrt{2}}\right\}\)). Moreover, we can see that given this special case, the general case (2,2)-ii can simply be obtained by applying a local invertible operation \(X\otimes Y\) (not affecting entanglement) to the special case. To obtain the figure, we look at the special case \(ad=bc\); we apply the local invertible operations mapping \(|\alpha_{1}\rangle^{\text{A}}\mapsto|1\rangle^{\text{A}}\), \(|\alpha_{2}\rangle^{\text{A}}\mapsto|2\rangle^{\text{A}}\), \(|\beta_{1}\rangle^{\text{B}}\mapsto|1\rangle^{\text{B}}\), and \(|\beta_{2}\rangle^{\text{B}}\mapsto|2\rangle^{\text{B}}\), so that the product states are equivalent to \(|\psi\rangle^{\text{A}}|\psi\rangle^{\text{B}}\) where \(|\psi\rangle^{\text{A}}\in\text{span}\left\{|1\rangle^{\text{A}},|2\rangle^{\text{A}}\right\}\); and we limit our view to the section of equatorial qubit states--that is, qubits of the form: \[|\psi\rangle^{\text{A}}=\frac{|1\rangle^{\text{A}}+e^{i\varphi}|2\rangle^{\text{A}}}{\sqrt{2}} \tag{39}\] for \(0\leq\varphi\leq 2\pi\).
All product states are thus of the form: \[\begin{split}|\psi\rangle^{\text{A}}|\psi\rangle^{\text{B}}&=\frac{|1\rangle^{\text{A}}+e^{i\varphi}|2\rangle^{\text{A}}}{\sqrt{2}}\otimes\frac{|1\rangle^{\text{B}}+e^{i\varphi}|2\rangle^{\text{B}}}{\sqrt{2}}\\ &=\frac{|1\rangle^{\text{A}}|1\rangle^{\text{B}}+e^{i\varphi}|1\rangle^{\text{A}}|2\rangle^{\text{B}}+e^{i\varphi}|2\rangle^{\text{A}}|1\rangle^{\text{B}}+e^{2i\varphi}|2\rangle^{\text{A}}|2\rangle^{\text{B}}}{2}\\ &=\frac{|11\rangle^{\text{AB}}}{2}+\frac{e^{i\varphi}|\Psi^{+}\rangle^{\text{AB}}}{\sqrt{2}}+\frac{e^{2i\varphi}|22\rangle^{\text{AB}}}{2},\end{split} \tag{40}\] (where we denote \(|\Psi^{+}\rangle^{\text{AB}}\triangleq\frac{|1\rangle^{\text{A}}|2\rangle^{\text{B}}+|2\rangle^{\text{A}}|1\rangle^{\text{B}}}{\sqrt{2}}\)), and the resulting density matrices are: \[\begin{split}|\psi\rangle\!\langle\psi|^{\text{A}}\otimes|\psi\rangle\!\langle\psi|^{\text{B}}&=\frac{|11\rangle\!\langle 11|^{\text{AB}}}{4}+\frac{|\Psi^{+}\rangle\!\langle\Psi^{+}|^{\text{AB}}}{2}+\frac{|22\rangle\!\langle 22|^{\text{AB}}}{4}\\ &+e^{i\varphi}\frac{|\Psi^{+}\rangle\!\langle 11|^{\text{AB}}}{2\sqrt{2}}+e^{-i\varphi}\frac{|11\rangle\!\langle\Psi^{+}|^{\text{AB}}}{2\sqrt{2}}+e^{-i\varphi}\frac{|\Psi^{+}\rangle\!\langle 22|^{\text{AB}}}{2\sqrt{2}}+e^{i\varphi}\frac{|22\rangle\!\langle\Psi^{+}|^{\text{AB}}}{2\sqrt{2}}\\ &+e^{2i\varphi}\frac{|22\rangle\!\langle 11|^{\text{AB}}}{4}+e^{-2i\varphi}\frac{|11\rangle\!\langle 22|^{\text{AB}}}{4}.\end{split} \tag{41}\] Thus, the resulting density matrices are of the following form: \[|\psi\rangle\!\langle\psi|^{\text{A}}\otimes|\psi\rangle\!\langle\psi|^{\text{B}}=C+e^{i\varphi}A+e^{-i\varphi}A^{\dagger}+e^{2i\varphi}B+e^{-2i\varphi}B^{\dagger}=C+e^{i\varphi}A+\big(e^{i\varphi}A\big)^{\dagger}+e^{2i\varphi}B+\big(e^{2i\varphi}B\big)^{\dagger}, \tag{42}\] where \(A,A^{\dagger},B,B^{\dagger},C\) are linearly independent matrices. The resulting set of separable states is the convex hull of an infinite number of extreme points (representing these product states) forming a smooth curve; these extreme points are affine-linearly parametrised by the pair of complex numbers \(\big(e^{i\varphi},e^{2i\varphi}\big)\) for \(0\leq\varphi\leq 2\pi\). Since the resulting body is still four-dimensional, we project it into three dimensions by retaining only the real part of the second complex number \(e^{2i\varphi}\), getting the parametrisation \(\left(e^{i\varphi}\;,\;\;\cos(2\varphi)=\cos^{2}\varphi-\sin^{2}\varphi\right)\), or equivalently \(\left(\cos\varphi\;,\;\;\sin\varphi\;,\;\;\cos^{2}\varphi-\sin^{2}\varphi\right)\). Using real number parameters \(x,y,z\in\mathbb{R}\), the obtained convex body is \[\mathrm{conv}\left\{(x,y,z)\;:\;x^{2}+y^{2}=1\;,\;z=x^{2}-y^{2}\right\}=\left\{(x,y,z)\;:\;x^{2}+y^{2}\leq 1\;,\;2x^{2}-1\leq z\leq 1-2y^{2}\right\}, \tag{43}\] which is shown in Fig. 5.

## VI Generalisation to multipartite systems

We can now generalise our result regarding bipartite systems (Theorem 2) to all multipartite systems. In [1], the generalised classification of _fully separable_ states in two-dimensional subspaces of multipartite systems (Theorem 11 in [1]) turned out to be completely identical to the classification of separable states in two-dimensional subspaces of bipartite systems (Theorem 9 in [1], stated in our paper as Theorem 1).
However, in our paper, we will be able to find several differences between the bipartite case and the multipartite case: most notably, some bipartite classes disappear for multipartite systems. Similarly to Section II, we begin our analysis with a rank-3 quantum mixed state \(\rho\triangleq\rho^{\mathrm{A}_{1}\cdots\mathrm{A}_{k}}\) on a \(k\)-partite system \(\mathcal{H}_{\mathrm{A}_{1}}\otimes\mathcal{H}_{\mathrm{A}_{2}}\otimes\cdots \otimes\mathcal{H}_{\mathrm{A}_{k}}\) (where \(k\geq 3\)). As before, the support of \(\rho\) is the three-dimensional subspace \(S=\mathrm{supp}\,\rho\) which is spanned by the eigenstates of \(\rho\), and we define: \[\begin{split} S_{\text{sep}}&\triangleq\mathrm{ span}\left\{|\psi\rangle\in S\;:\;|\psi\rangle\text{ is a product state}\right\}\\ &=\mathrm{span}\left\{|\psi\rangle\in S\;:\;\exists|\phi^{(1)} \rangle\in\mathcal{H}_{\mathrm{A}_{1}}\;,\;\;\ldots\;,\;|\phi^{(k)}\rangle\in \mathcal{H}_{\mathrm{A}_{k}}\;:\;|\psi\rangle=|\phi^{(1)}\rangle\otimes \cdots\otimes|\phi^{(k)}\rangle\right\}.\end{split} \tag{44}\] Since the cases \(\dim S_{\text{sep}}\in\{0,1,2\}\) were analysed in [1], we focus again on the case \(\dim S_{\text{sep}}=3\), where \(S_{\text{sep}}=S\) and it is spanned by three linearly independent product states: \[S=\mathrm{span}\left\{|\alpha_{1}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{1} ^{(k)}\rangle\;,\;|\alpha_{2}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{2}^{(k )}\rangle\;,\;|\alpha_{3}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{3}^{(k)} \rangle\right\}. \tag{45}\] We can now define the set of fully separable states we are going to analyse: \[\mathcal{D}_{\text{sep}}^{S}\triangleq\{\rho\in\mathcal{D}(S)\;:\;\rho\text{ is fully separable}\}, \tag{46}\] where \(\mathcal{D}(S)\) is (as before) the set of all density matrices over the Hilbert space \(S\), and a state \(\rho\) is said to be "fully separable" if it is a mixture of product states \(|\phi^{(1)}\rangle\otimes\cdots\otimes|\phi^{(k)}\rangle\). Similarly to Theorem 2, our analysis will depend on the dimensions of the \(k\) local subspaces \(A_{\text{sep}}^{(j)}\) (for \(1\leq j\leq k\)), which are defined as follows: \[A_{\text{sep}}^{(j)}\triangleq\mathrm{span}\left\{|\alpha_{1}^{(j)}\rangle,| \alpha_{2}^{(j)}\rangle,|\alpha_{3}^{(j)}\rangle\right\}, \tag{47}\] and each of them can be \(1\)-, \(2\)-, or \(3\)-dimensional. We would now like to point out that whenever the dimension of \(A_{\text{sep}}^{(j)}\) is \(1\), it does not contribute any genuine entanglement and can be removed from our analysis. Indeed, if \(\dim A_{\text{sep}}^{(j)}=1\), then necessarily \(|\alpha_{1}^{(j)}\rangle=|\alpha_{2}^{(j)}\rangle=|\alpha_{3}^{(j)}\rangle\), in which case all states in \(S\) are actually tensor products of a \((k-1)\)-partite state with a constant state \(|\alpha_{1}^{(j)}\rangle\). This state can be trivially absorbed into another system without affecting the analysis of entanglement and separability. Thus, the state is _genuinely_ only \((k-1)\)-partite, not \(k\)-partite. (A similar observation regarding the bipartite case of Theorem 2 is that systems corresponding to its cases (1,3) and (3,1) are actually not bipartite at all, but "monopartite".) We can therefore assume, without loss of generality, that \(\dim A^{(j)}_{\text{sep}}\in\{2,3\}\) for all \(1\leq j\leq k\), thus focusing on _genuinely_ multipartite systems with \(k\geq 3\) subsystems. 
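This reduction is straightforward to carry out in practice. The sketch below is our own illustration (`factors` and `local_dims` are hypothetical names, not from the paper); it computes \(\dim A_{\text{sep}}^{(j)}\) for every subsystem from the local factors of the three spanning product states and flags the trivial subsystems that can be dropped.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4  # number of subsystems, each a qubit in this example

# factors[i][j] is the local vector of spanning state i on subsystem j.
factors = [[rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(k)]
           for _ in range(3)]
# Force subsystem 1 to carry the same factor in all three states.
factors[1][1] = factors[2][1] = factors[0][1]

def local_dims(factors):
    """dim A_sep^(j): rank of the 3 x d_j matrix of local factors."""
    k = len(factors[0])
    return [int(np.linalg.matrix_rank(np.array([f[j] for f in factors]),
                                      tol=1e-10)) for j in range(k)]

dims = local_dims(factors)
print(dims)  # [2, 1, 2, 2]: subsystem 1 is constant
# Subsystems of dimension 1 can be dropped, leaving a genuinely
# multipartite problem on the remaining subsystems.
print([j for j, d in enumerate(dims) if d >= 2])  # [0, 2, 3]
```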
For the analysis, we shall also use the following Lemma, which (informally) says that three quantum states that are either _linearly independent_ or _linearly dependent but in general position_ become linearly independent when we take their tensor product with another (non-trivial) Hilbert space: **Lemma 3**: _In a bipartite Hilbert space \(\mathcal{H}_{\mathrm{A}}\otimes\mathcal{H}_{\mathrm{B}}\), if three quantum states \(|\alpha_{1}\rangle^{\mathrm{A}},|\alpha_{2}\rangle^{\mathrm{A}},|\alpha_{3} \rangle^{\mathrm{A}}\in\mathcal{H}_{\mathrm{A}}\) are either linearly independent or linearly dependent but in general position (that is, any two of them are linearly independent), then for any three quantum states \(|\beta_{1}\rangle^{\mathrm{B}},|\beta_{2}\rangle^{\mathrm{B}},|\beta_{3} \rangle^{\mathrm{B}}\in\mathcal{H}_{\mathrm{B}}\) that are not all identical (up to normalisation and global phase), the three states \(|\alpha_{1}\rangle^{\mathrm{A}}|\beta_{1}\rangle^{\mathrm{B}}\,,\;|\alpha_{2 }\rangle^{\mathrm{A}}|\beta_{2}\rangle^{\mathrm{B}}\,,\;|\alpha_{3}\rangle^{ \mathrm{A}}|\beta_{3}\rangle^{\mathrm{B}}\in\mathcal{H}_{\mathrm{A}}\otimes \mathcal{H}_{\mathrm{B}}\) are linearly independent._ **Proof** Assume by contradiction that the three states \(|\alpha_{1}\rangle^{\mathrm{A}}|\beta_{1}\rangle^{\mathrm{B}}\,,\;|\alpha_{2 }\rangle^{\mathrm{A}}|\beta_{2}\rangle^{\mathrm{B}}\,,\;|\alpha_{3}\rangle^{ \mathrm{A}}|\beta_{3}\rangle^{\mathrm{B}}\) are linearly dependent. They must be in general position, because any two of them cannot be equal (otherwise, two of \(|\alpha_{1}\rangle^{\mathrm{A}},|\alpha_{2}\rangle^{\mathrm{A}},|\alpha_{3} \rangle^{\mathrm{A}}\) would be equal). Therefore, without loss of generality, \(|\alpha_{3}\rangle^{\mathrm{A}}|\beta_{3}\rangle^{\mathrm{B}}\in\mathrm{span} \left\{|\alpha_{1}\rangle^{\mathrm{A}}|\beta_{1}\rangle^{\mathrm{B}}\,,\;| \alpha_{2}\rangle^{\mathrm{A}}|\beta_{2}\rangle^{\mathrm{B}}\right\}\), so there are \(a,b\in\mathbb{C}\setminus\{0\}\) such that: \[|\alpha_{3}\rangle^{\mathrm{A}}|\beta_{3}\rangle^{\mathrm{B}}=a|\alpha_{1} \rangle^{\mathrm{A}}|\beta_{1}\rangle^{\mathrm{B}}+b|\alpha_{2}\rangle^{ \mathrm{A}}|\beta_{2}\rangle^{\mathrm{B}}. \tag{48}\] This equation means, in particular, that \(|\alpha_{1}\rangle^{\mathrm{A}},|\alpha_{2}\rangle^{\mathrm{A}},|\alpha_{3} \rangle^{\mathrm{A}}\in\mathcal{H}_{\mathrm{A}}\) cannot be linearly independent, so they must be linearly dependent and, by assumption, in general position; this observation implies that \(|\alpha_{3}\rangle^{\mathrm{A}}\in\mathrm{span}\left\{|\alpha_{1}\rangle^{ \mathrm{A}},|\alpha_{2}\rangle^{\mathrm{A}}\right\}\), so there are \(x,y\in\mathbb{C}\setminus\{0\}\) such that: \[|\alpha_{3}\rangle^{\mathrm{A}}=x|\alpha_{1}\rangle^{\mathrm{A}}+y|\alpha_{2} \rangle^{\mathrm{A}}, \tag{49}\] and substituting this in Eq. (48), we find: \[\left(x|\alpha_{1}\rangle^{\mathrm{A}}+y|\alpha_{2}\rangle^{\mathrm{A}}\right) |\beta_{3}\rangle^{\mathrm{B}}=a|\alpha_{1}\rangle^{\mathrm{A}}|\beta_{1} \rangle^{\mathrm{B}}+b|\alpha_{2}\rangle^{\mathrm{A}}|\beta_{2}\rangle^{ \mathrm{B}}, \tag{50}\] or equivalently, \[|\alpha_{1}\rangle^{\mathrm{A}}\left(x|\beta_{3}\rangle^{\mathrm{B}}-a|\beta_ {1}\rangle^{\mathrm{B}}\right)=|\alpha_{2}\rangle^{\mathrm{A}}\left(b|\beta_{2 }\rangle^{\mathrm{B}}-y|\beta_{3}\rangle^{\mathrm{B}}\right). 
\tag{51}\] Because we know that \(|\alpha_{1}\rangle^{\mathrm{A}}\neq|\alpha_{2}\rangle^{\mathrm{A}}\), this necessarily implies \(x|\beta_{3}\rangle^{\mathrm{B}}-a|\beta_{1}\rangle^{\mathrm{B}}=0\) and \(b|\beta_{2}\rangle^{\mathrm{B}}-y|\beta_{3}\rangle^{\mathrm{B}}=0\). Because we know that \(a,b,x,y\neq 0\), it holds that: \[\frac{a}{x}|\beta_{1}\rangle^{\mathrm{B}}=|\beta_{3}\rangle^{\mathrm{B}}= \frac{b}{y}|\beta_{2}\rangle^{\mathrm{B}}, \tag{52}\] which contradicts the assumption that the three quantum states \(|\beta_{1}\rangle^{\mathrm{B}},|\beta_{2}\rangle^{\mathrm{B}},|\beta_{3} \rangle^{\mathrm{B}}\in\mathcal{H}_{\mathrm{B}}\) are not all identical (up to normalisation and global phase). Thus, the three bipartite states mentioned above, \(|\alpha_{1}\rangle^{\mathrm{A}}|\beta_{1}\rangle^{\mathrm{B}}\,,\;|\alpha_{2} \rangle^{\mathrm{A}}|\beta_{2}\rangle^{\mathrm{B}}\,,\;|\alpha_{3}\rangle^{ \mathrm{A}}|\beta_{3}\rangle^{\mathrm{B}}\), must be linearly independent, as we wanted. \(\sqcap\)\(\sqcup\) For the full analysis, we use the following notations: the set of integer numbers from \(1\) to \(k\) is denoted by \([k]\) (formally, \([k]\triangleq\{1,2,\ldots,k\}\)); and for any subset \(L\subseteq[k]\) and \(1\leq i\leq 3\), the product of all states \(|\alpha_{i}^{(j)}\rangle\) for all \(j\in L\) is denoted \(|\alpha_{i}^{(L)}\rangle\) (formally, \(|\alpha_{i}^{(L)}\rangle\triangleq\bigotimes_{j\in L}|\alpha_{i}^{(j)}\rangle\)). Similarly, for all \(L\subseteq[k]\) we define the generalised local set \(A_{\text{sep}}^{(L)}\) as the span of all states \(|\alpha_{i}^{(L)}\rangle\): \[A_{\text{sep}}^{(L)}\triangleq\mathrm{span}\left\{|\alpha_{1}^{(L)}\rangle,| \alpha_{2}^{(L)}\rangle,|\alpha_{3}^{(L)}\rangle\right\}. \tag{53}\] We can now present the Theorem which classifies all three-dimensional subspaces of genuinely multipartite systems (\(k\)-partite systems with \(k\geq 3\), where none of the local subsystems is one-dimensional) into two general cases, corresponding to Fig. 2 (a triangle) and Fig. 3 (a spherical cone): **Theorem 4**: _Given a multipartite Hilbert space \(\mathcal{H}_{\mathrm{A}_{1}}\otimes\mathcal{H}_{\mathrm{A}_{2}}\otimes\cdots \otimes\mathcal{H}_{\mathrm{A}_{k}}\) and any three-dimensional subspace \(S\subseteq\mathcal{H}_{\mathrm{A}_{1}}\otimes\mathcal{H}_{\mathrm{A}_{2}} \otimes\cdots\otimes\mathcal{H}_{\mathrm{A}_{k}}\), let \(S_{\text{sep}}\) be the subspace spanned by all product states in \(S\) (that is, \(S_{\text{sep}}\triangleq\mathrm{span}\{|\psi\rangle\in S\;:\;|\psi\rangle\) is a product state\(\}\)). We would like to characterise the set of all fully separable states (both pure and mixed) over \(S\), a set we denote by \(\mathcal{D}^{S}_{\text{sep}}\) and formally define as follows:_ \[\mathcal{D}^{S}_{\text{sep}}\triangleq\{\rho\in\mathcal{D}(S)\;:\;\rho\;\text {is fully separable}\}. \tag{54}\] _If \(\dim S_{\text{sep}}\leq 2\), then \(\mathcal{D}^{S}_{\text{sep}}\) belongs to one of the five classes described in Theorem 11 of [1] (or, equivalently, to one of the five multipartite generalisations of the classes described in Theorem 1)._ _If \(\dim S_{\text{sep}}=3\) (so \(S_{\text{sep}}=S\)), then using the notations of Eqs. 
(45) and (47):_ \[\begin{split} S&=\mathrm{span}\left\{|\alpha_{1}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\;,\;|\alpha_{2}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{2}^{(k)}\rangle\;,\;|\alpha_{3}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{3}^{(k)}\rangle\right\},\\ A^{(j)}_{\text{sep}}&\triangleq\mathrm{span}\left\{|\alpha_{1}^{(j)}\rangle,|\alpha_{2}^{(j)}\rangle,|\alpha_{3}^{(j)}\rangle\right\},\end{split} \tag{55}\] _and assuming, without loss of generality (as explained above), that \(\dim A^{(j)}_{\text{sep}}\in\{2,3\}\) for all \(1\leq j\leq k\), one of the two following cases occurs:_

(i) _Triangle: The fully separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}^{(1)}\rangle\!\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\!\langle\alpha_{1}^{(k)}|\)_,_ \(|\alpha_{2}^{(1)}\rangle\!\langle\alpha_{2}^{(1)}|\otimes\cdots\otimes|\alpha_{2}^{(k)}\rangle\!\langle\alpha_{2}^{(k)}|\)_, and_ \(|\alpha_{3}^{(1)}\rangle\!\langle\alpha_{3}^{(1)}|\otimes\cdots\otimes|\alpha_{3}^{(k)}\rangle\!\langle\alpha_{3}^{(k)}|\)_; all the other pure and mixed states over_ \(S\) _are entangled. Formally:_ \[\begin{split}\mathcal{D}^{S}_{\text{sep}}&=\mathrm{conv}\left\{|\alpha_{1}^{(1)}\rangle\!\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\!\langle\alpha_{1}^{(k)}|\;,\;|\alpha_{2}^{(1)}\rangle\!\langle\alpha_{2}^{(1)}|\otimes\cdots\otimes|\alpha_{2}^{(k)}\rangle\!\langle\alpha_{2}^{(k)}|\;,\\ &|\alpha_{3}^{(1)}\rangle\!\langle\alpha_{3}^{(1)}|\otimes\cdots\otimes|\alpha_{3}^{(k)}\rangle\!\langle\alpha_{3}^{(k)}|\right\}.\end{split}\] (56) _This case is a generalisation of the bipartite cases (3,3), (2,3)-i, and (3,2)-i in Theorem_ 2_, and it is illustrated in Fig._ 2_._

(ii) _Spherical cone: There exists a subsystem_ \(\mathcal{H}_{\mathrm{A}_{\ell}}\) _(namely, there exists an index_ \(1\leq\ell\leq k\)_) such that out of the three states_ \(|\alpha_{1}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\)_, two states are equal (without loss of generality, we can assume_ \(|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\)_), and the fully separable pure and mixed states over_ \(S\) _are exactly all mixtures (convex combinations) of_ \(|\alpha_{1}^{(1)}\rangle\!\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\!\langle\alpha_{1}^{(k)}|\) _with any pure or mixed state over the space_ \(\mathrm{span}\left\{|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle\right\}\)_; all the other pure and mixed states over_ \(S\) _are entangled._
Formally, in case_ \(|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\)_:_ \[\begin{split}\mathcal{D}^{S}_{\text{sep}}&=\mathrm{conv}\left[\left\{|\alpha_{1}^{(1)}\rangle\!\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\!\langle\alpha_{1}^{(k)}|\right\}\right.\\ &\qquad\qquad\left.\cup\;\mathcal{D}\left(\mathrm{span}\left\{|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle\right\}\right)\right],\end{split}\] (57) _and symmetric results are obtained in case_ \(|\alpha_{1}^{([k]\setminus\{\ell\})}\rangle=|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle\) _or_ \(|\alpha_{1}^{([k]\setminus\{\ell\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\)_. This case is a generalisation of the bipartite case (3,2)-ii (and its symmetric case (2,3)-ii) in Theorem_ 2_, and it is illustrated in Fig._ 3_._ **Remark**: We point out that there is _no_ generalisation of either of the bipartite cases (2,2)-i and (2,2)-ii of Theorem 2. Thus, both cases (2,2)-i and (2,2)-ii (illustrated in Figs. 4 and 5, respectively) exist in bipartite systems but do not exist in (genuinely) multipartite systems. **Proof** If \(\dim S_{\text{sep}}\leq 2\), then applying Theorem 11 of [1] to a two-dimensional subspace of \(S\) which includes \(S_{\text{sep}}\) proves that the set \(\mathcal{D}_{\text{sep}}^{S_{\text{sep}}}\) of pure and mixed separable states over \(S_{\text{sep}}\) belongs to one of the five classes described in [1] for multipartite systems. All the other pure and mixed states over \(S\) (that are not states over \(S_{\text{sep}}\)) must be entangled (that is, they cannot be fully separable). Therefore, \(\mathcal{D}_{\text{sep}}^{S}=\mathcal{D}_{\text{sep}}^{S_{\text{sep}}}\) indeed belongs to one of the five classes described in [1] for two-dimensional subspaces of multipartite systems, which are direct generalisations of the classes described in [1] (or in our Theorem 1) for two-dimensional subspaces of bipartite systems. If \(\dim S_{\text{sep}}=3\), then \(S_{\text{sep}}=S\), in which case the proof typically applies Theorem 2 to \(S\) under specific bipartite _partitions_ (or _cuts_) and analyses entanglement and separability with respect to these partitions. For our \(k\)-partite system \(\mathcal{H}_{\text{A}_{1}}\otimes\mathcal{H}_{\text{A}_{2}}\otimes\cdots\otimes\mathcal{H}_{\text{A}_{k}}\) and the set of indexes \([k]\triangleq\{1,2,\ldots,k\}\), a bipartite partition is a pair \((X,Y)\) of two disjoint sets of indexes \(X,Y\subseteq[k]\) such that \(X\cup Y=[k]\) and \(X\cap Y=\emptyset\). A state is said to be _entangled (or separable) with respect to the partition \((X,Y)\)_ if merging most subsystems and leaving only the two large quantum subsystems \(\mathcal{H}_{X}\otimes\mathcal{H}_{Y}\triangleq\left(\bigotimes_{j\in X}\mathcal{H}_{\text{A}_{j}}\right)\otimes\left(\bigotimes_{j\in Y}\mathcal{H}_{\text{A}_{j}}\right)\) (reordering the quantum subsystems) results in an entangled (or separable) bipartite state. In particular, if a state is fully separable, it is separable with respect to all possible bipartite partitions; therefore, to prove that a multipartite state is entangled (i.e., not fully separable), it is sufficient to find a bipartite partition such that the state is entangled with respect to it.
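To make the partition criterion concrete before the case analysis, the following self-contained NumPy sketch (our illustration, not part of the original proof; the example states are hypothetical) tests separability of a pure state with respect to a bipartite partition \((X,Y)\) via its Schmidt rank, i.e., the rank of the coefficient matrix obtained by reshaping the state vector to \(\dim\mathcal{H}_{X}\times\dim\mathcal{H}_{Y}\); the rank equals \(1\) exactly for states that are product across the partition. The same rank computation also checks the linear independence of product states as used in Lemma 3.

```python
import numpy as np

def schmidt_rank(psi, dim_x, dim_y, tol=1e-10):
    """Rank of the coefficient matrix of a pure state on H_X (x) H_Y.

    A pure state is separable w.r.t. the partition (X, Y) iff the rank is 1.
    """
    mat = np.asarray(psi, dtype=complex).reshape(dim_x, dim_y)
    return np.linalg.matrix_rank(mat, tol=tol)

# Hypothetical 3-qubit example with the partition X = {1}, Y = {2, 3}.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Product state |0>|0>|0>: Schmidt rank 1, i.e. separable w.r.t. the partition.
product = np.kron(zero, np.kron(zero, zero))
print(schmidt_rank(product, 2, 4))   # -> 1

# GHZ-like state (|000> + |111>)/sqrt(2): rank 2, so it is entangled w.r.t.
# (X, Y) and therefore certainly not fully separable.
ghz = (np.kron(zero, np.kron(zero, zero))
       + np.kron(one, np.kron(one, one))) / np.sqrt(2)
print(schmidt_rank(ghz, 2, 4))       # -> 2

# Linear independence of three product states (cf. Lemma 3): stack them as
# rows and check the matrix rank.
plus = (zero + one) / np.sqrt(2)
v1 = np.kron(zero, np.kron(zero, zero))
v2 = np.kron(one, np.kron(one, zero))
v3 = np.kron(plus, np.kron(plus, zero))
print(np.linalg.matrix_rank(np.vstack([v1, v2, v3])))  # -> 3 (independent)
```

A rank larger than 1 for any single partition already certifies that the state is not fully separable, which is exactly how the partitions are used in the case analysis below.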
We can now analyse the entanglement class of \(S\) in three different situations, given the dimensions of the \(k\) local subspaces \(A_{\text{sep}}^{(1)}\), \(A_{\text{sep}}^{(2)},\ldots,A_{\text{sep}}^{(k)}\) (all dimensions are in \(\{2,3\}\)) defined in Eq. (47): _Situation 1: There are at least two local subspaces of dimension 3:_ Formally, in this case, there exist \(\ell,m\in[k]\) (\(\ell\neq m\)) such that \(\dim A_{\text{sep}}^{(\ell)}=\dim A_{\text{sep}}^{(m)}=3\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell\},\,[k]\setminus\{\ell\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\ell)}=3\) and \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}\geq\dim A_{\text{sep}}^{(m)}=3\) (according to Lemma 3), so the relevant case is (3,3). Therefore, applying Theorem 2 (case (3,3)) to \(S\) under this partition implies that we are in case (i) (a triangle): the fully separable pure and mixed states over \(S\) are exactly all mixtures (convex combinations) of \(|\alpha_{1}^{(1)}\rangle\!\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\!\langle\alpha_{1}^{(k)}|\), \(|\alpha_{2}^{(1)}\rangle\!\langle\alpha_{2}^{(1)}|\otimes\cdots\otimes|\alpha_{2}^{(k)}\rangle\!\langle\alpha_{2}^{(k)}|\), and \(|\alpha_{3}^{(1)}\rangle\!\langle\alpha_{3}^{(1)}|\otimes\cdots\otimes|\alpha_{3}^{(k)}\rangle\!\langle\alpha_{3}^{(k)}|\), and all the other pure and mixed states over \(S\) are entangled. _Situation 2: There is only one local subspace of dimension 3:_ Formally, in this case, there exists \(\ell\in[k]\) such that \(\dim A_{\text{sep}}^{(\ell)}=3\), and for any other \(m\in[k]\setminus\{\ell\}\) it holds that \(\dim A_{\text{sep}}^{(m)}=2\). We divide into two cases: 1. There is a subsystem \(m\in[k]\setminus\{\ell\}\) such that the three states \(|\alpha_{1}^{(m)}\rangle,|\alpha_{2}^{(m)}\rangle,|\alpha_{3}^{(m)}\rangle\) are linearly dependent but in general position. In this case, if we apply Lemma 3 we get that the three states \(|\alpha_{1}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\) are linearly independent, so \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell\},\,[k]\setminus\{\ell\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\ell)}=\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\), so the relevant case is (3,3). Therefore, applying Theorem 2 (case (3,3)) to \(S\) under this partition implies that we are in case (i) (a triangle), identically to Situation 1 above. 2. For all subsystems \(m\in[k]\setminus\{\ell\}\), two of the three states \(|\alpha_{1}^{(m)}\rangle,|\alpha_{2}^{(m)}\rangle,|\alpha_{3}^{(m)}\rangle\) are equal to one another. Formally, for each \(m\in[k]\setminus\{\ell\}\) there exists a two-element subset \(I_{m}\triangleq\{i_{m},j_{m}\}\subset\{1,2,3\}\) such that \(|\alpha_{i_{m}}^{(m)}\rangle=|\alpha_{j_{m}}^{(m)}\rangle\). We divide into two subcases: 1. If there exist two subsets \(I_{m}\) and \(I_{n}\) which are different from one another (formally, if there exist \(m,n\in[k]\setminus\{\ell\}\) such that \(I_{m}\neq I_{n}\)), then the three states \(|\alpha_{1}^{(m)}\rangle|\alpha_{1}^{(n)}\rangle\), \(|\alpha_{2}^{(m)}\rangle|\alpha_{2}^{(n)}\rangle\), \(|\alpha_{3}^{(m)}\rangle|\alpha_{3}^{(n)}\rangle\) must be linearly independent.
Therefore, according to Lemma 3, the three states \(|\alpha_{1}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\) are, too, linearly independent, so \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell\},\,[k]\setminus\{\ell\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\ell)}=\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\), so the relevant case is (3,3). Therefore, applying Theorem 2 (case (3,3)) to \(S\) under this partition implies that we are in case (i) (a triangle), identically to the two cases above. 2. If all subsets \(I_{n}\) are identical to each other for all \(n\in[k]\setminus\{\ell\}\), then we can assume, without loss of generality, \(I_{n}=\{2,3\}\). Then, for all \(n\in[k]\setminus\{\ell\}\) it holds that \(|\alpha_{2}^{(n)}\rangle=|\alpha_{3}^{(n)}\rangle\), which in particular implies \(|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\). Therefore, \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=2\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell\},\,[k]\setminus\{\ell\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\ell)}=3\) and \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=2\) (where \(|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\) are _not_ in general position), so the relevant case is (3,2)-ii. Therefore, applying Theorem 2 (case (3,2)-ii) to \(S\) under this partition implies that we are in case (ii) (a spherical cone): the fully separable pure and mixed states over \(S\) are exactly all mixtures (convex combinations) of \(|\alpha_{1}^{(1)}\rangle\!\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\!\langle\alpha_{1}^{(k)}|\) with any pure or mixed state over the space \(\operatorname{span}\left\{|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle\right\}\), and all the other pure and mixed states over \(S\) are entangled. _Situation 3: All local subspaces are of dimension 2:_ Formally, in this case, for all \(j\in[k]\) it holds that \(\dim A_{\text{sep}}^{(j)}=2\). We divide into three cases: 1. There are _two_ subsystems \(\ell,m\in[k]\) such that the three states \(|\alpha_{1}^{(\ell)}\rangle,|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\) are linearly dependent but in general position _and_ the three states \(|\alpha_{1}^{(m)}\rangle,|\alpha_{2}^{(m)}\rangle,|\alpha_{3}^{(m)}\rangle\) are linearly dependent but in general position. In this case, if we apply Lemma 3 we get that the three states \(|\alpha_{1}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\) are linearly independent, so \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell\},\,[k]\setminus\{\ell\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\ell)}=2\) (where the spanning states are in general position) and \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\), so the relevant case is (2,3)-i. Therefore, applying Theorem 2 (case (2,3)-i) to \(S\) under this partition implies that we are in case (i) (a triangle), identically to three of the cases above.
2. There is exactly _one_ subsystem \(\ell\in[k]\) such that the three states \(|\alpha_{1}^{(\ell)}\rangle,|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\) are linearly dependent but in general position. Thus, for all subsystems \(m\in[k]\setminus\{\ell\}\), two of the three states \(|\alpha_{1}^{(m)}\rangle,|\alpha_{2}^{(m)}\rangle,|\alpha_{3}^{(m)}\rangle\) are equal to one another; formally, for each \(m\in[k]\setminus\{\ell\}\) there exists a two-element subset \(I_{m}\triangleq\{i_{m},j_{m}\}\subset\{1,2,3\}\) such that \(|\alpha_{i_{m}}^{(m)}\rangle=|\alpha_{j_{m}}^{(m)}\rangle\). We divide into two subcases: 1. If there exist two subsets \(I_{m}\) and \(I_{n}\) which are different from one another (formally, if there exist \(m,n\in[k]\setminus\{\ell\}\) such that \(I_{m}\neq I_{n}\)), then the three states \(|\alpha_{1}^{(m)}\rangle|\alpha_{1}^{(n)}\rangle\), \(|\alpha_{2}^{(m)}\rangle|\alpha_{2}^{(n)}\rangle\), \(|\alpha_{3}^{(m)}\rangle|\alpha_{3}^{(n)}\rangle\) must be linearly independent. Therefore, according to Lemma 3, the three states \(|\alpha_{1}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle,|\alpha_{3}^{([k]\setminus\{\ell\})}\rangle\) are, too, linearly independent, so \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell\},\,[k]\setminus\{\ell\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\ell)}=2\) (where the spanning states are in general position) and \(\dim A_{\text{sep}}^{([k]\setminus\{\ell\})}=3\), so the relevant case is (2,3)-i. Therefore, applying Theorem 2 (case (2,3)-i) to \(S\) under this partition implies that we are in case (i) (a triangle), identically to four of the cases above. 2. If all subsets \(I_{n}\) are identical to each other for all \(n\in[k]\setminus\{\ell\}\), then we can assume, without loss of generality, \(I_{n}=\{2,3\}\). Let us choose an arbitrary \(m\in[k]\setminus\{\ell\}\) (so \(|\alpha_{2}^{(m)}\rangle=|\alpha_{3}^{(m)}\rangle\)); then, because \(|\alpha_{1}^{(\ell)}\rangle,|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\) are in general position, the three states \(|\alpha_{1}^{(\ell)}\rangle|\alpha_{1}^{(m)}\rangle\), \(|\alpha_{2}^{(\ell)}\rangle|\alpha_{2}^{(m)}\rangle\), \(|\alpha_{3}^{(\ell)}\rangle|\alpha_{3}^{(m)}\rangle\) are linearly independent according to Lemma 3. This implies that \(\dim A_{\text{sep}}^{(\{\ell,m\})}=3\). On the other hand, for all \(n\in[k]\setminus\{\ell,m\}\) it holds that \(|\alpha_{2}^{(n)}\rangle=|\alpha_{3}^{(n)}\rangle\), which in particular implies \(|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell,m\})}\rangle\). Therefore, \(\dim A_{\text{sep}}^{([k]\setminus\{\ell,m\})}=2\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell,m\},\,[k]\setminus\{\ell,m\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\{\ell,m\})}=3\) and \(\dim A_{\text{sep}}^{([k]\setminus\{\ell,m\})}=2\) (where \(|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell,m\})}\rangle\) are _not_ in general position), so the relevant case is (3,2)-ii.
Therefore, applying Theorem 2 (case (3,2)-ii) to \(S\) under this partition implies that we are in case (ii) (a spherical cone): the fully separable pure and mixed states over \(S\) are exactly all mixtures (convex combinations) of \(|\alpha_{1}^{(1)}\rangle\!\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\!\langle\alpha_{1}^{(k)}|\) with any pure or mixed state over the space \(\operatorname{span}\left\{|\alpha_{2}^{(\{\ell,m\})}\rangle,|\alpha_{3}^{(\{\ell,m\})}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle\right\}=\operatorname{span}\left\{|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle\right\}\) (this equality holds because \(|\alpha_{2}^{(m)}\rangle=|\alpha_{3}^{(m)}\rangle\), and it proves that all those states are indeed fully separable), and all the other pure and mixed states over \(S\) are entangled. 3. There are _no_ subsystems \(\ell\in[k]\) such that the three states \(|\alpha_{1}^{(\ell)}\rangle,|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\) are in general position. Thus, for all subsystems \(\ell\in[k]\), two of the three states \(|\alpha_{1}^{(\ell)}\rangle,|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\) are equal to one another; formally, for each \(\ell\in[k]\) there exists a two-element subset \(I_{\ell}\triangleq\{i_{\ell},j_{\ell}\}\subset\{1,2,3\}\) such that \(|\alpha_{i_{\ell}}^{(\ell)}\rangle=|\alpha_{j_{\ell}}^{(\ell)}\rangle\). The sets \(I_{\ell}\) cannot be all identical to one another (for all \(\ell\in[k]\)), because then we would get \(\dim S_{\text{sep}}=2\). Therefore, there must exist \(\ell,m\in[k]\) such that \(I_{\ell}\neq I_{m}\). This means that the three states \(|\alpha_{1}^{(\ell)}\rangle|\alpha_{1}^{(m)}\rangle\), \(|\alpha_{2}^{(\ell)}\rangle|\alpha_{2}^{(m)}\rangle\), \(|\alpha_{3}^{(\ell)}\rangle|\alpha_{3}^{(m)}\rangle\) are linearly independent, which means that \(\dim A_{\text{sep}}^{(\{\ell,m\})}=3\). We now divide into two subcases: 1. If there exist two subsystems \(n,o\in[k]\setminus\{\ell,m\}\) such that \(I_{n}\neq I_{o}\), then the three states \(|\alpha_{1}^{(n)}\rangle|\alpha_{1}^{(o)}\rangle\), \(|\alpha_{2}^{(n)}\rangle|\alpha_{2}^{(o)}\rangle\), \(|\alpha_{3}^{(n)}\rangle|\alpha_{3}^{(o)}\rangle\) are linearly independent, which means that \(\dim A_{\text{sep}}^{([k]\setminus\{\ell,m\})}=3\). We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell,m\},\,[k]\setminus\{\ell,m\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\{\ell,m\})}=\dim A_{\text{sep}}^{([k]\setminus\{\ell,m\})}=3\), so the relevant case is (3,3). Therefore, applying Theorem 2 (case (3,3)) to \(S\) under this partition implies that we are in case (i) (a triangle), identically to five of the cases above. 2. If all subsets \(I_{n}\) are identical to each other for all \(n\in[k]\setminus\{\ell,m\}\), then we can assume, without loss of generality, \(I_{n}=\{2,3\}\). Then, for all \(n\in[k]\setminus\{\ell,m\}\) it holds that \(|\alpha_{2}^{(n)}\rangle=|\alpha_{3}^{(n)}\rangle\), which in particular implies \(|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell,m\})}\rangle\). Therefore, \(\dim A_{\text{sep}}^{([k]\setminus\{\ell,m\})}=2\).
We can therefore analyse the entanglement with respect to the bipartite partition \((\{\ell,m\},\,[k]\setminus\{\ell,m\})\): the dimensions of the relevant local subspaces are \(\dim A_{\text{sep}}^{(\{\ell,m\})}=3\) and \(\dim A_{\text{sep}}^{([k]\setminus\{\ell,m\})}=2\) (where \(|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle=|\alpha_{3}^{([k]\setminus\{\ell,m\})}\rangle\) are _not_ in general position), so the relevant case is (3,2)-ii. Therefore, applying Theorem 2 (case (3,2)-ii) to \(S\) under this partition, we find that the separable pure and mixed states over \(S\) _with respect to the bipartite partition_ \((\{\ell,m\},\,[k]\setminus\{\ell,m\})\) (notice that these states are not necessarily fully separable!) are exactly all mixtures (convex combinations) of \(|\alpha_{1}^{(1)}\rangle\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\langle\alpha_{1}^{(k)}|\) with any pure or mixed state over the space \(\text{span}\left\{|\alpha_{2}^{(\{\ell,m\})}\rangle,|\alpha_{3}^{(\{\ell,m\})}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle\right\}=\text{span}\left\{|\alpha_{2}^{(\ell)}\rangle|\alpha_{2}^{(m)}\rangle\;,\;|\alpha_{3}^{(\ell)}\rangle|\alpha_{3}^{(m)}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle\right\}\), and all the other pure and mixed states over \(S\) are entangled with respect to this partition (and, thus, certainly not fully separable). Now we divide into two subsubcases: 1. If \(I_{n}\) is equal to either \(I_{\ell}\) or \(I_{m}\) (for all \(n\in[k]\setminus\{\ell,m\}\)), then without loss of generality, we can assume \(I_{n}=I_{m}\). Therefore, \(I_{m}=I_{n}=\{2,3\}\), which means that \(|\alpha_{2}^{(m)}\rangle=|\alpha_{3}^{(m)}\rangle\). The partition-specific result we reached above thus means that we are in case (ii) (a spherical cone): the fully separable pure and mixed states over \(S\) are exactly all mixtures (convex combinations) of \(|\alpha_{1}^{(1)}\rangle\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\langle\alpha_{1}^{(k)}|\) with any pure or mixed state over the space \(\text{span}\left\{|\alpha_{2}^{(\{\ell,m\})}\rangle,|\alpha_{3}^{(\{\ell,m\})}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle\right\}=\text{span}\left\{|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell\})}\rangle\right\}\) (this equality holds because \(|\alpha_{2}^{(m)}\rangle=|\alpha_{3}^{(m)}\rangle\), and it proves that all those states are indeed fully separable), and all the other pure and mixed states over \(S\) are entangled. 2. If \(I_{\ell}\neq I_{n}\neq I_{m}\) (for all \(n\in[k]\setminus\{\ell,m\}\)), then in particular \(I_{\ell}\neq\{2,3\}\) and \(I_{m}\neq\{2,3\}\), which means that both \(|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\) and \(|\alpha_{2}^{(m)}\rangle,|\alpha_{3}^{(m)}\rangle\) are linearly independent.
Therefore, we can reconsider the meaning of the partition-specific result we reached above, which says that the only states over \(S\) that _may_ be fully separable are the mixtures of \(|\alpha_{1}^{(1)}\rangle\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\langle\alpha_{1}^{(k)}|\) with any pure or mixed state over the local qubit space \(\text{span}\left\{|\alpha_{2}^{(\ell)}\rangle|\alpha_{2}^{(m)}\rangle\;,\;|\alpha_{3}^{(\ell)}\rangle|\alpha_{3}^{(m)}\rangle\right\}\otimes\left\{|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle\right\}\), and all the other pure and mixed states over \(S\) are entangled. The only states in that local qubit space that _could_ be entangled (that is, not fully separable) are the non-trivial superpositions \(\left(a|\alpha_{2}^{(\ell)}\rangle|\alpha_{2}^{(m)}\rangle+b|\alpha_{3}^{(\ell)}\rangle|\alpha_{3}^{(m)}\rangle\right)\otimes|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle\) with \(a,b\neq 0\); and we can indeed see that these states _are_ entangled, since both \(|\alpha_{2}^{(\ell)}\rangle,|\alpha_{3}^{(\ell)}\rangle\) and \(|\alpha_{2}^{(m)}\rangle,|\alpha_{3}^{(m)}\rangle\) are linearly independent, so a local invertible map (which does not affect entanglement and separability) can easily map them to orthonormal states: \(|\alpha_{2}^{(\ell)}\rangle\mapsto|1^{(\ell)}\rangle\), \(|\alpha_{3}^{(\ell)}\rangle\mapsto|2^{(\ell)}\rangle\), \(|\alpha_{2}^{(m)}\rangle\mapsto|1^{(m)}\rangle\), and \(|\alpha_{3}^{(m)}\rangle\mapsto|2^{(m)}\rangle\). The resulting state is thus \(\left(a|1^{(\ell)}\rangle|1^{(m)}\rangle+b|2^{(\ell)}\rangle|2^{(m)}\rangle\right)\otimes|\alpha_{2}^{([k]\setminus\{\ell,m\})}\rangle\), which is separable if and only if \(a=0\) or \(b=0\). Hence, the non-trivial superpositions in the local qubit space are indeed entangled. The remaining states are only the three states \(|\alpha_{1}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\), \(|\alpha_{2}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{2}^{(k)}\rangle\), and \(|\alpha_{3}^{(1)}\rangle\otimes\cdots\otimes|\alpha_{3}^{(k)}\rangle\), which are trivially product states. Thus, our analysis implies that we are in case (i) (a triangle): the fully separable pure and mixed states over \(S\) are exactly all mixtures (convex combinations) of \(|\alpha_{1}^{(1)}\rangle\langle\alpha_{1}^{(1)}|\otimes\cdots\otimes|\alpha_{1}^{(k)}\rangle\langle\alpha_{1}^{(k)}|\), \(|\alpha_{2}^{(1)}\rangle\langle\alpha_{2}^{(1)}|\otimes\cdots\otimes|\alpha_{2}^{(k)}\rangle\langle\alpha_{2}^{(k)}|\), and \(|\alpha_{3}^{(1)}\rangle\langle\alpha_{3}^{(1)}|\otimes\cdots\otimes|\alpha_{3}^{(k)}\rangle\langle\alpha_{3}^{(k)}|\), and all the other pure and mixed states over \(S\) are entangled. ## VII Discussion We have presented a full classification of entanglement and separability in three-dimensional Hilbert subspaces of bipartite systems, as well as a generalisation to multipartite systems. This is a general classification that is independent of entanglement measures and applies to all three-dimensional subspaces of any possible Hilbert space.
While our results are not aimed at deciding whether a single state is entangled or separable (because that problem is easily solvable for rank-\(3\) states using the partial transpose criterion (PPT) [21], and in some cases also using alternative criteria [31]), they are aimed at understanding the possible structures of entanglement and separability in Hilbert spaces and their internal relations. Our results generalise the findings of Boyer and two of the present authors in [1], which applied to two-dimensional subspaces. In addition to the expected classes in three dimensions (mostly generalisations or combinations of the classes from [1]), we suggested an easy classification for each class using the dimensions of local subspaces (along with information on whether the spanning states of the subspaces are in general position, if applicable), and we also found a few interesting classes that do not exist in two dimensions. Our most interesting novel class, which is not similar to anything found in two dimensions [1], is named (2,2)-ii in Theorem 2 and is described in Fig. 5: it does not simply consist of a finite number of product states or a complete Bloch sphere created by degeneration of local eigenstates, like other classes, but it includes all states of the form \(|\psi\rangle^{\mathrm{A}}|\psi\rangle^{\mathrm{B}}\) (up to local invertible operations)--that is, all product states in the symmetric subspace (the triplet subspace), as described in Section V. The relation between the foundational phenomenon of the symmetric subspace, appearing naturally here, and the possible classes of entanglement in three dimensions and higher dimensions, is an intriguing topic for future research. Other possible directions for future research include extending our results to four-dimensional subspaces and higher-dimensional subspaces (where we may encounter more physical phenomena not appearing here, including bound entanglement) and finding more ways to utilise the geometric figures we suggested in Section V for representing the sets of separable states and their geometric features. One observation we can make based on Theorems 1 and 2 is that for \(2\)- and \(3\)-dimensional subspaces, the number of (pure) product states is either at most the dimension of the subspace (\(2\) or \(3\), respectively) or infinite (cardinality of the continuum). It would be particularly interesting to investigate the maximum finite number of pure product states in general subspaces of arbitrary given dimension. We have also generalised the results to multipartite systems. We point out that, unlike the two-dimensional subspaces discussed in [1] where the bipartite and multipartite cases are essentially equivalent, in three-dimensional subspaces we found substantive differences between bipartite and multipartite systems: most importantly, some of the classes for the set of separable states existing in bipartite systems completely disappear for (genuinely) multipartite systems--including the interesting symmetric class (2,2)-ii mentioned above. It could be interesting to find out why these classes disappear in the multipartite case, whether their disappearance hints at a foundational phenomenon of quantum entanglement, and whether similar differences between bipartite and multipartite systems appear in higher dimensions, too. ###### Acknowledgements.
The authors thank Itai Arad for useful initial discussions on generalising the results of [1], Ajit Iqbal Singh for discussions on possible generalisations of the present work, and Kentaro Moto for insightful suggestions regarding identity and representations. The work of RL and TM was partly supported by the Israeli MOD Research and Technology Unit and the Technion's Helen Diller Quantum Center (Haifa, Israel). The work of RL was also partly supported by the Government of Spain (FIS2020-TRANQI and Severo Ochoa CEX2019-000910-S), Fundacio Cellex, Fundacio Mir-Puig, Generalitat de Catalunya (CERCA program), and the European Union NextGenerationEU. AW is supported by the European Commission QuantERA grant ExTRAQT (Spanish MICINN project PCI2022-132965), by the Spanish MICINN (project PID2019-107609GB-I00) with the support of FEDER funds, the Generalitat de Catalunya (project 2017-SGR-1127), by the Spanish MICINN with funding from European Union NextGenerationEU (PRTR-C17.11) and the Generalitat de Catalunya, by the Alexander von Humboldt Foundation, and by the Institute for Advanced Study of the Technical University Munich. ## Conflict of interest statement The authors confirm they have no conflict of interest. ## Data availability statement This manuscript has no associated data.
2309.13726
A compact 20-pass thin-disk multipass amplifier stable against thermal lensing effects and delivering 330 mJ pulses with $\bf{M^2 < 1.17}$
We report on an Yb:YAG thin-disk multipass amplifier delivering 50 ns long pulses at a central wavelength of 1030 nm with an energy of 330 mJ at a repetition rate of 100 Hz. The beam quality factor at the maximum energy was measured to be $\text{M}^2 = 1.17$. The small signal gain is 20, and the gain at 330 mJ was measured to be 6.9. The 20-pass amplifier is designed as a concatenation of stable resonator segments in which the beam is alternately Fourier transformed and relay-imaged back to the disk by a 4f-imaging stage. The Fourier transform propagation makes the output beam robust against spherical phase front distortions, while the 4f-stage is used to compensate the thermal lens of the thin-disk and to reduce the footprint of the amplifier.
Manuel Zeyen, Lukas Affolter, Marwan Abdou Ahmed, Thomas Graf, Oguzhan Kara, Klaus Kirch, Miroslaw Marszalek, François Nez, Ahmed Ouf, Randolf Pohl, Siddharth Rajamohanan, Pauline Yzombard, Karsten Schuhmann, Aldo Antognini
2023-09-24T19:11:45Z
http://arxiv.org/abs/2309.13726v1
A compact 20-pass thin-disk multipass amplifier stable against thermal lensing effects and delivering 330 mJ pulses with M\({}^{2}<1.17\) ###### Abstract We report on an Yb:YAG thin-disk multipass amplifier delivering 50 ns long pulses at a central wavelength of 1030 nm with an energy of 330 mJ at a repetition rate of 100 Hz. The beam quality factor at the maximum energy was measured to be M\({}^{2}=1.17\). The small signal gain is 20, and the gain at 330 mJ was measured to be 6.9. The 20-pass amplifier is designed as a concatenation of stable resonator segments in which the beam is alternately Fourier transformed and relay-imaged back to the disk by a 4f-imaging stage. The Fourier transform propagation makes the output beam robust against spherical phase front distortions, while the 4f-stage is used to compensate the thermal lens of the thin-disk and to reduce the footprint of the amplifier. ## 1 Introduction Multipass amplifiers are versatile tools for the generation of high-power and high-energy laser beams. Systems delivering kWs of average power have been demonstrated at all pulse lengths from continuous wave down to femtoseconds. In particular, thin-disk multipass amplifiers have shown their great potential in combining high power/energy output with diffraction limited beam quality and high flexibility regarding the input pulse length [1, 2, 3]. Most thin-disk multipass amplifier designs fall into three main categories: relay-imaging (4f-imaging) [4, 5, 6, 7, 8, 9], quasi-collimated propagation (lens guide) [10, 11, 12, 13, 14] or resonator-based optical Fourier transform (FT) propagations [15, 16, 17]. In 4f-based designs, the beam is imaged from active medium (AM) to active medium so that its size remains constant for each pass (if soft aperture effects are neglected). These systems are relatively compact and input beams with a wide variety of beam parameters can be propagated. However, phase front distortions introduced by thermal effects in the AM add up with each pass over the AM, limiting the achievable beam quality and stability of such an amplifier. In the quasi-collimated design the beam propagates almost freely in the amplifier. Only weakly focusing elements are used to compensate the beam divergence and ensure the same size of the beam on each pass over the AM. This type of multipass amplifier is relatively convenient to set up and has the additional advantage that the propagating beam stays large along the whole propagation path, which makes the quasi-collimated scheme an excellent choice for the highest power applications with excellent beam quality [18]. While this design is less sensitive to phase front distortions at the AM than the 4f-design, a third design based on FT propagations provides the highest stability against both spherical and aspherical phase front distortions [19, 20]. The FT design can be understood as an unfolded FT resonator of a corresponding regenerative amplifier. The amplifier thus inherits the stability of the FT resonator w.r.t. thermal lensing so that FT-based multipass amplifiers have the potential to deliver near diffraction limited output beams at kW (Joule) level output power (energy). However, a resonator-based multipass amplifier typically requires careful setup and mode matching of the input beam to the resonator mode, and a rather large mirror array.
Here we report on the development of a multipass amplifier that combines the advantages of the 4f-based imaging approach with the stability of the FT-resonator design. This hybrid amplifier is part of the laser system of the HyperMu experiment, in which the CREMA (Charge Radius Experiments with Muonic Atoms) collaboration aims at measuring the ground-state hyperfine splitting of muonic hydrogen [21, 22]. In this system a single-frequency Yb:YAG thin-disk laser (TDL), composed of an oscillator followed by the amplifier presented here, is used for pumping mid-IR optical parametric oscillators (OPOs) and optical parametric amplifiers (OPAs). The input laser pulses to the amplifier are provided by an injection seeded TDL oscillator which is frequency stabilized via the Pound-Drever-Hall method [23, 24]. The TDL oscillator delivers single-frequency pulses of up to 50 mJ with a pulse length between 50-110 ns and nearly diffraction limited beam quality. Ultimately, the goal is to develop an amplifier which delivers nanosecond pulses with 500 mJ of pulse energy and an excellent beam quality factor together with a very high long-term energy/power and pointing stability. In the following, Section 2 introduces the multipass amplifier concept, and Section 3 further elaborates on the optical layout. In Section 4, we describe the experimental realization of our hybrid multipass amplifier and discuss the advantages of including a 4f-stage in the propagation scheme. Finally, the experimental results are presented in Section 5. ## 2 Amplifier concept The amplifier architecture includes a double 4f-imaging stage and an optical Fourier transform (FT) segment, so that the propagation follows the sequence \[4f-\text{ disk }-\text{FT}-\text{ disk }-4f-4f-\text{ disk }-\text{FT}-\text{ disk }\cdots, \tag{1}\] where the disk forms the beginning and the end of the FT propagation. Neglecting thermal lensing and aperture effects in the disk, the ABCD matrix of the basic double-pass segment \(4f-\text{disk}-\text{FT}-\text{disk}-4f\) is conveniently given by \[M=M_{4f}M_{\text{FT}}M_{4f}=\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}\begin{pmatrix}0&F\\ -\frac{1}{F}&0\end{pmatrix}\begin{pmatrix}-1&0\\ 0&-1\end{pmatrix}=\begin{pmatrix}0&F\\ -\frac{1}{F}&0\end{pmatrix}, \tag{2}\] where \(F\) is the so-called Fourier parameter [19, 20]. Only a collimated input beam with a radius (1/e\({}^{2}\)) of \[w_{0}=\sqrt{\frac{\lambda F}{\pi}} \tag{3}\] is reproduced by the ABCD matrix (2) of this double-pass segment. In general, this makes resonator-based multipass amplifiers less flexible compared to 4f-based designs. However, the advantage of the FT-resonator based approach becomes apparent when the disk's focal power (i.e., inverse focal length) changes by a small amount \(\Delta V\), e.g. due to thermal lensing or variations of its curvature in the manufacturing process. Indeed, the FT propagation stabilizes the output beam parameters (size and phase front curvature) against \(\Delta V\). After a single round-trip through a double-pass segment, the sensitivity of the beam parameters to \(\Delta V\) is small, and given by [19] \[\frac{w_{\text{out}}(\Delta V)}{w_{0}}=1+\frac{1}{2}F^{2}\Delta V^{2}-\frac{1}{8}F^{4}\Delta V^{4}+\ldots, \tag{4}\] \[\frac{1}{R_{\text{out}}(\Delta V)}=-F^{2}\Delta V^{3}+F^{4}\Delta V^{5}-\ldots, \tag{5}\] where \(F\) is given by Eq. (3).
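As a numerical cross-check of Eqs. (2)-(5), the following Python sketch (our illustration; it models the focal-power error \(\Delta V\) of the disk as a thin lens inserted at each disk reflection, which is a simplifying assumption of ours) propagates the Gaussian \(q\)-parameter of a collimated input beam with the design waist of Eq. (3) through concatenated double-pass segments:

```python
import numpy as np

lam = 1030e-9              # wavelength [m]
w0 = 2.75e-3               # design waist on the disk [m]
F = np.pi * w0**2 / lam    # Fourier parameter, Eq. (3); here ~23.1 m

M_4f = np.array([[-1.0, 0.0], [0.0, -1.0]])
M_FT = np.array([[0.0, F], [-1.0 / F, 0.0]])

def lens(dV):
    # Thin-lens error of the disk: focal power deviates by dV from design.
    return np.array([[1.0, 0.0], [-dV, 1.0]])

def double_pass(dV):
    # One 4f - disk - FT - disk - 4f segment, cf. Eq. (2), with lens errors.
    return M_4f @ lens(dV) @ M_FT @ lens(dV) @ M_4f

def output_waist(dV, n_segments=10):
    # 20 passes over the disk correspond to 10 double-pass segments.
    M = np.linalg.matrix_power(double_pass(dV), n_segments)
    q_in = 1j * np.pi * w0**2 / lam        # collimated beam with waist w0
    q_out = (M[0, 0] * q_in + M[0, 1]) / (M[1, 0] * q_in + M[1, 1])
    return np.sqrt(-lam / (np.pi * np.imag(1.0 / q_out)))

for dV in [0.0, 0.005, 0.01, 0.02]:        # focal-power error [1/m]
    print(f"dV = {dV:5.3f} 1/m -> w_out = {output_waist(dV)*1e3:.3f} mm")
```

For \(\Delta V=0\) the design waist \(w_{0}\) is reproduced exactly, and for small \(\Delta V\) the deviation grows only in higher orders of \(F\Delta V\), in line with Eq. (4) above and the 20-pass expansion given next.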
For a 20-pass amplifier, as used in this study, the sensitivity of the beam parameters is \[\frac{w_{\text{20-pass}}(\Delta V)}{w_{0}}=1+50F^{4}\Delta V^{4}+\ldots, \tag{6}\] \[\frac{1}{R_{\text{20-pass}}(\Delta V)}=10F^{2}\Delta V^{3}+\ldots. \tag{7}\] As illustrated in Fig. 1, the beam parameters change only marginally for small \(\Delta V\) (in the stability zone), and the width of this stability zone is independent of \(N\) contrary to the 4f-based design [19, 15]. As visible in Fig. 2 the width of the stability zone strongly depends on \(w_{0}\) which is a consequence of the design of the FT multipass amplifier as a resonator. Nevertheless, the stability zone is still noticeable in contrast to purely 4f-based designs [20, 25]. The stability against small changes of the focal power of the disk and the suppression of higher order aberrations by the FT not only make long-term operation of the amplifier robust but also simplify commissioning of the system (the amplifier can be initially set up at zero pump load and the beam shape always remains Gaussian). Figure 1: Variation of the output beam size **a**) and phase front curvature **b**) against changes \(\Delta V\) in the focal power of the disk for 2-, 8- and 20-passes in our hybrid-amplifier. The Fourier propagation of the amplifier was set up to accommodate an input beam of waist \(w_{0}=2.75\) mm, corresponding to \(F=23.1\) m. ## 3 Optical layout In practice, it is crucial to shorten the FT propagation by realizing it as a Galilean telescope [19]. Such a telescope can be realized by a concave-shaped disk and a convex-shaped mirror, with the FT being performed over a round-trip through the telescope, i.e., from just before the disk back to just after the disk. The layout of the telescope is then calculated by matching the Galilean telescope to the ABCD matrix given in Eq. (2). A realization of such an amplifier is shown in Fig. 3 where the separation \(d_{1}\) between the disk and the convex mirror and the separation between the convex mirror and the end mirror are given by [20] \[d_{1} =f_{\rm vex}+f_{\rm D}\frac{F}{F-f_{\rm D}}, \tag{8}\] \[d_{2} =f_{\rm vex}+\frac{1}{2}\left(\frac{f_{\rm vex}}{f_{\rm D}}\right)^{2}F-\frac{f_{\rm vex}^{2}}{2F}, \tag{9}\] where \(f_{\rm D}\) is the disk's focal length, and \(f_{\rm vex}\) is the focal length of the convex mirror. The beam waist \(w_{2}\) on the end-mirror M\({}_{2}\) is given by \[w_{2}=\frac{\sqrt{2}}{2}w_{0}f_{\rm vex}\left(\frac{1}{F}-\frac{1}{f_{\rm D}}\right), \tag{10}\] which serves to evaluate potential issues related to laser-induced damage. While it is typically advantageous to realise the Galilean telescope in the FT branch with a curved disk, we had to use flat disks. Indeed, to maximize the extractable energy from the disk, i.e., to minimize amplified spontaneous emission effects, we used a rather thick and low-doped disk (600 μm, 2.3 at.%) [20]. It turned out to be problematic to contact such disks on curved diamond heat-spreaders so we contacted the disk on a flat diamond. The resulting focal length of the disk was \(f_{\rm D}\approx 10-15\,\mathrm{m}\), which is too large for a standard Galilean telescope setup. However, if the 4f-stage is detuned, i.e., the separation between the focusing elements is changed from \(2f\) to \(2f+\delta\), an effective lens is produced at the disk.
Indeed, the ABCD matrix of such a detuned 4f-stage is given by \[M_{\rm 4f}(\delta)=\begin{pmatrix}1&f\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ -\frac{1}{f}&1\end{pmatrix}\begin{pmatrix}1&2f+\delta\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ -\frac{1}{f}&1\end{pmatrix}\begin{pmatrix}1&f\\ 0&1\end{pmatrix}=\begin{pmatrix}-1&0\\ \frac{\delta}{f^{2}}&-1\end{pmatrix}, \tag{11}\] which is equivalent to a 4f-stage followed by a thin lens of focal length \(f^{\prime}=f^{2}/\delta\). Placing the disk directly after the detuned 4f-stage thus allows tuning of the disk's effective focal length. With a detuned 4f-stage, Eq. (8) and Eq. (9) are still valid when replacing \[\frac{1}{f_{\rm D}}\rightarrow\frac{1}{f_{\rm eff}}=\frac{1}{f_{\rm D}}+\frac{1}{f^{\prime}}. \tag{12}\] The output beam parameters are again stable against variations of \(\Delta V\) since the round-trip ABCD matrix still has the form given by (2). Figure 2: Variation of the output beam size **a)** and phase front curvature **b)** against changes \(\Delta V\) in the focal power of the disk for various beam sizes in a 20-pass hybrid-amplifier with \(w_{0}=1.75\), 2.75 and 3.75 mm, corresponding to F = 9.3, 23.1 and 42.9 m, respectively. During operation, the effective lens introduced by the 4f-stage can also be employed to compensate for deviations of the focal power of the disk from its design value (e.g. due to thermal lensing under high pump load or manufacturing uncertainties) [20, 26]. The detuning \(\delta=\delta_{\mathrm{D}}+\delta_{\mathrm{V}}\) can be split into two parts: a fixed part \[\delta_{\mathrm{D}}=f^{2}\left(\frac{1}{f^{\prime}}-\frac{1}{f_{\mathrm{D}}}\right), \tag{13}\] used to set \(f_{\mathrm{eff}}\) to realize the nominal design, and a variable part \[\delta_{\mathrm{V}}=-f^{2}\Delta V \tag{14}\] used to compensate for uncertain \(\Delta V\). Operation in the center of the stability zone of the amplifier can be achieved by fine tuning \(\delta\), resulting in optimal stability of the output beam parameters against thermal lensing. Figure 3 shows the calculated beam size in the basic double-pass segment of our hybrid amplifier. While the black curves indicate the beam size for a disk with nominal focal power \(1/f_{\mathrm{D}}\approx 0.01\) m\({}^{-1}\) (i.e., about 10 m focal length), the red and blue curves illustrate the situation where the disk's focal power was changed by \(\Delta V=\pm 0.02\) m\({}^{-1}\), respectively. The 4f-stage is realized with two concave mirrors of focal length \(f=0.75\) m. The separation between the two concave mirrors was increased by \(\delta=0.515\) m to yield \(f_{\mathrm{eff}}\approx 1.5\) m. The FT propagation was designed to keep the beam radius (1/e\({}^{2}\)) on the disk at \(w_{0}=2.75\) mm while preserving a large enough waist \(w_{2}\approx 0.8\) mm on the end-mirror M\({}_{2}\). The convex mirror used in the Galilean telescope of the FT branch has focal length \(f_{\mathrm{vex}}=-0.5\) m. Inserting these numbers into Eq. (8) and Eq. (9) yields \(d_{1}=0.52\) m and \(d_{2}=2.6\) m, respectively. Figure 4 shows the calculated beam size along the whole 20-pass amplifier. The comparison between the various curves (defined as in Fig. 3) demonstrates that the beam size at the disk as well as the output beam size and its divergence are insensitive to changes of the disk's focal power \(\Delta V\). This stability is provided by the FT part of the propagation in our hybrid multipass amplifier.
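For concreteness, a short numerical check of the layout formulas is sketched below (our illustration; the disk focal length \(f_{\rm D}=10\) m is an assumption taken from the quoted 10-15 m range, and the sign conventions of the reflective geometry are simplified, so the printed values are indicative rather than a reproduction of the design):

```python
import numpy as np

lam = 1030e-9                    # wavelength [m]
w0 = 2.75e-3                     # design waist on the disk [m]
F = np.pi * w0**2 / lam          # Fourier parameter, Eq. (3)

f = 0.75                         # focal length of the 4f mirrors [m]
delta = 0.515                    # detuning of the 4f-stage [m]
f_vex = -0.5                     # convex telescope mirror [m]
f_disk = 10.0                    # assumed disk focal length [m] (10-15 m range)

f_prime = f**2 / delta           # effective lens of the detuned 4f-stage, Eq. (11)
f_eff = 1.0 / (1.0 / f_disk + 1.0 / f_prime)             # Eq. (12)

d1 = f_vex + f_eff * F / (F - f_eff)                     # Eq. (8) with f_D -> f_eff
d2 = f_vex + 0.5 * (f_vex / f_eff)**2 * F - f_vex**2 / (2 * F)  # Eq. (9)
w2 = (np.sqrt(2) / 2) * w0 * abs(f_vex * (1 / F - 1 / f_eff))   # |Eq. (10)|

print(f"F     = {F:6.2f} m")
print(f"f_eff = {f_eff:6.3f} m")
print(f"d1    = {d1:6.3f} m")
print(f"d2    = {d2:6.3f} m")
print(f"w2    = {w2 * 1e3:6.3f} mm")
```

With these assumptions the computed \(d_{1}\) and \(d_{2}\) land close to the quoted 0.52 m and 2.6 m; the residual differences stem from the assumed \(f_{\rm D}\) and the simplified sign conventions.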
## 4 Experimental realization Figure 5 shows a schematic of the realized amplifier. The input beam is linearly polarized, has a beam quality of M\({}^{2}<1.1\) and a 1/e\({}^{2}\) radius of 2.75 mm [24]. It is injected into the amplifier through a thin-film polarizer (TFP) and relay imaged to the disk via the 4f-stage (consisting of M\({}_{\text{cav, 1}}\) and M\({}_{\text{cav, 2}}\) separated by two plane folding mirrors M\({}_{\text{plan, 1}}\) and M\({}_{\text{plan, 2}}\)). Figure 3: Beam envelope (1/e\({}^{2}\) intensity-radius) in the basic double-pass segment of our hybrid amplifier with a detuned 4f-stage. The input beam has a waist of 2.75 mm. The location of the disk is shown in magenta. The black line indicates the beam size for a disk with nominal focal power, i.e., \(\Delta V=0\). The red and blue lines are propagations for \(\Delta V=\pm 0.01\) m\({}^{-1}\). The beam then enters the FT part of the amplifier where it is guided from the disk, via the convex mirror M\({}_{\text{vex}}\) and a folding mirror, to array mirror 1. From there, the beam is directed to end-mirror M\({}_{\text{2, 1}}\) which is equipped with a quarter-wave plate that ensures 90\({}^{\circ}\) rotation of the polarization after the double pass. Upon reflection again at the disk, the beam has undergone a Fourier transform (i.e., the profile of the beam leaving the disk corresponds to the FT of the beam impinging for the first time on the disk). Figure 4: Beam envelope in our 20-pass hybrid amplifier. The location of the disk is shown in magenta. The black line indicates the beam size for a disk with nominal focal power, i.e., \(\Delta V=0\). The red and blue lines are propagations for \(\Delta V=\pm 0.01\) m\({}^{-1}\). Figure 5: Schematic of the hybrid 20-pass amplifier. Color coding: Disk – magenta, 4f-stage – green, Fourier propagation – blue, end-mirrors – cyan, thin film polarizer and wave plates – yellow. Inset: Mirror array with the mirrors numbered in order of use. The symbols indicate where the symmetry axis of the corresponding optical element intersects the plane of the mirror array. All four locations act as point reflectors leading to the indicated propagation sequence. The yellow shaded mirrors of the end-mirror array have a quarter-wave plate placed in front of them. Since the polarization was rotated, the beam is now imaged to end-mirror M\({}_{1}\) undergoing a reflection at the TFP. In the following round trips, the 4f-stage allows angular multiplexing on the disk. Thus, a second mirror array in the 4f branch of the amplifier is not required. The small angular spread of the beams on the disk is amplified by M\({}_{\text{vex}}\). The mirror array sends the beam alternately to M\({}_{2,2}\) and M\({}_{2,3}\) and the beam is kept circulating in the amplifier by reflecting at the TFP. On the final pass, the beam is sent over M\({}_{2,4}\), which is also equipped with a quarter-wave plate. In a double-pass through this wave plate, the polarization of the beam is rotated back so that the beam is transmitted by the TFP and exits the amplifier. Since M\({}_{2,1}\) and M\({}_{2,4}\) are separated vertically, the in- and out-going beams propagate under different angles and can easily be separated. The propagation scheme over the mirror array and the end-mirrors is sketched in the inset of Fig. 5. While purely FT based amplifier designs are well known to deliver high-quality beams [16, 27], the addition of a 4f-stage to such a multipass amplifier is new and provides several advantages.
For example, as mentioned, our multipass amplifier design requires on average only one array mirror per pass over the disk thanks to the 4f-stage. This is half the number of mirrors used in previous resonator-based designs [15, 16, 25]. Furthermore, the 4f-stage allows for small incidence angles on the disk so that the angular spread between individual passes is small. The mirror array can thus be placed relatively far from the disk, in the telescope part of the resonator segment where the beam diameter is small, and 0.5" diameter array mirrors can be used. This minimizes astigmatism and reduces both cost and footprint of the multipass amplifier. The 4f-stage also helps to mode match the input beam to the eigenmode of the multipass amplifier. The key idea is to measure the beam size on the unpumped disk versus effective focal power of the disk [20], where the distance between the focusing elements in the 4f-part is varied to tune \(\Delta V\) according to (14) (see Figure 6). In such plots, the beam size of a properly mode matched beam will show minimal oscillation around \(\Delta V=0\), i.e., around the nominal value of the focal length of the disk. During this procedure, the disk must not be pumped, since the soft aperture of the pump spot would suppress these oscillations [19, 20]. In this data set, the beam size was measured after 6 passes over the disk as a compromise between sensitivity (sufficiently large oscillation signal) and time needed for alignment. In Fig. 6a, the input beam is estimated to converge with half angle \(\theta=0.15\,\text{mrad}\), which explains the relatively large oscillations in beam size [20]. In Fig. 6b, the collimation of the input beam is optimized by slightly moving one of the telescope lenses used to mode match the input beam to the amplifier. The oscillations around \(\delta_{\text{V}}=0\,\text{mm}\) are suppressed, meaning that the beam is correctly injected. The beam size is different in X (horizontal) and Y (vertical) directions because of a slight astigmatism, mainly introduced by the 4f-stage. ## 5 Experimental results With a properly mode matched input beam with a waist of \(w_{0}=2.75\,\text{mm}\) at the disk, a small signal gain of about 20 was reached with 20 passes (reflections) over the disk. As shown in Fig. 7, the output beam was close to diffraction limited with M\({}^{2}\leq 1.17\) in horizontal and vertical direction (measurement taken according to ISO 11146 with Thorlabs BP209-IR2). The pump radiation is delivered by a volume Bragg grating stabilized diode stack running at 969 nm at the zero-phonon line of Yb:YAG. Compared to conventional pumping at 940 nm, the heat load is reduced by about 30 %, which further reduces thermal lensing effects in the amplifier. Still, the pump intensity was limited to 3.5 kW/cm\({}^{2}\) (pump spot diameter of 8 mm at full width half maximum) to avoid overheating the water-cooled disk. Seeded by pulses from our single-frequency TDL [24], the hybrid 20-pass amplifier delivered single-frequency pulses of 60 ns and 110 ns length at 275 mJ and 330 mJ, respectively. The repetition rate was fixed at 100 Hz. To the best of our knowledge, this is the highest single-frequency pulse energy obtained from a TDL to date. The pulsed gain was about 7 at 40 mJ input pulse energy for both pulse lengths. Figure 8 shows how the gain (red) decreases as the extracted pulse energy (black) increasingly depletes the inversion in the disk.
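The trend of Fig. 8 can be illustrated with a simple pass-by-pass Frantz-Nodvik saturation model (a schematic sketch of ours, not the analysis used in the paper; the saturation fluence, beam area and flat-top-profile approximation are assumptions chosen only to show the qualitative behaviour):

```python
import numpy as np

F_sat = 9.6                    # Yb:YAG saturation fluence [J/cm^2] (our assumption)
area = 0.12                    # effective beam area on the disk [cm^2] (our assumption)
g_disk = np.log(20.0) / 20.0   # single-pass ln-gain so that 20 cold passes give 20

def amplify(e_in_mJ, n_passes=20):
    # Frantz-Nodvik update per pass; the extracted fluence depletes the
    # stored inversion shared by all passes over the same disk.
    g, f = g_disk, e_in_mJ * 1e-3 / area
    for _ in range(n_passes):
        f_out = F_sat * np.log(1.0 + np.exp(g) * (np.exp(f / F_sat) - 1.0))
        g -= (f_out - f) / F_sat
        f = f_out
    return f * area * 1e3      # output energy [mJ]

for e_in in [1, 10, 20, 40]:
    e_out = amplify(e_in)
    print(f"E_in = {e_in:3d} mJ -> E_out = {e_out:6.1f} mJ, gain = {e_out/e_in:5.2f}")
```

Each pass converts part of the stored inversion into pulse energy, so the logarithmic gain drops by the extracted fluence divided by the saturation fluence; this reproduces the decreasing gain at increasing input energy, though the printed numbers are only indicative under the stated assumptions.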
At around 275 mJ, the 60 ns pulses ionized the air in the focus of the 4f-stage, which prevented further energy scaling. Assuming a diffraction limited spot size and basic scaling laws for laser induced breakdown of air [28], this energy approximately agrees with expectations. The input pulse energy was not increased above 50 mJ to avoid laser-induced damage in the oscillator. After aligning the amplifier under full pump load, the system was continuously operated overnight, and showed only marginal misalignment over this period, as can be seen in Fig. 9. The pointing jitter was only \(<3\,\%\) and \(<1.5\,\%\) of the beam diameter (in this case 5.5 mm) in the vertical and horizontal direction, respectively, underlining the good stability against small misalignments of this amplifier. Thanks to the compact mirror array and the space saving 4f-propagation, the footprint of the multipass amplifier was about \(1\,\mathrm{m}\times 0.4\,\mathrm{m}\), including the pump optics. Figure 6: Beam size at the disk on the 6\({}^{\mathrm{th}}\) pass vs. effective focal power of the disk for the two transverse directions (X, Y). The effective dioptric power of the disk was scanned by varying \(\delta_{\mathrm{V}}\), the separation between the imaging mirrors of the 4f-stage. The solid lines are fits with input beam diameter \(w_{0}=5.5\,\mathrm{mm}\) and beam divergence a) \(\theta=0.15\,\mathrm{mrad}\) and b) \(\theta=0.126\,\mathrm{mrad}\) (note that a “collimated” Gaussian beam has \(\theta=\lambda/\pi w_{0}\neq 0\)). Figure 7: M\({}^{2}\) measurement of the output beam of the 20-pass hybrid-type amplifier pumped at 3.5 kW/cm\({}^{2}\). Inset: Far-field beam profile with 1/e\({}^{2}\) radius of 2.75 mm. ## 6 Discussion and conclusion We demonstrated, for the first time, a compact hybrid multipass amplifier based on a succession of optical Fourier transform and 4f-relay imaging. While we realized a thin-disk amplifier with this architecture, the general scheme should be applicable to most laser types. Fourier transforming the beam in one part of the amplifier allows a passive compensation of phase front curvature errors and makes this amplifier suited for high-power applications where excellent beam quality is required. The 4f-stage in the other part of the amplifier is used to compensate shifts in focal length of the pumped disk and helps to mode match the injected beam. It also helps keeping the whole system compact. Moreover, changing the effective focal length of the disk by detuning the 4f-stage makes the system more flexible. If needed, \(f_{\mathrm{eff}}\) and the resonator layout can be adjusted to increase the beam size on the end mirrors in the FT part of the amplifier without changing the optics, to avoid laser-induced damage. Figure 8: Output pulse energy and gain of the hybrid 20-pass amplifier versus input pulse energy for 110 ns and 60 ns long input pulses. A pulsed gain of 6.9 was achieved for 110 ns pulses at an output pulse energy of 330 mJ. At around 275 mJ, optically induced breakdown of the air in the focus of the 4f-stage started to take place. The disk was 600 μm thick with 2.3 % doping, the beam size on the disk was \(w_{0}=3\) mm and the pump intensity was 2.5 kW/cm\({}^{2}\) at 1300 W pump power. Figure 9: Measured output beam position after 20 passes under full pump load and low-power CW operation. The jitter is about 2.9 % of the beam radius (2.75 mm) in the vertical (Y-direction) and about 1.3 % in the horizontal (X-direction).
## Funding The European Research Council (ERC) through CoG. #725039; the Swiss National Science Foundation through the projects SNF 200021_165854 and SNF 200020_197052; the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Initiative EXC 1098 PRISMA (194673446), Excellence Strategy EXC PRISMA+ (390831469) and DFG/ANR Project LASIMUS (DFG Grant Agreement 407008443); the French National Research Agency with project ANR-18-CE92-0030-02; ## Acknowledgments We gratefully acknowledge the support of the ETH Zurich electronics workshop. In particular, we thank Diogo Di Calafiori and Dr. Werner Lustermann. We also thank Dr. Marcos Gaspar, Dr. Carlo Vicario, Dr. Cezary Sidle and Stefan Mair from PSI. ## Disclosures The authors declare that there are no conflicts of interest related to this article. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2301.07171
fMRI-based Static and Dynamic Functional Connectivity Analysis for Post-stroke Motor Dysfunction Patient: A Review
Functional magnetic resonance imaging (fMRI) has been widely utilized to study the motor deficits and rehabilitation following stroke. In particular, functional connectivity (FC) analyses with fMRI at rest can be employed to reveal the neural connectivity rationale behind this post-stroke motor function impairment and recovery. However, the methods and findings have not been summarized in a review focusing on post-stroke functional connectivity analysis. In this context, we broadly review the static functional connectivity network analysis (SFC) and dynamic functional connectivity network analysis (DFC) for post-stroke motor dysfunction patients, aiming to provide method guides and the latest findings regarding post-stroke motor function recovery. Specifically, a brief overview of the SFC and DFC methods for fMRI analysis is provided, along with the preprocessing and denoising procedures that go into these methods. Following that, the current status of research in functional connectivity networks for post-stroke patients under these two views was synthesized individually. Results show that SFC is the most frequent post-stroke functional connectivity analysis method. The SFC findings demonstrate that the stroke lesion reduces FC between motor areas, and the FC increase positively correlates with functional recovery. Meanwhile, current DFC analysis in post-stroke research has only revealed the tip of the iceberg of its potential, and exceptionally rapid progress in its development can be expected.
Kaichao Wu, Beth Jelfs, Katrina Neville, John Q. Fang
2022-12-15T04:16:07Z
http://arxiv.org/abs/2301.07171v1
fMRI-based Static and Dynamic Functional Connectivity Analysis for Post-stroke Motor Dysfunction Patient: A Review ###### Abstract Functional magnetic resonance imaging (fMRI) has been widely utilized to study the motor deficits and rehabilitation following stroke. In particular, functional connectivity (FC) analyses with fMRI at rest can be employed to reveal the neural connectivity rationale behind this post-stroke motor function impairment and recovery. However, the methods and findings have not been summarized in a review focusing on post-stroke functional connectivity analysis. In this context, we broadly review the static functional connectivity network analysis (SFC) and dynamic functional connectivity network analysis (DFC) for post-stroke motor dysfunction patients, aiming to provide method guides and the latest findings regarding post-stroke motor function recovery. Specifically, a brief overview of the SFC and DFC methods for fMRI analysis is provided, along with the preprocessing and denoising procedures that go into these methods. Following that, the current status of research in functional connectivity networks for post-stroke patients under these two views was synthesized individually. Results show that SFC is the most frequent post-stroke functional connectivity analysis method. The SFC findings demonstrate that the stroke lesion reduces FC between motor areas, and the FC increase positively correlates with functional recovery. Meanwhile, current DFC analysis in post-stroke research has only revealed the tip of the iceberg of its potential, and exceptionally rapid progress in its development can be expected. Stroke; Functional Connectivity; Static Functional Connectivity analysis; Dynamic Functional Connectivity analysis ## Introduction According to the Global Burden of Disease report [1], stroke is the second leading cause of death and the second largest cause of disability. Globally, there were 101 million stroke cases and 13.7 million new stroke survivors [2] in 2019, which is nearly five times the figure of 2013. As more than 60% of the survivors are left with severe sequelae, stroke has become the largest known cause of complex disability [3], and 77% of stroke survivors suffer from slow upper limb/hand movement disorder [4]. There are multiple causes for the quintupling of survivors in such a short period of time, including ageing, growing populations, improved post-stroke care, and increased risk factors, to name a few [2]. With the sharp growth of stroke survivors, effective post-stroke rehabilitation is of great necessity. Various restorative therapies can assist stroke survivors in improving their motor function deficits. They mainly include (i) dynamic splint-based therapy (e.g., physiotherapy, restraint-induced kinesiotherapy, gait therapy) [5; 6], (ii) electrical muscle stimulation (EMS) therapy [7; 8], (iii) device-driven therapy (e.g., robotics, brain-machine interfaces (BCI)) [9; 10; 11; 12], (iv) transcranial magnetic stimulation therapy (TMS) [13; 14], and (v) mirror therapy [15; 16]. The prospects for these therapies seem bright. Hand function, for example, shows signs of recovery even during the chronic recovery phase [11]. However, the effectiveness of these rehabilitation measures varies from individual to individual.
Not every patient recovers limb function to the same extent after a stroke, and even for individuals with identical levels of initial functional impairment, the rehabilitation effects vary greatly between people [17, 18]. Many unknown factors influence the outcome of recovery [19, 20]. Since the lesions that result in post-stroke disability are located in the brain, to gain insight into these factors it is crucial to comprehend the processes in the brain when functional recovery occurs. Emerging evidence has revealed that post-stroke recovery is a time-dependent process: the interrupted connectivity within the central nervous system reorganizes in structure and function over time to adapt to the damage caused by a stroke. Hence, many cross-sectional and longitudinal studies have been carried out recently to gain insight into this process. By taking advantage of the dynamic alteration of the brain connectivity network, one can recognize, for instance, how the brain adapts structurally and functionally over time and how this adaptation underpins functional recovery [21]. In addition, the structural or functional connectivity dynamics in brain network reorganization can help reveal what factors promote or impede the recovery process, thus informing more productive intervening therapies in accordance with this neural network modulation and ultimately facilitating better rehabilitation programs. Structurally, stroke lesions are considered one of the most important factors causing network disconnection. Interestingly, after stroke, the lesion-surrounding regions can bypass the damage and rebuild connections to help the patient relearn the lost function [20]. For example, one study [22] demonstrated that corticospinal tract (CST) connections between the cerebellum and the primary motor cortex are generated after stroke, and that these newly found physical connections are associated with skillful motor control. As a pathway connecting the motor cortex with motor neurons projecting from the spinal cord, the CST plays an essential role in the motor control system [23]. Once the CST is damaged following stroke, alternative pathways are recruited to compensate for the lost connections [22], resulting in further changes in structural connectivity [24, 25]. Beyond impairing local physical connectivity, a stroke lesion can also alter the neural interaction between directly or indirectly connected brain areas [26]. This interaction (or communication) manifests as functional collaboration between brain regions, which is referred to as Functional Connectivity (FC) [27]. Over the past few decades, changes in functional network architecture during stroke recovery have been continually reported. A common finding in numerous studies of post-stroke FC is decreased interhemispheric FC at the initial stage after stroke; yet this abnormal FC develops and returns toward a nearly normal level [28, 29, 30], and this gradually enhanced FC has been demonstrated to correlate favorably with motor recovery in the subacute or chronic phase [31, 32, 29]. Since functional connectivity has become an essential metric in neuroscience related to stroke recovery, a variety of means of recording brain activity have been utilized to explore brain functional connectivity patterns, including EEG (electroencephalography), MEG (magnetoencephalography), and fMRI (functional magnetic resonance imaging).
EEG and MEG measure FC by recording electromagnetic neural activity, whilst fMRI achieves this by measuring the consistency of the blood-oxygenation-level-dependent (BOLD) signals across brain regions over time. Among these methods, fMRI is a popular non-invasive imaging method due to its high spatial resolution [20, 33]. However, whether connectivity is evaluated at rest or while performing a task is a crucial factor that cannot be overlooked in post-stroke FC analysis with fMRI, as these two approaches often produce variable results across studies. Resting-state (RS) fMRI has certain advantages compared with task-based fMRI. The first is that studies with RS-fMRI do not need to design a movement paradigm or quantify motor executive performance, which is a hard part of task-based fMRI studies and can vary greatly; measures of task performance differ, for example, between a hand grip task [34] and a hand movement task [35]. Importantly, RS-fMRI provides a unified manner of comparison between studies, thus facilitating further understanding of recovery mechanisms, whereas the recovery mechanisms discovered in studies with task-based fMRI may only be effective for particular tasks. Therefore, this review concentrates on research employing resting-state fMRI to examine brain functional connectivity following stroke. In addition, when analyzing functional connectivity from the perspective of the method processing the RS-fMRI data, the distinction between studies lies in the time scales over which the brain functional connectivity is assumed to evolve. Conventional methods assume that the FC measures are stationary over a full MRI scan, whereas it has been shown that FC fluctuates even over seconds [36, 37] and that the static FC network is too simplistic to capture the complete representation of FC evolution [38]. Recently, a growing number of methods have been introduced to explore dynamic functional connectivity following stroke, making an effort to bring an all-new perspective to investigating the recovery mechanism. The longitudinal changes of resting-state functional connectivity during motor recovery after stroke were evaluated in 2011 [27], and the dynamic development of brain connectivity after stroke was documented in 2018 [21]. However, a review concentrating on post-stroke functional connectivity analysis from the standpoint of the techniques, summarizing the most recent findings, has not yet been provided. Hence, this review attempts to provide an overview of the latest progress in the changes to brain motor functional architecture following stroke from the perspective of static and dynamic functional networks (SFC and DFC). By synthesizing the present understanding of resting-state functional networks from various viewpoints, this review can provide potential method guides and feasible references for mapping the post-stroke neuroplasticity of brain circuits. Additionally, the comprehension of resting-state functional networks under different analysis methods can provide viable guidance in designing dynamic neural rehabilitation interventions to benefit stroke patients.
## Methodology

### Literature search

The literature search was restricted to English-language articles published between January 2000 and May 2022 in the following electronic databases: PubMed, Web of Science, IEEE Xplore, ScienceDirect, MEDLINE (OvidSP), CDSR (Cochrane Database of Systematic Reviews), Scopus, Compendex, Wiley Online Library, Academic Search Premier, and Springer Link. The electronic search terms were Stroke AND fMRI AND Functional Connectivity AND Motor deficit AND Rehabilitation. Studies that used task-related fMRI, involved effective connectivity, or addressed functions beyond motor function were excluded. In addition, this review includes studies that explore the development of functional connectivity analysis methods, particularly those that apply these approaches to post-stroke motor function impairment and rehabilitation.

### Terms and definitions

_Post-stroke stage_ Throughout this review, the terms acute, subacute, and chronic stages refer to the three phases of recovery after stroke. The timeline of these phases, which spans from the time of the initial stroke to years afterward, is summarized in Figure 1. Their definitions are in accordance with the recommendation of [39] and previous studies of stroke rehabilitation programs [40, 41].

Figure 1: The phases of post-stroke recovery.

_Functional Connectivity (FC), Functional Network Connectivity (FNC), and Functional Connectivity Network (FCN)_ The terms Functional Connectivity (FC), Functional Network Connectivity (FNC), and Functional Connectivity Network (FCN) are frequently used in the studies reviewed in this paper. These terms are so similar that readers risk confusing them if they are not attentive. Here, FC is defined as a correlation (or other statistical dependency) among spatially remote brain regions [42]. The process of inferring functional connectivity among multiple brain regions by calculating pairwise correlations is summarized in Figure 2(a). FNC can be seen as a higher-level FC, referring to a statistical dependency among large-scale functional networks (or functional domains) in the brain [43], for example the default mode network (DMN) [44] and the sensorimotor network (SMN) [45]. A representation of the brain regions involved in these functional networks can be found in Figure 2(b). Unless otherwise stated, FC and FNC refer to pairwise Pearson correlations in this review. By contrast, a Functional Connectivity Network (FCN) is a concept based on FC and FNC: it refers to a functional connectivity graph where the vertices represent brain regions and the edges represent the strength of the FC/FNC between these regions.

_Motor recovery measurement_ Clinically, motor recovery after stroke is measured as the absolute difference between baseline and subsequent motor function scores [46]. This is best achieved by comparing motor function assessments at longitudinal time points. In the studies reviewed in this paper, several methods are used to evaluate post-stroke motor function, including the Fugl-Meyer Assessment (FMA) scale [47], which evaluates patients' single-joint and multi-joint motor ability, movement synergies, finger individuation ability, movement speed, impairment measures, ataxia, and motor reflexes, and the Paralysed Hand Function Assessment Scale [48], which measures the degree of hand paralysis in stroke patients. Other evaluation methods include the Action Research Arm Test (ARAT) [49] and the Modified Rankin Scale (mRS) [50], etc.
Beyond that, comparing the recovery stages that stroke patients are in at different time points helps demonstrate the motor recovery process. One of the most well-known methods of measuring stroke recovery stages is the Brunnstrom stages, also known as the Brunnstrom approach [51]. By classifying assessment scores into distinct zones, the National Institutes of Health Stroke Scale (NIHSS) or the evaluation scales indicated above are widely used to estimate the patient's stroke recovery stage. This partition can also be utilized to divide a patient sample into stroke-affected subgroups. Hence, in many cross-sectional studies, the relationship between FC and motor recovery can be investigated by examining the FC alteration between different groups [52, 53, 54, 55].

Figure 2: (a) Inferring functional connectivity among multiple brain regions by calculating pairwise correlations; FCN: a functional connectivity graph can be generated where the vertices represent the brain regions and the edges represent the strength of the FC/FNC between these regions. (b) The large-scale functional networks: the default mode network (DMN), which involves the medial prefrontal cortex (MPFC), posterior cingulate cortex (PCC), and bilateral hippocampus; and the sensorimotor network (SMN), which includes the supplementary motor area (SMA), primary motor cortex (M1), primary sensory area (S1), and posterior parietal cortex (PPC).

### fMRI processing for DFC and SFC

_Preprocessing and denoising RS-fMRI_ By sampling the brain's three-dimensional (3D) volume every 1-2 s (or faster), MRI scanning can obtain a brain landscape map at millimetric spatial resolution. The BOLD signal contrast can then be extracted within a full MRI scan, and the BOLD signal variance can represent the resulting brain neural activity through multivariate time series. As the BOLD signal is weak and suffers from noise of multiple sources, the raw fMRI data need to undergo extensive pre-processing before further analysis. Pre-processing of RS-fMRI signals includes the following steps:

* removing a number of volumes from the beginning to ensure a steady BOLD signal (typically 3 [56] or 10 [57]);
* slice timing correction;
* realignment for head motion;
* outlier detection for scrubbing;
* registration to structural images, segmentation [58], and lesion-masked normalization [59];
* spatial smoothing using a full-width-at-half-maximum (FWHM) Gaussian kernel (4 or 8 mm is recommended).

Note that this is one possible general pre-processing pipeline; the methods and their order can vary across specific applications and tasks. Despite the capacity of pre-processing to remove the majority of disturbances of brain activity, the BOLD signal often still contains considerable noise or non-neural variability due to a combination of physiological, outlier, and residual subject-motion effects [60, 61, 62]. These residual noise components in the BOLD signal introduce strong and observable biases in all functional connectivity measures. Therefore, there are typically additional strategies to remove or at least minimize these underlying interferences in the context of functional connectivity. These strategies are generally referred to as the denoising step, which includes linear regression of potential confounding effects (e.g., effects from the gray matter, white matter, and cerebrospinal fluid) in the BOLD signal, linear detrending, and temporal band-pass filtering.
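To make these denoising strategies concrete, the following Python sketch applies confound regression, detrending, and band-pass filtering with nilearn's signal-cleaning utility. The synthetic data, the six motion confounds, the repetition time, and the band-pass limits are illustrative assumptions, not values prescribed by the reviewed studies.

```python
# Minimal denoising sketch, assuming regional time series have already been
# extracted. Synthetic data stand in for real BOLD signals and confounds.
import numpy as np
from nilearn import signal

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 90))       # (n_timepoints, n_regions)
confounds = rng.standard_normal((200, 6))   # e.g. six head-motion parameters

cleaned = signal.clean(
    bold,
    confounds=confounds,   # linear regression of confounding effects
    detrend=True,          # linear detrending
    standardize="zscore",
    low_pass=0.1,          # temporal band-pass filtering (Hz)
    high_pass=0.01,
    t_r=2.0,               # repetition time in seconds (assumed)
)
print(cleaned.shape)
```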
Similarly, there is no gold standard for the denoising strategies; they vary across methods and tasks. Figures 3A and 3B exhibit various preprocessing steps and denoising strategies. Note that nearly all studies reviewed in this paper describe the preprocessing step, while the denoising step is rarely emphasized. Denoising has been shown to benefit FC estimation by improving signal quality and reducing motion artefacts [61]. Hence, these additional denoising strategies should be gradually introduced into stroke studies, as they can probably enlarge the detectable variability in FC alteration caused by stroke and thus promote more reliable and accurate results. Several existing open-source tools can perform the pre-processing and denoising steps and enable analyzable fMRI data to be obtained reliably and rapidly, including the Statistical Parametric Mapping software package (SPM), the FMRIB Software Library (FSL) [63], the Data Processing Assistant for Resting-State fMRI (DPARSF) [64], the CONN toolbox [60], and the graph-theoretical network analysis toolbox (GRETNA) [65].

### Time series extraction

After the raw RS-fMRI has been pre-processed and denoised, the time courses of the brain areas during the fMRI scan have to be extracted for further post-stroke functional connectivity analysis. The methods for time series extraction fall into two main categories: ROI-based and data-driven methods. The ROI-based method relies on predefined regions of interest (ROIs) to extract the time courses of brain areas. As the motor control region of the brain, the primary motor cortex (M1) is included as an ROI in nearly all investigations of post-stroke motor recovery. The supplementary motor area (SMA) is also frequently investigated as an ROI, and decreased connectivity between M1 and SMA has been found across studies of stroke patients. Beyond M1 and SMA, other regions, including the premotor cortex and the inferior frontal gyrus, are also predefined as ROIs to investigate inter-regional abnormal connectivity following stroke. In addition to manually set regions, anatomical atlases are often used to identify ROIs. Three anatomical atlases are typically utilized to define brain regions: the Destrieux [66], Harvard-Oxford [67], and AAL [68] atlases (see Figure 3C). The data-driven method, by contrast, does not need brain seeds to be picked beforehand and is free from prior knowledge of specific brain regions. Specifically, it uses multivariate voxel-wise projection techniques such as independent component analysis (ICA) or its variants (e.g., group ICA) to decompose the raw fMRI data into multiple independent components (ICs), each representing a brain area with independent activity. Figure 3C shows a diagram of the spatial-ICA-based time series extraction method. The ICs are then visually inspected to identify whether the brain areas they represent show activity caused by neural action or whether the activity is just caused by noise. The spatial map of an IC with true brain activity can be matched against previous anatomical brain function maps; meanwhile, its power-frequency curve has a specific fluctuating pattern (a peak at low frequency, a rapid decrease, and then relative stability). The identified ICs are recognized as intrinsic connectivity networks (ICNs), which can be utilized for further functional connectivity analysis between functional areas or for high-level functional connectivity between brain functional domains (e.g., the default mode, sensorimotor, subcortical, and cerebellar functional networks).

Figure 3: The pipeline of fMRI processing for SFC and DFC analysis. A. Preprocessing functional images. B. Denoising functional images. C. Atlas-based and spatial-ICA-based time series extraction. D1. SFC analysis. D2. Sliding-window-based DFC analysis.
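As a concrete illustration of atlas-based time series extraction, here is a minimal Python sketch using nilearn with the AAL atlas mentioned above. The input file name and repetition time are hypothetical, and the filtering settings are illustrative rather than recommended values.

```python
# Minimal sketch of atlas-based time series extraction with nilearn.
# "func.nii.gz" is a hypothetical preprocessed 4D fMRI file.
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker

atlas = datasets.fetch_atlas_aal()          # AAL parcellation
masker = NiftiLabelsMasker(
    labels_img=atlas.maps,
    standardize=True,     # z-score each regional time series
    detrend=True,         # remove linear trends
    low_pass=0.1,         # band-pass filtering, typical resting-state range
    high_pass=0.01,
    t_r=2.0,              # repetition time in seconds (assumed)
)
# Rows are time points, columns are atlas regions.
time_series = masker.fit_transform("func.nii.gz")
print(time_series.shape)  # (n_timepoints, n_regions)
```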
### Static and dynamic functional connectivity estimation

_Static functional connectivity estimation_ Typically, SFC estimates functional connectivity between two brain regions by calculating the pairwise correlation between their time series. In fMRI analysis, this correlation is frequently quantified with a correlation coefficient [69, 70, 71] that measures the similarity of the amplitudes of BOLD fluctuations. There are also studies using index-based phase-coupling measures, including coherence [72] or phase-locking relationships [73]. The pairwise correlations between region nodes or functional domains can be assembled into a Functional Connectivity Network (FCN) representing the brain's functional connectivity profile. Usually, before group-level FC analysis, the individual correlation-based FC is transformed to a normally distributed Z-score using Fisher's r-to-Z transformation; since the transformed value is not bounded by upper or lower limits, it is more convenient for further statistical analysis. Based on pairwise correlations, many SFC analysis methods exist, such as the FCN density analysis method [74], the regional homogeneity (ReHo) approach [75], and the Kendall correlation coefficient (KCC) approach [76]. In addition, considering the investigated brain regions as nodes and the pairwise correlations as edges, the functional connectome can be viewed as an adjacency matrix or graph. Hence, the FCN can naturally be analyzed with graph-based methods. For example, the clustering coefficients, global efficiency, and local efficiency of the FCN graph can be calculated with graph theory [77], and these graph-based measures can be integrated to analyze the functional connectivity of the brain.
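The basic SFC computation described above, pairwise Pearson correlation followed by Fisher's r-to-Z transform, can be expressed in a few lines of Python; the synthetic time series below merely stand in for extracted regional signals.

```python
# Minimal sketch of static FC estimation: pairwise Pearson correlation
# between regional time series, followed by Fisher's r-to-Z transform.
import numpy as np

rng = np.random.default_rng(0)
time_series = rng.standard_normal((200, 90))   # (n_timepoints, n_regions), synthetic

fc = np.corrcoef(time_series.T)                # (n_regions, n_regions) correlation matrix
np.fill_diagonal(fc, 0.0)                      # ignore self-connections before the transform
fc_z = np.arctanh(fc)                          # Fisher r-to-Z: z = arctanh(r)

# fc_z is unbounded and approximately normal, suitable for group-level statistics.
print(fc_z.shape)
```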
_Dynamic functional connectivity estimation_ The development of DFC analysis for RS-fMRI is motivated by the fact that the FC/FNC between regions or voxels may change over short periods [78]. Accumulating evidence has demonstrated that FC/FNC alteration follows certain functional coupling patterns over time [79, 80], and that these fluctuating coupling patterns support the brain regions in processing different functional requirements [80]. A multitude of approaches to quantify these dynamic patterns and their properties has been introduced (detailed reviews of dynamic functional connectivity can be found in [81, 80, 82, 38]). The most common and straightforward way to measure DFC is to calculate the FC/FNC between regions/voxels within consecutive window periods. The window length is usually constant, and a sliding scheme spanning the full scan is run. The length of the sliding step is generally less than the window length; thus, two adjacent windows can partially overlap. With the sliding window method, thousands of FC/FNC matrices can be generated at the window-length level. These successive matrices exhibit the dynamic evolution of the brain state within an MRI scan, so the fluctuations in the connectivity time courses can easily be assessed. Measurements to quantify these fluctuations include the temporal standard deviation [83, 84, 85], the coefficient of variation [86], and the amplitude of low-frequency fluctuations (ALFF) [87, 88]. In addition, matrix factorization techniques can summarize a series of FNC patterns into multiple FNC states, whose recurrent or repeating connectivity patterns represent the underlying functional coupling patterns. The typical method to summarize the connectivity patterns is K-means clustering [89], which has been widely used in fMRI studies for DFC analysis since the study by Allen et al. [90]. The clustering approach reveals reoccurring connectivity patterns in the windowed FC matrices, from which the dynamic brain states corresponding to ongoing processing can be inferred. Another representative approach for summarizing brain states is the innovation-driven co-activation patterns (iCAPs) method [82]. The iCAPs method does not rely on the sliding window scheme but needs the assistance of a clustering algorithm: it identifies transients (or innovation frames) in the BOLD signals and feeds them into clustering methods to obtain the iCAPs [38, 91]. Besides, there are other DFC methods, such as the principal component analysis (PCA) method [92], the dictionary learning (DL) method [93, 94], tensor decomposition (TD) methods [95], dynamic-community-detection-based methods [96, 97], the Hidden Markov Model (HMM) method [98], and the wavelet transform coherence (WTC) method [99]. However, in studies of post-stroke motor dysfunction, these methods are rarely investigated for DFC analysis.
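A minimal sketch of the sliding-window scheme with K-means state clustering described above follows; the window length, step, and number of states are illustrative hyper-parameters, not values recommended by the reviewed studies.

```python
# Minimal sketch of sliding-window DFC with K-means state clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ts = rng.standard_normal((300, 30))            # (n_timepoints, n_regions), synthetic

win, step = 50, 5                              # window length and sliding step (in TRs)
iu = np.triu_indices(ts.shape[1], k=1)         # upper-triangle indices (unique pairs)

windows = []
for start in range(0, ts.shape[0] - win + 1, step):
    fc = np.corrcoef(ts[start:start + win].T)  # windowed FC matrix
    windows.append(fc[iu])                     # vectorize unique connections
windows = np.array(windows)                    # (n_windows, n_pairs)

# Cluster windowed FC patterns into recurring connectivity states.
states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(windows)
print(states)                                  # state label per window
```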
#### SFC and DFC comparison

In principle, the SFC method assumes that the FC/FNC is stationary, i.e., that functional connectivity is constant during an MRI scan. Nevertheless, SFC ignores that the human brain is a dynamic system that fluctuates even at the time scale of milliseconds [100]. Therefore, DFC analysis approaches developed rapidly in the last decade to investigate the dynamic nature of the brain. Methodologically, SFC analysis approaches are based only on the average FC/FNC aggregated over an entire fMRI scan. Despite this methodological simplicity, SFC approaches can vividly show disease-induced FC alterations, which is particularly important when investigating the impact of damage on a specific brain functional area. For example, many post-stroke studies have employed SFC to investigate stroke-lesion-induced FC changes and to verify whether post-stroke FC reorganization can underpin functional recovery [52, 53, 54, 55, 101, 31, 28]. For that reason, SFC can provide a valuable reference for functional rehabilitation programs: many disease treatments have utilized invasive or non-invasive tools (e.g., transcranial magnetic stimulation [30, 102]) in clinical trials to help the affected brain motor function areas recover [103]. By contrast, DFC analysis approaches employ time-resolving or signal-decomposition methods to investigate a time-varying FC/FNC spanning the fMRI scan. They can extract relevant dynamic features that reflect functional network flexibility and brain state transitions. However, despite this potential of dynamic FC analysis, only a handful of studies have employed such time-resolved approaches to explore the dynamic neural mechanisms following stroke. In contrast to the thriving research on DFC methods in other neurological diseases such as Parkinson's disease [104] and Huntington's disease [105], DFC analysis of RS-fMRI following stroke is still developing. Besides, compared with SFC analysis, DFC approaches typically rely on complicated mathematical/probabilistic theory [46] (e.g., the HMM-based DFC analysis [98]); hence, DFC is probably not friendly to clinicians without mathematical or engineering backgrounds. Furthermore, due to the increased temporal-spatial resolution of fMRI, DFC is more sensitive to noise [80, 38]. Thus, the extra hyper-parameter setup (e.g., the window length and sliding step in the sliding-window DFC method) needs to be fine-tuned to verify the validity of the results [81, 80]. Overall, SFC and DFC both have advantages and disadvantages (summarized in Table 1). Compared with DFC, SFC analysis is more mature and has more applications in post-stroke fMRI analysis; however, DFC analysis can capture the intrinsic dynamic nature of the brain, which shows a promising prospect. Even though DFC models are rarely seen in post-stroke studies at present, they can be expected to boom in future post-stroke investigations.

### Static connectivity analysis

_Decreased functional connectivity is a common finding_ In numerous rehabilitation studies estimating resting-state FC, a decrease in interhemispheric FC is commonly found in the functional network, or at least between some regions, when comparing stroke patients with healthy controls. This reduction is caused by disruptions in multiple large-scale functional brain networks and has been viewed as one of the characteristics of motor impairment following stroke [28]. In detail, the lowest level of FC appears in the acute phase of stroke, and the earliest report of this finding is within a few hours post-stroke. Hence, interhemispheric FC reduction is often reported in studies of FC in acute ischemic stroke patients. For example, Liu et al. [74] observed disrupted interhemispheric FC between the motor cortices of acute stroke patients, which was associated with motor deficits. A recent study investigating dynamic structural and functional reorganization following motor stroke reported significantly lower interhemispheric FC in the first week [106]; this decrease in interhemispheric connectivity was more distinct than that in intrahemispheric connectivity. In addition, Nai-Fang et al. [107] observed decreased interhemispheric FC within the cortical motor network in first-time acute unilateral ischemic stroke patients. Specifically, the FC between the ipsilesional M1 and the contralesional M1, the contralesional postcentral gyrus (PoCG), and the dorsolateral premotor cortex was significantly lower than in controls. The interhemispheric FC reduction can also be observed in the post-stroke subacute and chronic stages. In [31], the FC between M1 and the contralateral cerebral cortex was reduced in stroke patients with unilateral ischemic motor network injury two weeks after onset.
The stroke participants recruited by [57] showed decreased FC between the ipsilesional SMN and the contralesional SMN and auditory network (AN), which supports the earlier finding in [108] that the disruption of interhemispheric interactions between bilateral SMNs may result in motor deficits in patients with chronic stroke. Beyond interhemispheric FC, decreased contralesional M1 FC was found in one study [109], with the lowest FC levels in the contralateral sensorimotor cortex occurring two weeks after stroke. [110] reported reduced within-network FC in the contralesional precentral gyrus within the dorsal sensorimotor network and in the contralesional superior parietal lobe.

_Brain functional connectivity network topology supports the decreased FC_ The brain functional connectivity network (FCN) can be described as a set of nodes (ROIs or independent components) connected by edges (functional connectivity measures). Thus, graph-theoretical analysis of the FCN can provide important information about the topological properties of the brain functional network. When FC is analyzed from the perspective of a brain graph, the various FC network topology measures also corroborate the reductions in FC relative to healthy brains. For the local topological measures of the FC graph, one study found the shortest path length to be lower than in healthy controls in acute ischemic stroke [111]. The clustering coefficient was reported to decrease [112, 113, 114], though [111] suggested that stroke patients maintained the local clustering coefficient. In addition, another local measure, the weighted node degree, which measures the number of connections, has been found to decrease in the contralesional M1 of patients with poor recovery post-stroke [115]. No significant differences in topological measures between patients and controls were reported in other studies [114]. Regarding the global topological measures of the FCN, a reduction in network modularity was observed in an experiment including 25 patients with focal lesions due to stroke [116]. This finding is consistent with the decrease in interhemispheric functional integration. Modularity, measured as the connection density within communities relative to that between communities, reflects the degree of functional integration and segregation [117]. A recent study supported the findings in [118], where significantly lower modularity was found in patients compared with controls, indicating decreased segregation in the FCN. Beyond modularity, other global measures of the FC network also tend to decline. In [114], the FC concordance, which measures network stability over time, was observed to decrease in contrast to intact networks. In addition, small-worldness, which reflects the ability of brain networks to satisfy the needs of both local and global information processing, was significantly lower in patients than in controls two weeks after stroke [119]. There is a great variety of graph-theory-based topology measures, so the network topology patterns investigated vary across studies. In general, the graph topology alteration of the FCN post-stroke can be interpreted through the concept of network randomization [116], the way the network reorganizes itself to adapt to lost function, which also demonstrates the process of neuroplasticity that occurs in the brain post-stroke.
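To illustrate the graph-theoretical measures discussed in this subsection, the following Python sketch thresholds a synthetic FC matrix into a binary graph and computes clustering, efficiency, and modularity with networkx; the threshold value is an arbitrary illustrative choice.

```python
# Minimal sketch of graph-theoretical FCN analysis: threshold an FC matrix
# into a binary graph and compute some of the topology measures above.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 20))            # synthetic regional time series
fc = np.corrcoef(ts.T)
np.fill_diagonal(fc, 0.0)

adj = (np.abs(fc) > 0.1).astype(int)           # binarize by absolute correlation
G = nx.from_numpy_array(adj)

clustering = nx.average_clustering(G)          # local segregation
efficiency = nx.global_efficiency(G)           # global integration
parts = community.greedy_modularity_communities(G)
modularity = community.modularity(G, parts)    # segregation into communities

print(clustering, efficiency, modularity)
```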
### Increased functional connectivity is positively related to recovery

Even though decreased FC is the dominant trend in patients following a stroke in contrast with healthy controls, many findings also include increased FC. Several longitudinal studies have demonstrated that the interhemispheric FC in the SMN first decreases in the early stages after stroke and then increases in the following weeks or months [120, 29, 109]. Other studies support this finding as well. For example, in a systematic study [28], decreased FC was observed in stroke patients between the ipsilesional M1 and the sensorimotor cortex, the occipital cortex, the middle frontal gyrus (MFG), and the posterior parietal cortex, while increased FC was shown between the ipsilesional M1 and the cerebellum, the thalamus, the MFG, and the posterior parietal cortex. Besides, in the study [74], although the FC in the motor area decreased after stroke, the opposite occurred in cognitive networks. In addition, a recent study [83] reported that, compared with healthy groups, patients exhibit significantly increased static FC between a large number of structures, including the ipsilesional M1 and the contralesional precentral gyrus (PrG); the contralesional M1 and the ipsilesional PrG and contralesional precuneus; the ipsilesional MFG and the precuneus, the contralesional cerebellum, the PoCG, and the ipsilesional sub-gyral region; the SMA and the ipsilesional PrG, the frontal-temporal space, and the MFG; and the ipsilesional SMA and the ipsilesional middle temporal gyrus (MTG). The increased FC post-stroke seems to be restricted to specific structures [21], while a highly consistent finding is that the increased FC occurs in connectivity with the cerebellum [28, 114, 121]. This conclusion derives from a potential recovery mechanism, as FC in the cerebellum has been reported to be crucial for recovery [122, 123]. In fact, not limited to the cerebellum, many structures with increased FC correlate with motor recovery. The most commonly mentioned brain region is M1: many studies have reported that FC increases between the ipsilesional M1 and contralesional M1 and between the ipsilesional SMC and contralesional SMC, which is related to better recovery [124, 74, 121]. Besides, Ktena et al. [125] observed increased FC between lesion areas and M1 in the unaffected hemisphere of chronic patients and concluded that this phenomenon reveals a network reorganization process associated with motor recovery. Furthermore, enhanced FC between M1 and the SMA and other motion-related regions such as the dorsal premotor cortex (PMD) has been observed, and it has been argued that this enhancement is a potential mechanism for motor function recovery [125, 126]. In many studies on the prediction of motor recovery or outcomes, increased FC is normally associated with lower severity or a better motor functional outcome. In a study that included 34 patients and healthy controls, the authors found significantly lower interhemispheric FC in stroke patients compared to healthy controls in the first week, supporting the common finding above; after that, however, the FC continually increased until week 12, and correlation analysis showed that the percentage of FC change correlated significantly positively with the improvement in FMA score from week 1 to week 4 [106]. In an early prediction study [127], 37 stroke patients were scanned on day 3 after stroke, and the fMRI data were used to predict 90-day outcomes.
The results showed that patients with good outcomes had higher FC than those with poor outcomes, and adding FC improved the model's accuracy to 94.7%, reflecting that increased FC plays an essential role in motor recovery. Stroke patients at the chronic stage show a similar increase: in a study with a total of 107 participants, compared to patients with a completely paralyzed hand, patients with a partially paralyzed hand had increased FC in the ipsilesional superior temporal gyrus, the ipsilesional middle occipital gyrus, and the contralateral calcarine [57]. This finding is also reinforced in a larger cohort of patients with ischemic stroke at the acute stage: in a study with 85 acute ischemic stroke patients, the FC between the ipsilesional M1 and contralesional PMD in patients with favorable outcomes was significantly greater than in those with unfavourable outcomes [107], demonstrating that increased FC can serve as an independent outcome predictor. In summary, the FC of the motor network is impaired after stroke onset, and decreased FC is a consistent finding, but increased FC can also be observed. The increased FC is a form of neuroplasticity, meaning that the lost FC tends to return toward normal levels. From the perspective of neural activation, the decreased FC implies that existing pathways are disinhibited in the recruitment stage after stroke [20], and the increased FC shows that damaged pathways surrounding the lesion area are newly rebuilt, thus improving motor function.

### Dynamic connectivity analysis

For patients with motor dysfunction following stroke, the number of good examples of DFC analysis is not as high as for SFC. However, the following findings can be concluded from the existing investigations.

#### The temporal variability of FC is altered following stroke

DFC analysis methods resort to the temporal variability of FC to reveal the neural dynamic properties and recovery mechanisms of stroke. The temporal variability between specific regions illustrates the dynamic reconfiguration of the brain system over time in response to ongoing processing and globally reflects the degree of synchronization between functional areas in the brain. The definitions of temporal variability vary across studies. The study [84] calculated the temporal variability between specific regions from the average functional connectivity over different windows. By contrast, the temporal variability of FC in [83] was characterized as the standard deviation of the connectivity time courses at predefined seed regions across a series of windows. In terms of differences, Hu et al. [84] reported significantly reduced temporal variability in stroke patients compared to the healthy group; however, the regions with decreased FC temporal variability differed across post-stroke stages. In the acute stage, the reduced regions covered the primary sensorimotor network and default mode network (DMN), while only the ipsilesional PoCG and ipsilesional anterior cingulate gyrus (ACG) showed a declining trend in the subacute stage. Nevertheless, this finding is incompatible with the finding of study [83], where increased temporal variability in the ipsilesional M1 and contralesional PrG was observed; this increase showed a longitudinal trend exhibited over the stages of stroke. Taking advantage of temporal variability, the relationship between dynamic FC and motor function recovery post-stroke was investigated in both studies.
In [83], the authors detected a significantly negative correlation between FMA scores and FC variability in the ipsilesional M1 and contralesional PrG. In [84], the increased FC variability from the acute to the subacute stage was reported to correlate positively with the increase in FMA. The results across studies thus appear different or, to an extent, opposite. This is not unexpected, and it can be accepted because the subjects, stroke severity, post-stroke stage, brain motor areas, and algorithms for calculating temporal variability differ across investigations.

#### Patients with affected motor function alter their preference in transient brain states of the SMN

The mutual transition of connectivity states in diseases with highly dynamic neural activity abnormalities has been an active research topic in DFC analysis. A so-called connectivity state is generally abstracted from the reoccurring BOLD signal using a sliding window scheme and a clustering algorithm. A succession of highly recurrent FC patterns within an MRI scan can thus vividly exhibit the dynamic transitions between multiple brain states. These dynamic transitions globally reflect the FCN's flexibility and can be utilized to investigate the alteration or adaptation of dynamic interactions between brain functional networks after stroke. The alteration of connectivity states' flexibility and adaptation due to cognitive and psychiatric disorders has been illustrated in other studies, while stroke-induced changes in brain states have rarely been investigated. Recently, the abnormal connectivity states in acute ischemic stroke (AIS) have attracted the attention of researchers.

Table 3: The summary of DFC analysis with fMRI for post-stroke recovery.
In a study which included 31 AIS patients [130], the authors outlined three different SMN connectivity states: the first is characterized by extremely strong intra-domain connectivity and extremely weak inter-domain connectivity; the second has remarkably weak intra-domain connectivity; and the third is a compound state that combines the characteristics of states 1 and 2. The summarized states do not differ much from the states summarized in their further work [132, 131], while the three studies revealed different aspects of post-stroke motor impairment. [130] provides the distinct configurations of the FC connectivity states in stroke patients with various degrees of clinical symptoms: moderately affected patients, for example, have significantly more dwell time in a weakly connected configuration, while severely affected patients prefer to stay in state 1. This finding is consistent with the study [132], which included 41 AIS patients and also demonstrated that the NIHSS significantly correlated with the fraction (the ratio of the time a subject spent in a given state over the scan time) and dwell time (the time a subject spent in a state without switching to another one) of state 1. [131] has the most AIS patients (54) among the three studies; in this study, the authors pay more attention to distinguishing the link between the SMN connectivity measures and the subgroups with or without motor deficits. The results show that embedding the fraction and dwell time into the initial motor-impairment-based model can improve the prediction performance (95% accuracy). Note that this finding does not derive from multiple independent investigations; hence, it needs to be validated in other stroke patient cohorts to test whether the results are reliable and reproducible. Additionally, only the SMN has so far shown this preference shift in transitory brain states; whether the variance is caused by changes in the globally dynamic interplay between distinct functional domains can be further investigated in the future.
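The two state statistics used in these studies, fraction and dwell time, are straightforward to compute from a sequence of per-window state labels, as the following Python sketch shows; the toy state sequence is purely illustrative.

```python
# Minimal sketch of the two DFC state statistics discussed above:
# "fraction" (share of windows spent in a state) and mean "dwell time"
# (consecutive windows spent in a state before switching).
import numpy as np

states = np.array([0, 0, 1, 1, 1, 2, 2, 0, 0, 0, 1])  # toy state sequence per window

def fraction(states, k):
    return np.mean(states == k)

def mean_dwell_time(states, k):
    runs, run = [], 0
    for s in states:
        if s == k:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return np.mean(runs) if runs else 0.0

for k in np.unique(states):
    print(k, fraction(states, k), mean_dwell_time(states, k))
```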
## Discussion

The studies surveyed in this review demonstrate that widespread changes in connectivity can be observed during post-stroke recovery. In the static brain functional network, decreased interhemispheric FC appears to be a common feature of resting-state network reorganization in stroke, accompanied by reduced network efficiency and modularity. Increased FC can also be observed, and a positive correlation exists between the increased FC of the bilateral cerebral hemispheres and the degree of post-stroke functional recovery. On the other hand, DFC analysis reveals that the FC temporal variability tends to decrease and that abnormal connectivity states exist in post-stroke patients with motor impairment. Although DFC analysis is not as mature as SFC in post-stroke investigation, DFC methods reveal the dynamic nature of the brain, bringing an all-new perspective to investigating the recovery mechanism. In the following, we discuss two interesting aspects pertinent to SFC and DFC analysis following stroke: 1. whether static or dynamic functional connectivity can serve as a post-stroke recovery biomarker; 2. methodological considerations relevant to functional connectivity analysis in stroke research.

### Static or dynamic functional connectivity as a recovery biomarker following stroke

Accurate prediction of motor function outcomes and treatment responses after stroke can benefit clinical and research settings, promoting effective rehabilitation care delivery and the stratification of subjects in clinical trials. As a heterogeneous disease, stroke is characterized by varying lesion sizes and locations. Therefore, demographic and clinical variables, such as age, sex, lesion volume, etc., are naturally considered potential factors contributing to post-stroke recovery prediction [133, 134]. Recently, there has been increasing interest in the role of functional connectivity measurements acquired from neuroimaging in predicting recovery performance. Hence, in this section, we discuss whether SFC and DFC measurements can serve as a motor recovery biomarker following a stroke from the perspective of FC applications. An appropriate first step in investigating the role of FC measures in motor recovery after stroke is to examine the strength of the association between connectivity and motor behaviour in different stroke populations. A correlation coefficient of 0.75 or greater usually indicates a strong correlation. Results from cross-sectional studies showed a moderate to strong association between measures of static functional connectivity and motor status after stroke (r = 0.58-0.76) [135, 136, 137]. The number of stroke patients in these studies ranged from 8 to 55, and motor function was measured using the Upper Extremity Fugl-Meyer Assessment (UL-FMA) scores, the Motricity Index, and the Chedoke-McMaster Stroke Assessment. Regarding DFC analysis, the dynamic FC measure of temporal variability demonstrates a significant correlation with UL-FMA scores at the chronic stage after stroke, showing the same effect as static FC analysis [84]. Another DFC study found a negative correlation between the difference in DFC measures in the motor execution network and FMA scores [83], although it did not survive FDR correction. In addition, in a study with 31 acute ischemic stroke patients [130], the dynamic functional connectivity pattern showed significant differences between stroke groups varying in motor status: patients with severely impaired mobility were more likely to show a regionally densely connected, highly segregated pattern, whereas patients with mild motor impairment spent more time in a weakly connected state with reduced segregation. The results from longitudinal studies also appear to underpin FC as a potential biomarker for post-stroke motor recovery. For example, in longitudinal studies of static FC in stroke patients, initial baseline measures of functional connectivity were strongly associated not only with longitudinal assessment scores of motor status [28, 138], but also with changes in motor function recovery over time (motor function improvement, r = 0.32-0.79) [139, 140, 124, 106]. Dynamic analysis of FC at the longitudinal level further demonstrated the potential of FC alteration as a biological marker of rehabilitation (between the bilateral intraparietal lobule and left angular gyrus, r = -0.68) [132]. With support from the literature findings, FC thus appears able to serve as a reliable biomarker for post-stroke motor recovery.
However, caution is advised when assessing the causal relationship between functional connectivity and stroke recovery: sample sizes and statistical power pose challenges to establishing FC as a crucial factor in post-stroke recovery. On the one hand, the sample sizes collected in clinical analyses typically range from 10 to 20 participants, which is particularly sensitive to outliers [141] and vulnerable to effect-size inflation [142], and may not accurately represent the entire group. On the other hand, the handling of covariates, such as age, sex, lesion size/location, baseline motor status/measure, etc., varies across studies, intensifying inconsistencies in how FC is recognized in stroke recovery. Nonetheless, we should not be too pessimistic: although these criticisms are fair, they do not change the strong (or strong-versus-weak) FC effects observed in the recovery process [143]. In the future, longitudinal studies with more samples, or the clinical benefits of functional outcome prediction, may further underpin the role of FC in post-stroke recovery.

### Methodological considerations relevant to functional connectivity analysis in stroke research

Generally, two experimental designs are involved in stroke research with FC analysis: cross-sectional and longitudinal. These two study designs allow investigation of different post-stroke effects on FC. Typically, a cross-sectional study examines the FC difference between stroke patients and healthy controls (between-person effects), while a longitudinal study can examine the FC change in stroke patients over time (within-person effects) [144]. Due to the intensive time and resource demands, the majority of studies rely on cross-sectional designs. However, one of the shortcomings of the cross-sectional design is that the brain-lesion-induced FC difference will be diluted by cohort effects. For instance, since the life backgrounds of participants vary, FC differences between stroke patients and healthy controls will reflect not only the lesion-induced reorganization of neural circuits but also differences in the environments the participants live in. To track within-person FC changes in post-stroke recovery and to investigate the causality between FC and neuroplasticity, a longitudinal study is a potent method that can confirm cross-sectional findings from the time dimension [120, 28, 106]. In cross-sectional studies, both static and dynamic FC analyses are present, while static analysis accounts for the majority of longitudinal studies. Recently, dynamic connectivity analysis has been developed in multiple fields [145, 93, 104, 146, 144]. Since post-stroke recovery is time-dependent, investigating how the neural networks interact dynamically, and to what extent dynamic connectivity patterns support motor function recovery and change, deserves further study. The Methodology section introduced the general pre-processing methods. Although pre-processing steps involve choices between analysis approaches (static vs. dynamic), they vary only mildly across FC analysis investigations for post-stroke research. To our knowledge, no systematic study of the effects of pre-processing choices has been proposed, probably because the sample sizes of stroke patients are not large enough to support examining the effect of pre-processing steps on the lesion-induced differences in functional connectivity. In terms of the brain parcellation methods they use, SFC and DFC studies show apparent preferences: post-stroke studies with SFC analysis commonly use the atlas-based method to parcellate the whole brain.
Such brain maps, generated from massive brain investigations, can provide detailed brain information, allowing researchers to focus on the networks or regions of interest. Besides, one of the biggest benefits of using the atlas-based method is that it provides a way to compare studies and obtain fairly consistent findings across them; for example, that lesions interrupt remote network connections and that interhemispheric FC decreases. By contrast, studies with DFC tend to utilize ICA, since DFC is more sensitive to noise due to the increased temporal resolution. The results of DFC approaches may therefore vary depending on the patient cohorts, because ICA is a data-driven approach that may isolate components belonging to different networks. Moreover, compared with the atlas-based method, ICA cannot examine changes in interhemispheric connections. Hence, determining how connections between hemispheres interact dynamically at a fine time scale may require combining SFC and DFC results or new intervention approaches.

## Conclusion

fMRI-based functional connectivity analysis for post-stroke motor dysfunction patients has two branches: static functional connectivity analysis (SFC) and dynamic functional connectivity analysis (DFC). While SFC assumes that the FC/FNC is stationary during the fMRI scan, DFC maintains that the FC/FNC fluctuates even over short periods of time and follows specific coupling patterns. A great many SFC and DFC analysis methods have been developed and successfully applied to investigate the alteration of functional interaction or communication behind post-stroke motor function deficits and recovery. In this context, this review summarized the current advances in SFC and DFC approaches and the latest findings on their application to post-stroke motor function research. The studies included in this review demonstrate that SFC has been the predominant post-stroke functional connectivity analysis method in the last five years. The results from SFC show that there is a reduction in FC between motor regions after a stroke and that a rise in FC is highly associated with functional recovery. Meanwhile, DFC is developing rapidly, and the application of DFC methods to post-stroke motor function impairment and recovery has shown its potential. With more DFC methods created and utilized to investigate the abnormal motor FC/FNC dynamics, DFC is expected to have far-reaching effects in revealing the neural reorganization underlying stroke recovery and to underpin understanding of the recovery mechanism. Besides, based on consideration of the previous studies, the recommendations of this review for future studies are:

* Both SFC and DFC methodologies need to be validated on large cohorts to improve the reliability and robustness of their statistical results.
* As a potent method to examine the within-person effects of stroke lesions on FC/FNC dynamics, DFC-based longitudinal post-stroke investigations should be greatly encouraged in the future.
* The quantitative relationship between FC/FCN alteration and motor function improvement should be thoroughly investigated, particularly for dynamic FC/FNC alteration.
* Post-stroke motor function research should not be limited to brain motor function areas; the interplay between the motor network and other functional networks, such as the cognitive network, should be considered.
## Competing interests

The authors declare that they have no competing interests.

## Author details

\({}^{1}\)Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, P.R. China. \({}^{2}\)School of Engineering, RMIT University, Melbourne, Australia. \({}^{3}\)School of Engineering and Physical Sciences, The University of Birmingham, Birmingham, UK.
2309.03535
Feature Enhancer Segmentation Network (FES-Net) for Vessel Segmentation
Diseases such as diabetic retinopathy and age-related macular degeneration pose a significant risk to vision, highlighting the importance of precise segmentation of retinal vessels for the tracking and diagnosis of progression. However, existing vessel segmentation methods that heavily rely on encoder-decoder structures struggle to capture contextual information about retinal vessel configurations, leading to challenges in reconciling semantic disparities between encoder and decoder features. To address this, we propose a novel feature enhancement segmentation network (FES-Net) that achieves accurate pixel-wise segmentation without requiring additional image enhancement steps. FES-Net directly processes the input image and utilizes four prompt convolutional blocks (PCBs) during downsampling, complemented by a shallow upsampling approach to generate a binary mask for each class. We evaluate the performance of FES-Net on four publicly available state-of-the-art datasets: DRIVE, STARE, CHASE, and HRF. The evaluation results clearly demonstrate the superior performance of FES-Net compared to other competitive approaches documented in the existing literature.
Tariq M. Khan, Muhammad Arsalan, Shahzaib Iqbal, Imran Razzak, Erik Meijering
2023-09-07T07:46:46Z
http://arxiv.org/abs/2309.03535v1
# Feature Enhancer Segmentation Network (FES-Net) for Vessel Segmentation ###### Abstract Diseases such as diabetic retinopathy and age-related macular degeneration pose a significant risk to vision, highlighting the importance of precise segmentation of retinal vessels for the tracking and diagnosis of progression. However, existing vessel segmentation methods that heavily rely on encoder-decoder structures struggle to capture contextual information about retinal vessel configurations, leading to challenges in reconciling semantic disparities between encoder and decoder features. To address this, we propose a novel feature enhancement segmentation network (FES-Net) that achieves accurate pixel-wise segmentation without requiring additional image enhancement steps. FES-Net directly processes the input image and utilizes four prompt convolutional blocks (PCBs) during downsampling, complemented by a shallow upsampling approach to generate a binary mask for each class. We evaluate the performance of FES-Net on four publicly available state-of-the-art datasets: DRIVE, STARE, CHASE, and HRF. The evaluation results clearly demonstrate the superior performance of FES-Net compared to other competitive approaches documented in the existing literature. Medical Image Segmentation, Lightweight Deep Networks, Retinal Blood Vessel Segmentation, Diabetic Retinopathy, Convolutional Neural Networks. ## I Introduction Computer-aided wide-scale screening presents a feasible approach to the detection of diseases in people with diabetes mellitus, as it has the potential to augment scarce healthcare resources and support healthcare professionals [1, 2]. Previous research [3, 4, 5] has underscored the importance of the size and structure of the retinal vessels in the diagnosis and prediction of the prognosis of diabetic retinopathy. The noticeable alterations in vessel dimensions act as robust markers of the severity of the disease and could be used to forecast the potential future progression of the condition, as supported by the conclusions of these studies [3]. In computer-aided diagnosis (CAD) systems, segmentation of retinal blood vessels is crucial and time-consuming [6]. This is due to the fact that many retinal abnormalities, such as hemorrhages and microaneurysms, typical manifestations of diabetic retinopathy, are often observed close to these vessels [7]. Furthermore, efficient segmentation of retinal vessels can allow estimation of the topology of vessel maps [8, 9]. Segmenting retinal blood vessels is a challenging task in retinal image analysis due to the presence of numerous obstacles. These include low contrast, uneven intensity, and varying thickness between primary vessels and capillaries present in images of the retinal fundus. Additionally, the presence of exudates and lesions further complicates the segmentation process. To address these challenges, numerous studies have employed supervised or unsupervised algorithms and computer vision techniques to achieve accurate and automatic segmentation [10, 11, 12, 13, 14]. The latest research points out that deep learning architectures have better performance compared to other techniques [15, 16, 17]. Retinal vessel segmentation has been aided by a number of approaches based on deep learning. U-Net was proposed for medical image segmentation, but it identifies false boundaries along with blood vessels in retinal images [18].
Gu _et al._ put forth a context encoder network for blood vessel segmentation that captures high-level information while preserving spatial detail [19]. Yan _et al._ enhanced U-Net's performance by introducing a joint loss function [20]. Wang _et al._ introduced DEU-Net, which uses a fusion module function to merge a spatial path with a large kernel, thus preserving spatial data while capturing semantic details [21]. Fu _et al._ presented DeepVessel, which uses a CNN featuring a multiscale and multilevel layer to construct a dense hierarchical representation [22]. Additionally, they incorporated a conditional random field to model extended interactions among pixels [22]. Notwithstanding the effectiveness of these approaches, they overlook the computational cost needed to customize the network for use with the limited resources of embedded systems. The proposed FES-Net aims to address some of the difficulties encountered in retinal vessel segmentation by enhancing features of the input image directly and bypassing the need for conventional image enhancement preprocessing. The network employs four convolutional blocks on the downsampling end and a relatively simpler upsampling end to generate an output binary mask for each category. To keep the overall computational cost low, only a limited number of transposed convolutions are used in the upsampling procedure. The final stage assigns a label to each vessel pixel using the pixel classification layer. ## II Related Work In recent years, lightweight segmentation networks have gained attention in image segmentation challenges. Seg-NAS3D [23] introduced a network search to optimize the 3D segmentation structure and significantly reduce the complexity of the model. RefineNet [24] was used as the lightweight backbone network by Nekrasov _et al._[25] to improve the performance of the model. IC-Net [26] is a real-time semantic segmentation network that uses an image cascade with branch training to accelerate model convergence. BiSeNet [27] is a lightweight model based on a dual-branch topology, employing multiple paths to refine spatial information. DMFNet [28] partitions channels into numerous groups and employs a weighted three-dimensional dilated convolution. This approach ultimately results in a decrease in the parameter count while simultaneously enhancing the accuracy of inference. Xception [29] and MobileNet [30] use depth-wise separable convolution to improve the inference speed. The Dense-Inception U-Net [31] uses a compact encoder with a lightweight Inception backbone and a dense module to capture high-level semantic information. This architecture is designed for medical image segmentation tasks. ShuffleNet [32, 33] applies group convolution and channel shuffling techniques to reduce computational expenses compared to more complex models. Iqbal _et al._[34] explored the development of lightweight segmentation networks specifically for medical images. However, creating networks with low model complexity, high inference speed, and excellent performance still poses a challenge in most medical image segmentation tasks. nnU-Net [35] improves network adaptability by preprocessing data and postprocessing segmentation results, but model parameters increase with this approach. A lightweight V-Net [36] uses a convolutional inverted residual bottleneck (IRB) block. To ensure segmentation accuracy and use fewer parameters, depth-wise convolution and point-wise convolution are employed. However, it does not speed up the inference process.
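Several of the lightweight designs above (Xception, MobileNet, the lightweight V-Net) hinge on factoring a standard convolution into a depth-wise and a point-wise step. The following sketch, with illustrative channel sizes of our own choosing rather than any configuration from the cited papers, shows why this factorization shrinks the parameter count:

```python
import torch.nn as nn

c_in, c_out, k = 64, 128, 3  # illustrative sizes, not from the cited papers

# Standard convolution: k*k*c_in*c_out weights.
standard = nn.Conv2d(c_in, c_out, kernel_size=k, padding=1, bias=False)

# Depth-wise separable convolution: a k x k filter applied per input channel
# (groups=c_in), followed by a 1x1 point-wise convolution that mixes channels.
separable = nn.Sequential(
    nn.Conv2d(c_in, c_in, kernel_size=k, padding=1, groups=c_in, bias=False),
    nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),
)

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(standard))   # 3*3*64*128 = 73728
print(n_params(separable))  # 3*3*64 + 64*128 = 8768, roughly 8x fewer
```

The saving grows with the number of output channels, which is why this factorization recurs throughout the mobile-oriented architectures cited above.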
Furthermore, Tarasiewicz _et al._[37] developed Lightweight U-Nets that can accurately delineate brain tumors from multimodal magnetic resonance imaging and trained several skinny networks in all image planes. PyConvU-Net [38] increases segmentation accuracy while using fewer parameters by replacing all traditional convolutional layers of U-Net with pyramidal convolution. However, its inference speed is low. G-Net Light [34], PLVSNet [39], and MKIS-Net [40] are effective CNN architectures that segment retinal blood vessels while being lightweight. ## III Proposed Method Conventional semantic segmentation networks such as SegNet [41], U-Net [18], and DeepLab [42] are deep and have many trainable parameters. These networks are based on encoder-decoder designs, where the decoder mirrors the encoder, which means that the network has double the number of layers. Our proposed FES-Net, on the other hand, utilizes shallow upsampling that helps to reduce the depth of the network, resulting in parameter reduction. Fig. 1: Architecture of the proposed FES-Net. BN = batch normalization. DW-Sep-Conv = depth-wise separable convolution. FEB = feature enhancement block. ReLU = rectified linear unit activation. Fig. 2: Connectivity schematic of prompt convolutional blocks (PCBs) used in the proposed FES-Net. This network is based on four prompt convolutional blocks (PCBs), which combine general convolutions and separable convolutions to reduce the cost of the network (Figs. 1 and 2). Each PCB in FES-Net comprises a \(3\times 3\) general convolution, a \(3\times 3\) separable convolution, and one \(1\times 1\) general convolution (Fig. 2). These convolutions are directly connected to a strided (dilated) convolution and employ batch normalization and rectified linear units (ReLU) for the combination of their outputs. The depth-wise concatenation layer is used to merge the outputs of these three convolutions, and FES-Net contains four such PCBs. The main feature of FES-Net is the enhancement of spatial features achieved through the feature enhancement block (FEB). FEB consists of a shallow convolutional block that preserves low-level features, does not significantly downsample them, and provides them at the end of the network. It uses four convolutions with a maximum depth of 16 channels only, which minimizes the network's parameters. FEB enhances spatial information on the initial layers of the network and provides rich features to the final stage of the network. In the FES-Net architecture with FEB (Fig. 1), the upsampling block receives the feature \(F_{D}\) from the downsampling side and, from the shallow upsampling, outputs the feature \(F_{US}\), as is common in semantic segmentation networks. Continuous downsampling and multiple convolutional layers result in feature deterioration, so \(F_{US}\) alone cannot provide a better true positive rate. Therefore, FEB takes the feature \(F_{i}\) from the initial layers of the network and, after the shallow convolutional operation, generates the feature \(F_{E}\), which contains low-level information. The two features \(F_{US}\) and \(F_{E}\) are combined to produce the feature \(S_{C}\): \[S_{C}=F_{US}\oplus F_{E} \tag{1}\] where the symbol \(\oplus\) denotes depth-wise concatenation of the two features, namely the feature from the upsampling side and the feature from the FEB. The resulting feature \(S_{C}\) contains rich edge information, resulting in higher sensitivity.
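Since the paper describes the PCB only schematically (Fig. 2), the following PyTorch sketch records one plausible reading of it and of the depth-wise concatenation in Eq. (1); the branch widths, the stride of the downsampling convolution, and the channel bookkeeping are our assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class PCB(nn.Module):
    """One plausible reading of a prompt convolutional block (Fig. 2):
    a 3x3 convolution, a 3x3 depth-wise separable convolution and a 1x1
    convolution in parallel, each with BN + ReLU, merged by depth-wise
    (channel) concatenation and fed to a strided convolution."""
    def __init__(self, c_in, c_branch):
        super().__init__()
        def bn_relu(c):
            return nn.Sequential(nn.BatchNorm2d(c), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(
            nn.Conv2d(c_in, c_branch, 3, padding=1, bias=False), bn_relu(c_branch))
        self.sep3 = nn.Sequential(  # depth-wise 3x3 followed by point-wise 1x1
            nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False),
            nn.Conv2d(c_in, c_branch, 1, bias=False), bn_relu(c_branch))
        self.conv1 = nn.Sequential(
            nn.Conv2d(c_in, c_branch, 1, bias=False), bn_relu(c_branch))
        # strided convolution halving the spatial resolution, as in Fig. 2
        self.down = nn.Conv2d(3 * c_branch, 3 * c_branch, 3, stride=2,
                              padding=1, bias=False)

    def forward(self, x):
        # depth-wise concatenation of the three branches, cf. Eq. (1)
        merged = torch.cat([self.conv3(x), self.sep3(x), self.conv1(x)], dim=1)
        return self.down(merged)

block = PCB(c_in=16, c_branch=16)
y = block(torch.randn(1, 16, 64, 64))   # -> shape (1, 48, 32, 32)
print(y.shape, sum(p.numel() for p in block.parameters()))
```

On any such reading, `sum(p.numel() for p in model.parameters())` is also the natural way to check a parameter budget such as the roughly 1M trainable parameters reported for the full network in Sections IV-V.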
Finally, at the end of the network, FES-Net uses the pixel classification layer to assign a predicted label to each pixel of the image. ## IV Results and Discussion The efficacy of the proposed FES-Net for segmenting retinal blood vessels was evaluated on publicly available datasets and compared with previously published state-of-the-art methods. ### _Datasets_ **DRIVE:** The DRIVE [64] dataset has 40 color retinal images divided into 20 training images and 20 testing images, with a resolution of \(584\times 565\) pixels. It includes patients of different ages diagnosed with diabetes, with seven images representing the early stages of mild diabetic retinopathy. Manual pixel annotations are available for vessel and background segmentation in both training and testing images. **STARE:** The STARE [65] dataset consists of 20 color fundus images with \(35^{\circ}\) field of view (FOV) and a resolution of \(700\times 605\) pixels. The ground truth is established using two manual segmentation options, and 10 images are used for training, while the remaining 10 are used for testing. **CHASE:** The CHASE [1] dataset consists of 28 color images of 14 British schoolchildren. Each image has a resolution of \(999\times 996\) pixels and is centered on the optical disc. Two manual segmentation maps are provided for ground truth. The dataset does not have a specific separation of training and testing sets. For our experiments, we used the initial 20 images for training and the final 8 images for testing. **HRF:** The HRF dataset [66] includes 45 images, consisting of 15 images each from healthy individuals, patients with glaucoma, and patients with diabetic retinopathy. The images have a resolution of \(3504\times 2336\) pixels and a viewing angle of \(60^{\circ}\). Ground truth segmentation was performed by a team \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{5}{c}{**Performance (\%)**} \\ \cline{2-6} & **Se (\(\uparrow\))** & **Sp (\(\uparrow\))** & **Acc (\(\uparrow\))** & **AUC (\(\uparrow\))** & **F1 (\(\uparrow\))** \\ \hline HED [43] & 80.76 & 98.22 & 96.41 & 98.24 & 82.68 \\ Orlando et al. [45] & 76.80 & 97.38 & 95.19 & 95.70 & - \\ JL-UNet [20] & 75.81 & 98.46 & 96.12 & 98.01 & - \\ Att UNet [46] & 77.09 & 98.48 & 96.33 & 97.00 & - \\ H-DenseUNet [47] & 78.59 & 98.42 & 96.44 & 98.47 & 82.32 \\ Three-stage CNN [48] & 77.35 & 98.57 & 96.38 & 98.33 & - \\ BTS-DSN [49] & 82.01 & 98.28 & 96.60 & 98.72 & 83.62 \\ DUNet [50] & 78.92 & 98.16 & 96.34 & 98.43 & 82.30 \\ CC-Net [53] & 80.67 & 98.16 & 96.32 & 98.33 & 81.36 \\ OCE-Net [56] & 80.12 & 98.65 & 96.72 & **98.76** & 83.41 \\ Wave-Net [57] & 79.02 & 98.36 & 96.41 & - & 81.40 \\ Lightyes [58] & - & - & - & 98.29 & - \\ G-Net Light [34] & 81.70 & 98.53 & 97.30 & - & 81.78 \\ \hline **Proposed FES-Net** & **82.97** & **98.91** & **97.33** & 97.17 & **83.99** \\ \hline \hline \end{tabular} \end{table} TABLE II: Performance comparison of the proposed FES-Net and several alternative methods on the STARE dataset. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{5}{c}{**Performance (\%)**} \\ \cline{2-6} & **Se (\(\uparrow\))** & **Sp (\(\uparrow\))** & **Acc (\(\uparrow\))** & **AUC (\(\uparrow\))** & **F1 (\(\uparrow\))** \\ \hline HED [43] & 80.76 & 98.22 & 96.41 & 98.24 & 82.68 \\ Orlando et al. 
[45] & 76.80 & 97.38 & 95.19 & 95.70 & - \\ JL-UNet [20] & 75.81 & 98.46 & 96.12 & 98.01 & - \\ Att UNet [46] & 77.09 & 98.48 & 96.33 & 97.00 & - \\ H-DenseUNet [47] & 78.59 & 98.42 & 96.44 & 98.74 & 82.32 \\ Three-stage CNN [48] & 77.35 & 98.57 & 96.38 & 98.33 & - \\ BTS-DSN [49] & 82.01 & 98.28 & 96.60 & 98.72 & 83.62 \\ DUNet [50] & 78.92 & 98.16 & 96.34 & 98.43 & 82.30 \\ CC-Net [53] & 80.67 & 98.16 & 96.32 & 98.33 & 81.36 \\ OCE-Net [56] & 80.12 & 98.65 & 96.72 & **98.76** & 83.41 \\ Wave-Net [57] & 79.02 & 98.36 & 96.41 & - & 81.40 \\ Lightyes [58] & - & - & - & 98.29 & - \\ G-Net Light [34] & 81.70 & 98.53 & 97.30 & - & 81.78 \\ \hline **Proposed FES-Net** & **82.97** & **98.91** & **97.33** & 97.17 & **83.99** \\ \hline \hline \end{tabular} \end{table} TABLE I: Performance comparison of the proposed FES-Net and several alternative methods on the DRIVE dataset. of specialists for each image. ### _Experimental Setup_ For network training, the ADAM optimizer was used with a starting learning rate of 0.00002 and a decay rate of 0.90. Before starting the training, we resized the images for each dataset to a standard width of 640 pixels and normalized each image based on the z-score statistic. To enrich the dataset, we applied contrast enhancement, brightness adjustment, and random image flipping and rotation (ranging between 1-360\({}^{\circ}\)). These techniques were employed to synthetically expand the number of training images. ### _Evaluation Criteria_ Vessel segmentation maps exhibit a distinctive characteristic compared to other types of segmentation maps, as they adopt a binary representation, where each pixel is assigned exclusively to either the vessel or the background class. Accomplished ophthalmologists with specialized expertise manually annotated the "ground truth" labels in publicly accessible datasets. These annotations serve as a reference standard against which the performance of generated segmentation results can be assessed. The process involves the classification of individual image pixels into either the vascular or nonvascular category. For each output image, there are four possible outcomes that can occur: true positive (\(T_{P}\)), which represents pixels correctly identified as vessels; true negative (\(T_{N}\)), which represents pixels correctly identified as non-vessels; false positive (\(F_{P}\)), which represents pixels mistakenly identified as vessels; and false negative (\(F_{N}\)), which represents pixels mistakenly identified as non-vessels. To allow comparisons between different methods, five widely used parameters are used in the field: sensitivity (\(S_{e}\)), specificity (\(S_{p}\)), accuracy (\(A_{cc}\)), area under the curve (_AUC_), and the \(F_{1}\) score. These measures assess different aspects of the performance of a method and their mathematical formulations are as follows: \[S_{e}=\frac{T_{P}}{T_{P}+F_{N}}, \tag{2}\] \[S_{p}=\frac{T_{N}}{T_{N}+F_{P}}, \tag{3}\] \[A_{cc}=\frac{T_{P}+T_{N}}{T_{P}+T_{N}+F_{P}+F_{N}}, \tag{4}\] \[\text{{AUC}}=1-\frac{1}{2}\left(\frac{F_{P}}{F_{P}+T_{N}}+\frac{F_{N}}{F_{N}+T _{P}}\right), \tag{5}\] \[F_{1}=\frac{2\times T_{P}}{(2\times T_{P})+F_{P}+F_{N}}. \tag{6}\] ### _Comparison and Experiments_ Here we present the results obtained with our proposed network, as well as a range of alternate approaches, on the DRIVE, STARE, CHASE, and HRF datasets. 
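The five measures in Eqs. (2)-(6) are simple functions of the pixel-level confusion counts. As a reference, here is a minimal NumPy sketch of ours (not the authors' evaluation code) computing them from binary masks:

```python
import numpy as np

def vessel_metrics(pred, gt):
    """Se, Sp, Acc, AUC and F1 of Eqs. (2)-(6), from binary prediction
    and ground-truth masks (arrays of 0s and 1s of equal shape)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # vessel pixels correctly detected
    tn = np.sum(~pred & ~gt)    # background correctly rejected
    fp = np.sum(pred & ~gt)     # background mistaken for vessel
    fn = np.sum(~pred & gt)     # vessel pixels missed
    se  = tp / (tp + fn)
    sp  = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    auc = 1 - 0.5 * (fp / (fp + tn) + fn / (fn + tp))   # Eq. (5)
    f1  = 2 * tp / (2 * tp + fp + fn)
    return se, sp, acc, auc, f1

# toy 2x3 masks
pred = np.array([[1, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 0, 1], [0, 0, 1]])
print(vessel_metrics(pred, gt))  # (0.333, 0.667, 0.5, 0.5, 0.4)
```

Note that, as defined, Eq. (5) reduces algebraically to \((S_{e}+S_{p})/2\) at the single operating threshold, i.e. a balanced accuracy rather than the area under a threshold-swept ROC curve.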
We first provide a qualitative and quantitative evaluation of a comprehensive spectrum of unsupervised and supervised techniques commonly employed in the field. This evaluation aims to establish a baseline performance and to compare the effectiveness of various existing methods. Subsequently, we perform an exhaustive comparison between U-Net [18] and SegNet [41], which are widely recognized deep learning architectures for semantic segmentation tasks. The objective of this section is to present a self-contained comparison, while also showcasing the performance of our proposed network against these well-established benchmarks. The quantitative results of our proposed FES-Net and other methods (Tables I-IV) provide evidence that the proposed FES-Net consistently outperforms other techniques in terms of \(A_{cc}\). Moreover, the results demonstrate that our proposed network obtains competitive performance in terms of \(S_{e}\), \(S_{p}\) and _AUC_, particularly for the CHASE dataset, and consistently ranks among the top-performing models. In particular, there is no discernible pattern among alternative methods in terms of optimal \(S_{e}\), \(S_{p}\) and _AUC_, suggesting that the proposed FES-Net exhibits unique strengths in accurately segmenting retinal vessels across different datasets. In addition to the quantitative analysis, we present a qualitative comparison of the results obtained by FES-Net and other methods on the DRIVE, STARE, and CHASE datasets. The results obtained on the DRIVE dataset (Fig. 3) demonstrate that FES-Net significantly reduces false positives in small \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**Performance (\%)**} \\ \cline{2-6} & **Se (\(\uparrow\))** & **Sp (\(\uparrow\))** & **Acc (\(\uparrow\))** & **AUC (\(\uparrow\))** & **F1 (\(\uparrow\))** \\ \hline U-Net [18] & - & - & 95.87 & 83.05 & 72.39 \\ CS-Net [59] & - & - & 95.66 & 82.32 & 71.04 \\ SA-Unet [60] & - & - & 95.64 & 82.70 & 71.18 \\ SCS-Net [61] & - & - & 95.65 & 81.80 & 70.66 \\ CogSeg [62] & - & - & 96.22 & 84.31 & 74.75 \\ SuperVessel [63] & - & - & 96.54 & 85.06 & 76.74 \\ \hline **Proposed FES-Net** & **80.26** & **98.37** & **97.05** & **88.55** & **79.98** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Performance comparison of the proposed FES-Net and several alternative methods on the HRF dataset. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{5}{c}{**Performance (\%)**} \\ \cline{2-6} & **Se (\(\uparrow\))** & **Sp (\(\uparrow\))** & **Acc (\(\uparrow\))** & **AUC (\(\uparrow\))** & **F1 (\(\uparrow\))** \\ \hline HED [43] & 75.16 & 98.05 & 95.97 & 97.96 & 78.15 \\ DeepVessel [22] & 74.12 & 97.01 & 96.09 & 97.90 & - \\ Orlando et al.
[45] & 75.65 & 96.55 & 94.67 & 94.78 & - \\ JL-UNet [20] & 76.33 & 98.09 & 96.10 & 97.81 & - \\ Att UNet [46] & 80.10 & 98.04 & 96.42 & 98.40 & 80.12 \\ H-DenseUNet [47] & 78.93 & 97.92 & 96.11 & 98.35 & 79.01 \\ Three-stage CNN [48] & 76.41 & 98.06 & 96.07 & 97.76 & - \\ BTS-DSN [49] & 78.88 & 90.1 & 96.27 & 98.40 & 79.83 \\ DUNet [50] & 77.35 & 98.01 & 96.18 & 98.39 & 79.32 \\ M2U-Net [51] & - & - & 97.03 & 96.66 & 80.06 \\ OCE-Net [56] & 81.38 & 98.24 & 96.78 & **98.72** & 81.96 \\ Wave-Net [57] & 82.83 & 98.21 & 96.64 & - & 83.49 \\ LightPSes [58] & - & - & - & 98.20 & - \\ G-Net Light [34] & 82.10 & 98.38 & 97.26 & - & 80.48 \\ \hline **Proposed FES-Net** & **85.95** & **98.88** & **97.55** & 98.61 & **84.88** \\ \hline \hline \end{tabular} \end{table} TABLE III: Performance comparison of the proposed FES-Net and several alternative methods on the CHASE dataset. vessels compared to current methods. For example, U-Net variants struggle to accurately delineate vessel boundaries, resulting in a higher number of false positives, while SegNet [41] tends to generate false tiny vessels in most images, and BCD-Unet [67] appears to overlook crucial information about vessel structures, leading to suboptimal segmentation performance. In contrast, FES-Net effectively captures this information while minimizing the generation of false vessel information, resulting in more accurate segmentations. Alternative methods tend to produce more false positives when applied to the STARE dataset (Fig. 4), particularly around the retinal boundaries, optic nerves, and small vessels. This may be attributed to the challenges posed by the complex retinal structures and image artifacts present in this dataset. The proposed FES-Net method proves to be more robust against these artifacts, preserving the fine details of the vessel structures while maintaining a low rate of false positives. These findings indicate that FES-Net is capable of capturing the subtle characteristics of retinal vessels in challenging scenarios, showcasing its efficacy in this domain. Similar observations are made when applying FES-Net to the CHASE dataset (Fig. 5). Despite the presence of various challenges, such as image quality variations and vessel abnormalities, FES-Net consistently achieves accurate vessel segmentation. The proposed method effectively suppresses false positives while preserving the true vessel structures, even in regions with low contrast or overlapping vessel patterns. These results further establish the robustness and efficacy of FES-Net in a variety of datasets and image conditions. In general, the comprehensive evaluation demonstrates the superiority of our proposed FES-Net method in terms of accuracy and robustness compared to the alternative methods examined. Quantitative analysis reveals consistent high performance, while visual comparisons highlight the ability of FES-Net to accurately capture retinal vessel structures with minimal false positives. These results provide solid evidence for the efficacy of our proposed method for retinal vessel segmentation tasks and its potential to help diagnose and monitor retinal diseases. ### _Discussion_ There are notable architectural distinctions between our proposed network and the alternative models discussed above, which merit attention. First, it is important to emphasize that Fig. 4: Segmentation results of FES-Net and comparative methods on representative test images of the STARE dataset. 
From left to right: test image (#2 in the top row and #3 in the bottom row), ground truth segmentation, and the segmentation results of BCD-Unet, MultiResNet, SegNet, Unet++, and our proposed FES-Net, respectively. True positive pixels are shown in green, false positives in red, and false negatives in blue. Fig. 3: Segmentation results of FES-Net and comparative methods on representative test images of the DRIVE dataset. From left to right: test image (#1 in the top row and #2 in the bottom row), ground truth segmentation, and the segmentation results of BCD-Unet, MultiResNet, SegNet, Unet++, and our proposed FES-Net, respectively. True positive pixels are shown in green, false positives in red, and false negatives in blue. FES-Net consistently outperforms the alternatives, with a substantially reduced number of trainable parameters. Our network is designed to achieve a balance between performance and cost-effectiveness. Specifically, it consists of 1 million (M) trainable parameters, while, for example, MultiResUNet [68] has about 7M and VesselNet [69] about 9M parameters (Table V). In recent research, modifications to the U-Net and SegNet architectures have been introduced, leading to outstanding performance in the segmentation of retinal vessels [67, 70]. However, it is crucial to emphasize that these methods often involve a substantially larger number of trainable parameters, typically an order of magnitude larger, in comparison to our proposed network. This suggests the potential of our approach to achieve comparable results with fewer parameters, which is promising from a computational perspective. Furthermore, our experiments were conducted on publicly available standard datasets, using widely adopted performance metrics. U-Net [18] and SegNet [41] are established benchmark methods in the field, and our proposed method consistently demonstrates strong competitiveness against these benchmarks, as evidenced by the results obtained by FES-Net. In general, our proposed network successfully balances performance and computational efficiency, delivering exceptional results with notably fewer trainable parameters compared to existing methods. This serves as compelling evidence for the effectiveness and potential of our approach for retinal vessel segmentation tasks. ## V Conclusions In this study, a novel feature enhancement network specifically designed for the segmentation of retinal blood vessels is proposed, taking into account the computational demands of deploying the network on resource-limited devices such as smartphones. Significantly, our segmentation network achieves a remarkable decrease in the overall parameter count when compared to cutting-edge segmentation networks documented in the literature. It consists of only approximately 1 million parameters. We have presented extensive experiments and comparative analyses of different alternative methods using four publicly accessible datasets. The results provide ample evidence of the robustness, competitiveness, and effectiveness of our proposed network.
2309.09597
Some remarks on $p$-median problem
It is shown that, for a given Banach space $X$ and a weakly $\mathcal{K}$-analytic subspace $Y$, $L_p(\mu,Y)$ is $p$-simultaneously proximinal in $L_p(\mu,X)$ whenever $Y$ is $p$-simultaneously proximinal in $X$.
Tijani Pakhrou
2023-09-18T09:08:00Z
http://arxiv.org/abs/2309.09597v1
# Some remarks on \(p\)-median problem ###### Abstract It is shown that, for a given Banach space \(X\) and a weakly \(\mathcal{K}\)-analytic subspace \(Y\), \(L_{p}(\mu,Y)\) is \(p\)-simultaneously proximinal in \(L_{p}(\mu,X)\) whenever \(Y\) is \(p\)-simultaneously proximinal in \(X\). **Mathematics Subject Classification (2020)**. 41A28, 41.40. **Keywords:** Simultaneous approximation, Best approximation. ## 1 Introduction Throughout this paper, \((X,\|\cdot\|)\) is a real Banach space, \((\Omega,\Sigma,\mu)\) a complete probability space, and we assume \(1\leq p<\infty\). We denote by \(L_{p}(\mu,X)\) the Banach space of all Bochner \(p\)-integrable functions defined on \(\Omega\) with values in \(X\), endowed with the usual \(p\)-norm (Diestel and Uhl, 1977). Let \(Y\) be a subset of \(X\) and \(n\) a positive integer. Let \(A=\{a_{1},\ldots,a_{n}\}\subset X\) and \(m\in[1,\infty)\); we say that \(y_{0}\in Y\) is a _relative \(m\)-median of \(A\) in \(Y\)_ or _relative \(m\)-center of \(A\) in \(Y\)_ if \[\sum_{i=1}^{n}\|a_{i}-y_{0}\|^{m}\leq\sum_{i=1}^{n}\|a_{i}-y\|^{m},\ \ \forall\,y\in Y.\] We denote by \(Z_{Y}^{m}(A)\) the set of relative \(m\)-_centers_ of \(A\) in \(Y\). When \(Y=X\), then \(Z^{m}(A)\) denotes the set of \(m\)-centers of \(A\). Note that when \(A\) has one or two points, the \(m\)-centers of \(A\) are trivial. Indeed, if we take only one point, obviously its \(m\)-center is the point itself. If we take a couple of points, it is easy to prove that their \(m\)-centers always lie in their convex hull, i.e. the segment defined by these two points. Moreover, if \(X\) has dimension not exceeding \(2\), by the Helly theorem (Mendoza and Pakhrou, 2003) we have \[Z^{m}(\{a_{1},\ldots,a_{n}\})\cap\mathrm{conv}(\{a_{1},\ldots,a_{n}\})\neq\emptyset,\] for every \(a_{1},\ldots,a_{n}\in X\). However, if we consider a set of three points in a space of dimension greater than two, we lose information about the existence and location of \(m\)-centers. If every \(n\)-tuple of vectors \(a_{1},\ldots,a_{n}\in X\) admits a relative \(m\)-center in \(Y\), then \(Y\) is said to be \(m\)_-simultaneously proximinal in \(X\)_. Of course, for \(n=1\) the notion of relative \(m\)-center is just that of _best approximation_ and the concept of \(m\)-simultaneous proximinality coincides with _proximinality_ (Mendoza, 1998). One of the questions studied about the relative \(m\)-center is the type of subspaces that are \(m\)-simultaneously proximinal in a Banach space. Natural subspaces of \(L_{p}(\mu,X)\) are either of the kind \(L_{p}(\mu,Y)\), where \(Y\) is a closed subspace of \(X\), or of the kind \(L_{p}(\mu_{0},X)\), where \(\mu_{0}\) is the restriction of \(\mu\) to a certain sub-\(\sigma\)-algebra \(\Sigma_{0}\) of \(\Sigma\). The main problem we present in this paper is: _Let \(1\leq p<+\infty\) and \(1\leq m<+\infty\). If \(Y\) is a subspace of \(X\), \(m\)-simultaneously proximinal in \(X\) and weakly \(\mathcal{K}\)-analytic, is \(L_{p}(\mu,Y)\) \(m\)-simultaneously proximinal in \(L_{p}(\mu,X)\)_?
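To make the definition concrete, here is a small numerical sketch of ours (using NumPy and SciPy; the data are arbitrary) that approximates a relative \(m\)-center of a finite set \(A\subset\mathbb{R}^{2}\) in the subspace \(Y=\mathbb{R}\times\{0\}\):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# A = {a_1, a_2, a_3} in X = R^2 with the Euclidean norm,
# and the subspace Y = {(t, 0) : t in R}.
A = np.array([[-1.0, 1.0], [2.0, 2.0], [0.0, -1.0]])
m = 2.0

# y_0 = (t, 0) is a relative m-center iff t minimizes
#   sum_i || a_i - (t, 0) ||^m .
def cost(t):
    return np.sum(np.linalg.norm(A - np.array([t, 0.0]), axis=1) ** m)

res = minimize_scalar(cost)
print(res.x)  # for m = 2: the mean of the first coordinates, (-1+2+0)/3 = 1/3
```

For \(m=2\) and the Euclidean norm, the first coordinate of the relative \(2\)-center is simply the average of the first coordinates; this is consistent with Example 3.2 below, where \(g(s)=(1/2,0)\) is a relative \(2\)-center of two points whose first coordinates are \(-1\) and \(2\).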
We denote by \(\ell_{m}^{n}(X)\) the Banach space consisting of all \(n\)-tuples \[(x_{1},\ldots,x_{n})\in X^{n}=\overbrace{X\times X\times\cdots\times X}^{n \text{ times}}\] with the norm \[\||(x_{1},\ldots,x_{n})\||_{m}:=\Big{(}\sum_{i=1}^{n}\|x_{i}\|^{m}\Big{)}^{1 \over m},\ \ m\in[1,+\infty).\] Then it is obvious that \(y_{0}\) is a relative \(m\)-center of \(A=\{a_{1},\ldots,a_{n}\}\subset X\) in \(Y\) if and only if \((y_{0},\ldots,y_{0})\) is a best approximation to \((a_{1},\ldots,a_{n})\) from the subspace \[\operatorname{diag}\!\left(\ell_{m}^{n}(Y)\right):=\big{\{}(y,\ldots,y):y\in Y \big{\}}.\] ## 2 Preliminaries Let \(X\) be a nonempty set and \(\mathcal{A}\) a class of its subsets. We say that we are given a Souslin scheme \(\{A_{n_{1},\ldots,n_{k}}\}\) with values in \(\mathcal{A}\) if, to every finite sequence \((n_{1},\ldots,n_{k})\) of natural numbers, there corresponds a set \(A_{n_{1},\ldots,n_{k}}\in\mathcal{A}\). The Souslin operation over the class \(\mathcal{A}\) is the mapping that associates with every Souslin scheme \(\{A_{n_{1},\ldots,n_{k}}\}\) with values in \(\mathcal{A}\) the set \[A=\bigcup_{(n_{i})_{i\geq 1}\in\mathbb{N}^{\mathbb{N}}}\bigcap_{k=1}^{\infty}A_{ n_{1},\ldots,n_{k}}.\] The sets \(A\subset X\) of this form are called \(\mathcal{A}\)-Souslin. We recall that a multivalued mapping \(\Psi\) from a topological space \(X\) to the set of nonempty subsets of a topological space \(Y\) is called _upper semi-continuous_ if for every \(x\in X\) and every open set \(V\) in \(Y\) containing the set \(\Psi(x)\), there exists a neighborhood \(U\) of \(x\) such that \[\Psi(U):=\bigcup_{u\in U}\Psi(u)\subset V.\] Let \(X\) be a Hausdorff space and let \(\mathbb{N}^{\mathbb{N}}\) be the space of sequences of positive integers endowed with the product topology. A subset \(A\) of \(X\) is called \(\mathcal{K}\)-_analytic_ if there exists an upper semi-continuous mapping \(\Psi\) on \(\mathbb{N}^{\mathbb{N}}\) with values in the set of nonempty compact sets in \(X\) such that the equality \[A=\bigcup_{(n_{i})_{i\geq 1}\in\mathbb{N}^{\mathbb{N}}}\Psi\big{(}(n_{i})_{i\geq 1 }\big{)}\] holds. The metric projection on a subset \(Y\) of a Banach space \((X,\|\cdot\|)\) is the multivalued mapping given by \[P_{Y}(x)=\big{\{}y\in Y:\|x-y\|=\inf_{z\in Y}\|x-z\|\big{\}}\] for every \(x\in X\). Note that \(P_{Y}(x)\) may be the empty set for some \(x\in X\). Let \(Y\), \(Z\) be topological spaces. A selector for a multivalued mapping \(\phi:Y\longrightarrow 2^{Z}\) is a single-valued mapping \(f:Y\longrightarrow Z\) such that \(f(y)\in\phi(y)\) for every \(y\in Y\). A set in a Hausdorff space is called _analytic_ if it is the image of a complete separable metric space under a continuous mapping. We will say that a map \(f:Y\longrightarrow Z\) is _analytic measurable_ if the preimage of any Borel subset of \(Z\) belongs to the smallest \(\sigma\)-algebra containing the analytic subsets of \(Y\). **Theorem 2.1** (Cascales and Raja, 2003).: _Let \(X\) be a Banach space and \(Y\) a proximinal and weakly \(\mathcal{K}\)-analytic convex subset of \(X\).
Then, for every separable closed subset \(M\subset X\) the metric projection \(P_{Y}|_{M}:M\longrightarrow 2^{Y}\) has an analytic measurable selector with separable range._ Observe that the class of proximinal vector subspaces \(Y\) which are \(\mathcal{K}\)-analytic for the weak topology contains the reflexive subspaces, proximinal separable subspaces, proximinal weakly compactly generated (WCG) subspaces and proximinal quasi-reflexive subspaces, among others. **Proposition 2.2**.: _Let \(X\) be a Banach space and \(Y\) a weakly \(\mathcal{K}\)-analytic subspace of \(X\) which is \(m\)-simultaneously proximinal in \(X\). Consider a complete probability space \((\Omega,\Sigma,\mu)\) and let \(f_{1},\ldots,f_{n}:\Omega\longrightarrow X\) be \(\mu\)-measurable functions. Then there exists a \(\mu\)-measurable function \(g_{0}:\Omega\longrightarrow Y\) such that \(g_{0}(s)\) is a relative \(m\)-center of \(\{f_{1}(s),\ldots,f_{n}(s)\}\subset X\) in \(Y\), for almost all \(s\in\Omega\)._ Proof.: Since \(f_{1},\ldots,f_{n}:\Omega\longrightarrow X\) are \(\mu\)-measurable functions, it is immediate that the function \[\begin{array}{rcl}f:&\Omega&\longrightarrow&\ell_{m}^{n}(X)\\ &s&\longrightarrow&f(s)=\big{(}f_{1}(s),\ldots,f_{n}(s)\big{)}\end{array}\] is \(\mu\)-measurable. There exists \(\Omega_{0}\in\Sigma\) such that \(\mu(\Omega\backslash\Omega_{0})=0\) and \(f(\Omega_{0})\) is a separable subset of \(\ell_{m}^{n}(X)\) for the norm topology. On the other hand, a countable product of \(\mathcal{K}\)-analytic sets is \(\mathcal{K}\)-analytic. Hence, since \(Y\) is weakly \(\mathcal{K}\)-analytic, \(\operatorname{diag}(\ell_{m}^{n}(Y))\) is weakly \(\mathcal{K}\)-analytic. Denoting by \(M\) the closure of \(f(\Omega_{0})\) in \(\ell_{m}^{n}(X)\) for the norm topology, from Theorem 2.1, there exists \[h:M\longrightarrow\operatorname{diag}\bigl{(}\ell_{m}^{n}(Y)\bigr{)},\] an analytic measurable selector with separable range for the norm topology, of the metric projection \[P_{\operatorname{diag}(\ell_{m}^{n}(Y))|_{M}}:M\longrightarrow 2^{\operatorname{ diag}(\ell_{m}^{n}(Y))}.\] The function \[g:\;\;\Omega \longrightarrow \operatorname{diag}\bigl{(}\ell_{m}^{n}(Y)\bigr{)}\] \[s \longrightarrow g(s)=\left\{\begin{array}{cl}h\bigl{(}f(s)\bigr{)}&\text{ if }\quad s\in\Omega_{0}\\ 0&\text{ if }\quad s\in\Omega\backslash\Omega_{0}\end{array}\right.\] is \(\mu\)-measurable, where the measurability follows from the fact that \(f|_{\Omega_{0}}\) is \(\Sigma|_{\Omega_{0}}\)-analytic measurable, because a complete probability space is stable under the Souslin operation (Kechris, 1995), and \(h\) is analytic measurable. By construction, \(g\) satisfies \[g(s)=h\bigl{(}f(s)\bigr{)}\in P_{\operatorname{diag}(\ell_{m}^{n}(Y))|_{M}} \bigl{(}f(s)\bigr{)},\;\;\text{ if }\quad s\in\Omega_{0}.\] Then, \[\||f(s)-g(s)\||_{m}=\inf_{u\in\operatorname{diag}(\ell_{m}^{n}(Y))}\||f(s)-u \||_{m},\;\;\text{ if }\quad s\in\Omega_{0},\] so \(g(s)\) is a best approximation of \(f(s)\) for almost all \(s\in\Omega\). Finally, we define the function \(g_{0}:\Omega\longrightarrow Y\) by \(g_{0}(s)=y_{s}\), where \(y_{s}\) is the component of \(g(s)=(y_{s},\ldots,y_{s})\in\operatorname{diag}(\ell_{m}^{n}(Y))\) if \(s\in\Omega_{0}\), and \(g_{0}(s)=0\) otherwise. It is clear that \(g_{0}\) is \(\mu\)-measurable. Thus, by taking \(y\in Y\) we have \[\||f(s)-g(s)\||_{m}\leq\||f(s)-u\||_{m}\] for almost all \(s\in\Omega\), where \(u=(y,\ldots,y)\in\operatorname{diag}(\ell_{m}^{n}(Y))\).
This implies that \[\sum_{i=1}^{n}\|f_{i}(s)-g_{0}(s)\|^{m}\leq\sum_{i=1}^{n}\|f_{i}(s)-y\|^{m}\] for almost all \(s\in\Omega\). Therefore, \(g_{0}(s)\) is a relative \(m\)-center of \(\{f_{1}(s),\ldots,f_{n}(s)\}\subset X\) in \(Y\) for almost all \(s\in\Omega\). ## 3 Main results Being a relative \(p\)-center in \(Y\) for almost all \(s\in\Omega\) implies being a relative \(p\)-center in \(L_{p}(\mu,Y)\), when \((\Omega,\Sigma,\mu)\) is a complete probability space and \(1\leq p<+\infty\). **Proposition 3.1**.: _Let \(X\) be a Banach space and \(Y\) a closed subspace of \(X\), and let \(f_{1},\ldots,f_{n}\in L_{p}(\mu,X)\) and \(g\in L_{p}(\mu,Y)\). If \(g(s)\) is a relative \(p\)-center of \(\{f_{1}(s),\ldots,f_{n}(s)\}\subset X\) in \(Y\) for almost all \(s\in\Omega\), then \(g\) is a relative \(p\)-center of \(\{f_{1},\ldots,f_{n}\}\subset L_{p}(\mu,X)\) in \(L_{p}(\mu,Y)\)._ Proof.: Let \(h\in L_{p}(\mu,Y)\). Since \(g(s)\) is a relative \(p\)-center of \(\{f_{1}(s),\ldots,f_{n}(s)\}\subset X\) in \(Y\), for almost all \(s\in\Omega\), one has \[\sum_{i=1}^{n}\|f_{i}(s)-g(s)\|^{p}\leq\sum_{i=1}^{n}\|f_{i}(s)-h(s)\|^{p},\] for almost all \(s\in\Omega\). Thus \[\sum_{i=1}^{n}\int_{\Omega}\|f_{i}(s)-g(s)\|^{p}\,d\mu(s)\leq\sum_{i=1}^{n} \int_{\Omega}\|f_{i}(s)-h(s)\|^{p}\,d\mu(s).\] Therefore, we get \[\sum_{i=1}^{n}\|f_{i}-g\|_{p}^{p}\leq\sum_{i=1}^{n}\|f_{i}-h\|_{p}^{p},\ \ \forall\,h\in L_{p}(\mu,Y),\] which means that \(g\) is a relative \(p\)-center of \(\{f_{1},\ldots,f_{n}\}\subset L_{p}(\mu,X)\) in \(L_{p}(\mu,Y)\). Observe that the previous proposition fails if we take a relative \(m\)-center with \(m\neq p\), as is shown in the following example. **Example 3.2**.: _By taking \(\Omega=\{s_{1},s_{2}\}\), we define \(\Sigma=\mathcal{P}(\Omega)\) and \(\mu(\{s_{1}\})=\mu(\{s_{2}\})=1\). Let \(X=\mathbb{R}^{2}\) with the Euclidean norm and \(Y=\{(x,0):x\in\mathbb{R}\}\). Consider the functions \(f_{1},f_{2},g:\Omega\longrightarrow X\)_ \[f_{1}(s_{1}):=(-1,1),\ \ f_{1}(s_{2}):=(2,2)\] \[f_{2}(s_{1}):=(2,2),\ \ f_{2}(s_{2}):=(-1,1)\] \[g(s_{1}):=\Big{(}\frac{1}{2},0\Big{)},\ \ g(s_{2}):=\Big{(}\frac{1}{2},0 \Big{)}.\] _Then \(g(s)\) is a relative \(2\)-center of \(\{f_{1}(s),f_{2}(s)\}\subset X\) in \(Y\), for all \(s\in\Omega\), but \(g\) is not a relative \(2\)-center of \(\{f_{1},f_{2}\}\subset L_{1}(\mu,X)\) in \(L_{1}(\mu,Y)\)._ Proof.: We have that \[\|f_{1}(s)-g(s)\|^{2}+\|f_{2}(s)-g(s)\|^{2}\leq\|f_{1}(s)-y\|^{2}+\|f_{2}(s)-y \|^{2}\] for all \(y\in Y\) and all \(s\in\Omega\). Hence \(g(s)\) is a relative \(2\)-center of \(\{f_{1}(s),f_{2}(s)\}\subset X\) in \(Y\), for all \(s\in\Omega\).
On the other hand, \[\|f_{1}-g\|_{1}^{2}+\|f_{2}-g\|_{1}^{2}=\bigg{(}\int_{\Omega}\|f_{1}(s)-g(s)\|d\mu(s)\bigg{)}^{2}+\bigg{(}\int_{\Omega}\|f_{2}(s)-g(s)\|d\mu(s)\bigg{)}^{2}\] \[=\big{(}\|f_{1}(s_{1})-g(s_{1})\|+\|f_{1}(s_{2})-g(s_{2})\|\big{)}^{2}+\big{(}\|f_{2}(s_{1})-g(s_{1})\|+\|f_{2}(s_{2})-g(s_{2})\|\big{)}^{2}\] \[=2\Bigg{(}\sqrt{\frac{13}{4}}+\sqrt{\frac{25}{4}}\Bigg{)}^{2}=\frac{1}{2}(13+10\sqrt{13}+25)=19+5\sqrt{13}.\] We define \(h:\Omega\longrightarrow Y\) by \[h(s_{1})=h(s_{2}):=(0,0).\] Then, \[\|f_{1}-h\|_{1}^{2}+\|f_{2}-h\|_{1}^{2}=\big{(}\|f_{1}(s_{1})-h(s_{1})\|+\|f_{1}(s_{2})-h(s_{2})\|\big{)}^{2}+\big{(}\|f_{2}(s_{1})-h(s_{1})\|+\|f_{2}(s_{2})-h(s_{2})\|\big{)}^{2}\] \[=\Big{(}\sqrt{2}+\sqrt{8}\Big{)}^{2}+\Big{(}\sqrt{8}+\sqrt{2}\Big{)}^{2}=36.\] Since \(5\sqrt{13}>17\), we have \(36<19+5\sqrt{13}\), and hence \[\|f_{1}-h\|_{1}^{2}+\|f_{2}-h\|_{1}^{2}<\|f_{1}-g\|_{1}^{2}+\|f_{2}-g\|_{1}^{2},\] which means that \(g\) is not a relative \(2\)-center of \(\{f_{1},f_{2}\}\subset L_{1}(\mu,X)\) in \(L_{1}(\mu,Y)\). **Lemma 3.3**.: _Let \(Y\) be a closed subspace of a Banach space \(X\) and \(f_{1},\ldots,f_{n}\in L_{p}(\mu,X)\). Let \(g:\Omega\longrightarrow Y\) be a \(\mu\)-measurable function such that \(g(s)\) is a relative \(p\)-center of \(\{f_{1}(s),\ldots,f_{n}(s)\}\subset X\) in \(Y\) for almost all \(s\in\Omega\). Then \(g\) is a relative \(p\)-center of \(\{f_{1},\ldots,f_{n}\}\subset L_{p}(\mu,X)\) in \(L_{p}(\mu,Y)\)._ Proof.: Since \(g(s)\) is a relative \(p\)-center of \(\{f_{1}(s),\ldots,f_{n}(s)\}\subset X\) in \(Y\), for almost all \(s\in\Omega\), we have \[\sum_{i=1}^{n}\|f_{i}(s)-g(s)\|^{p}\leq\sum_{i=1}^{n}\|f_{i}(s)-y\|^{p}\] for almost all \(s\in\Omega\) and each \(y\in Y\). On the other hand, for \(i_{0}\in\{1,\ldots,n\}\) it follows that \[\|g(s)\|^{p}\leq\big{(}\|f_{i_{0}}(s)-g(s)\|+\|f_{i_{0}}(s)\|\big{)}^{p}\leq 2^{p-1}\big{(}\|f_{i_{0}}(s)-g(s)\|^{p}+\|f_{i_{0}}(s)\|^{p}\big{)}\leq 2^{p-1}\sum_{i=1}^{n}\|f_{i}(s)-g(s)\|^{p}+2^{p-1}\|f_{i_{0}}(s)\|^{p}\leq 2^{p-1}\sum_{i=1}^{n}\|f_{i}(s)-y\|^{p}+2^{p-1}\|f_{i_{0}}(s)\|^{p},\] for almost all \(s\in\Omega\) and each \(y\in Y\). Now by taking \(y=0\), we get \[\|g(s)\|^{p}\leq 2^{p-1}\sum_{i=1}^{n}\|f_{i}(s)\|^{p}+2^{p-1}\|f_{i_{0}}(s)\|^{p}\] for almost all \(s\in\Omega\). Thus \[\|g\|_{p}^{p}\leq 2^{p-1}\sum_{i=1}^{n}\|f_{i}\|_{p}^{p}+2^{p-1}\|f_{i_{0}}\|_{p}^{p}.\] Hence \(g\in L_{p}(\mu,Y)\) and, from Proposition 3.1, it is a relative \(p\)-center of \[\{f_{1},\ldots,f_{n}\}\subset L_{p}(\mu,X)\] in \(L_{p}(\mu,Y)\). **Theorem 3.4**.: _Let \(X\) be a Banach space. If \(Y\) is a subspace of \(X\), \(p\)-simultaneously proximinal in \(X\) and weakly \(\mathcal{K}\)-analytic, then \(L_{p}(\mu,Y)\) is \(p\)-simultaneously proximinal in \(L_{p}(\mu,X)\)._ Proof.: Let \(f_{1},\ldots,f_{n}\) be functions in \(L_{p}(\mu,X)\). Then Proposition 2.2 guarantees the existence of a \(\mu\)-measurable function \(g_{0}:\Omega\longrightarrow Y\) such that \(g_{0}(s)\) is a relative \(p\)-center of \(\{f_{1}(s),\ldots,f_{n}(s)\}\subset X\) in \(Y\) for almost all \(s\in\Omega\). By Lemma 3.3 it follows that \(g_{0}\) is a relative \(p\)-center of \(\{f_{1},\ldots,f_{n}\}\subset L_{p}(\mu,X)\) in \(L_{p}(\mu,Y)\). **Proposition 3.5**.: _Let \((H,\|\cdot\|)\) be a Hilbert space, \(1\leq p<+\infty\), \(p\neq 2\) and \(m=2\).
Then there exists a three-point set \(\{f_{1},f_{2},f_{3}\}\subset L_{p}(\mu,H)\) and \(g\in L_{p}(\mu,H)\) such that \(g(s)\in H\) is a \(2\)-center of \(\{f_{1}(s),f_{2}(s),f_{3}(s)\}\subset H\), for every \(s\in\Omega\). But \(g\) is not a \(2\)-center of \(\{f_{1},f_{2},f_{3}\}\)._ Proof.: Since the space \(L_{p}(\mu,H)\) \((p\neq 2)\) is not an inner product space, by (Benitez, Fernandez and Soriano, 2002), there exists a three-point set \(\{f_{1},f_{2},f_{3}\}\subset L_{p}(\mu,H)\) such that \[Z^{2}(\{f_{1},f_{2},f_{3}\})\cap\operatorname{conv}(\{f_{1},f_{2},f_{3}\})=\emptyset. \tag{1}\] On the other hand, we have that the set \(Z^{2}(\{f_{1},f_{2},f_{3}\})\) is not empty and \[\frac{1}{3}\big{(}f_{1}(s)+f_{2}(s)+f_{3}(s)\big{)}\in Z^{2}(\{f_{1}(s),f_{2}( s),f_{3}(s)\}),\quad\text{for all}\quad s\in\Omega. \tag{2}\] We define the function \(g:\Omega\longrightarrow H\) by \[g(s):=\frac{1}{3}\big{(}f_{1}(s)+f_{2}(s)+f_{3}(s)\big{)}.\] By construction of \(g(s)\) one has \(g\in\operatorname{conv}(\{f_{1},f_{2},f_{3}\})\). On the other hand, since \(\{f_{1},f_{2},f_{3}\}\subset L_{p}(\mu,H)\), we have \(g\in L_{p}(\mu,H)\). From (1), we have \(g\notin Z^{2}(\{f_{1},f_{2},f_{3}\})\). However, from (2), \(g(s)\) is a \(2\)-center of \(\{f_{1}(s),f_{2}(s),f_{3}(s)\}\) for all \(s\in\Omega\). This concludes the proof.
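As a quick numerical sanity check of Example 3.2 above, the two constants \(19+5\sqrt{13}\) and \(36\) can be reproduced directly (this is our check, not part of the original argument):

```python
import numpy as np

# Data of Example 3.2: Omega = {s1, s2}, mu({s1}) = mu({s2}) = 1.
f1 = {"s1": np.array([-1.0, 1.0]), "s2": np.array([2.0, 2.0])}
f2 = {"s1": np.array([2.0, 2.0]), "s2": np.array([-1.0, 1.0])}
g  = np.array([0.5, 0.0])
h  = np.array([0.0, 0.0])

def cost(y):
    # ||f_1 - y||_1^2 + ||f_2 - y||_1^2 for a constant function y in Y
    n1 = sum(np.linalg.norm(f1[s] - y) for s in ("s1", "s2"))
    n2 = sum(np.linalg.norm(f2[s] - y) for s in ("s1", "s2"))
    return n1 ** 2 + n2 ** 2

print(cost(g))  # 19 + 5*sqrt(13) = 37.027...
print(cost(h))  # 36.0 < cost(g), so g is not a relative 2-center in L_1
```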
2309.09583
Liftings of knots in $S_{g} \times S^{1}$ and covering of virtual knots
A virtual link diagram is called {\em (mod $m$) almost classical} if it admits a (mod $m$) Alexander numbering. In \cite{BodenGaudreauHarperNicasWhite}, it is shown that the Alexander polynomial for almost classical links can be defined by using the homology of the associated infinite cyclic cover. On the other hand, in \cite{NaokoKamada} an infinite family of $m$-fold coverings over a virtual knot is constructed, by using oriented cut points, so that each covering is a mod $m$ almost classical link, for all $m$. In this paper, we give another way to obtain a family of $m$-fold coverings over a given virtual knot, which are mod $m$ almost classical, by using knots in $S_{g} \times S^{1}$.
Seongjeong Kim
2023-09-18T08:46:12Z
http://arxiv.org/abs/2309.09583v2
# Liftings of knots in \(S_{g}\times S^{1}\) and covering of virtual knots ###### Abstract. A virtual link diagram is called _(mod \(m\)) almost classical_ if it admits a (mod \(m\)) Alexander numbering. In [5], it is shown that the Alexander polynomial for almost classical links can be defined by using the homology of the associated infinite cyclic cover. On the other hand, in [4] an infinite family of \(m\)-fold coverings over a virtual knot is constructed, by using oriented cut points, so that each covering is a mod \(m\) almost classical link, for all \(m\). In this paper, we give another way to obtain a family of \(m\)-fold coverings over a given virtual knot, which are mod \(m\) almost classical, by using knots in \(S_{g}\times S^{1}\). Keywords: Virtual knots, Mod \(m\) almost classical knots, Oriented cut system, Knots in \(S_{g}\times S^{1}\) Mathematics Subject Classification 2020: 57K10, 57K12 ## 1. Introduction A virtual link diagram is called _(mod \(m\)) almost classical_ if it admits a (mod \(m\)) Alexander numbering. In [5], it is shown that the Alexander polynomial for almost classical links can be defined by using the homology of the associated infinite cyclic cover. In particular, it satisfies a skein relation. Similarly one can construct an Alexander polynomial for mod \(m\) almost classical links. On the other hand, in [6, 7] the notions of an oriented cut point and a cut system for a virtual link diagram, which are an extension of (unoriented) cut points, are introduced. In [4] it is proved that any two cut systems for a given virtual link diagram are equivalent up to finitely many local moves, and an infinite family of \(m\)-fold coverings over a virtual knot is constructed so that each covering is a mod \(m\) almost classical link, for all \(m\). Our interest is links in \(S_{g}\times S^{1}\), where \(S_{g}\) is an orientable surface of genus \(g\). In [3], the author introduced diagrams on a plane and local moves for links in \(S_{g}\times S^{1}\), which present equivalence relations for links in \(S_{g}\times S^{1}\). This paper is devoted to constructing another way to obtain a family of \(m\)-fold coverings over a given virtual knot, which are mod \(m\) almost classical, by using knots in \(S_{g}\times S^{1}\). The present paper is organized as follows: In Section 2, we introduce the notions of almost classical links and oriented cut points. In Section 3, the basic notions and properties of links in \(S_{g}\times S^{1}\) are introduced. In Section 4, the covering over knots in \(S_{g}\times S^{1}\) is constructed by using the heights of their arcs. In Section 5, a map from virtual knots to knots in \(S_{g}\times S^{1}\) is constructed by using oriented cut points. We will show that the \(m\)-fold covering over knots in \(S_{g}\times S^{1}\) of degree \(0\) constructed in Section 4 is almost classical. ## 2. Almost classical links and cut system In this paper, arcs between crossings are called _arcs_ and arcs between a crossing and a double line are called _short arcs_. ### Almost classical links **Definition 2.1**.: An _Alexander numbering_ (respectively, _mod \(m\) Alexander numbering_) of \(D\) is an assignment of a number in \(\mathbb{Z}\) (respectively, in \(\mathbb{Z}_{m}\)) to each short arc of \(D\) such that the numbers of the \(4\) short arcs around each classical crossing are assigned as shown in Fig. 1 for some \(i\in\mathbb{Z}\) (respectively, \(i\in\mathbb{Z}_{m}\)).
If a diagram admits an Alexander numbering (respectively, a _mod \(m\) Alexander numbering_), then it is called an _almost classical diagram_ (respectively, a _mod \(m\) almost classical diagram_). **Example 2.2**.: _It is known that the left diagram in Fig. 2 is not Alexander numberable. The right diagram in Fig. 2 admits an Alexander numbering as described and hence it is almost classical._ **Example 2.3**.: _It is known that the virtual knot diagram in Fig. 3 is not an almost classical diagram. But one can find a mod \(3\) Alexander numbering as described in Fig. 3 and hence it is a mod \(3\) almost classical diagram._ **Definition 2.4**.: A virtual link \(L\) is _almost classical_ (respectively _mod \(m\) almost classical_) if there is an almost classical (respectively mod \(m\) almost classical) virtual link diagram of \(L\). It is known that any almost classical virtual link diagram is mod \(m\) almost classical. A virtual link diagram is checkerboard colorable if and only if it is mod \(2\) almost classical. **Proposition 2.5** ([5]).: _A knot \(K\) in a thickened surface \(\Sigma\times[0,1]\) is mod \(p\) almost classical if and only if it is homologically trivial as an element in \(H_{1}(\Sigma;\mathbb{Z}_{p})\)._ Figure 1. Figure 2. Non-Alexander numberable and almost classical diagrams **Proposition 2.6** ([5]).: _For a virtual knot or link \(K\), the following are equivalent:_ 1. \(K\) _is almost classical;_ 2. \(K\) _is homologically trivial as a knot or link in_ \(\Sigma\times[0,1]\)_;_ 3. \(K\) _is the boundary of a connected, oriented surface_ \(F\) _embedded in_ \(\Sigma\times[0,1]\)_, where_ \(\Sigma\) _is the underlying surface associated to an Alexander numberable diagram of_ \(K\)_._ _The surface \(F\) is called a Seifert surface for \(K\)._ In [5] the Alexander polynomial \(\Delta_{K}(t)\) is defined by using Seifert surfaces for almost classical links. In particular, it satisfies the usual skein relation \(\Delta_{K_{+}}(t)-\Delta_{K_{-}}(t)=(t-t^{-1})\Delta_{K_{0}}(t)\). ### Cut system and \(m\)-fold cyclic covering In general, virtual link diagrams are not almost classical. In [6, 7], the notion of cut points is introduced by H. Dye. **Definition 2.7**.: An _oriented cut point_, or simply _cut point_, is an arrow on an arc which indicates a local orientation of the arc as described in Fig. 4. An oriented cut point is called _coherent_ (respectively _incoherent_) if the local orientation indicated by the cut point agrees with (respectively disagrees with) the orientation of the virtual link diagram. **Definition 2.8**.: Let \(D\) be a virtual link diagram and \(P\) a set of oriented cut points of \(D\). We say that \(P\) is a _cut system_ if \(D\) admits an Alexander numbering such that at each oriented cut point, the number increases by one in the direction of the oriented cut point, as described in Fig. 5. Such an Alexander numbering is called _an Alexander numbering of a virtual link diagram with a cut system_. **Definition 2.9**.: _A standard cut system_ of a virtual link diagram is a cut system which is obtained by introducing two oriented cut points as described in Fig. 6 around each virtual crossing. It is indeed a cut system, and an Alexander numbering looks as described in the right of Fig. 6 around each virtual crossing. Figure 4. Oriented cut points Figure 3. Mod 3 almost classical diagram
Note that classical knot diagrams are almost classical, that is, Alexander numberable. **Definition 2.11**.: The local transformation of oriented cut points depicted in Fig. 7 are called _oriented cut point moves_. **Theorem 2.12** ([4]).: _Two cut systems of the same virtual link diagram are related by a sequence of oriented cut point moves._ **Corollary 2.13**.: _Let \(D\) be a virtual link diagram and \(P\) a cut system of \(D\). The number of coherent cut points of \(P\) equals that of incoherent cut points of \(P\)._ By N. Kamada an _m-fold cyclic covering (virtual link) diagram of \((D,P)\)_ is constructed. The following proposition is proved. **Proposition 2.14**.: _For a virtual link diagram \(D\) with a cut system \(P\), an \(m\)-fold cyclic covering virtual link diagram \(\phi_{m}(D,P)\) is mod \(m\) almost classical._ For details, see [4]. ## 3. Basic notions for links in \(S_{g}\times S^{1}\) ### Links in \(S_{g}\times S^{1}\) and its diagrams Figure 5. Oriented cut points Figure 6. Standard cut system Figure 7. Oriented cut point moves **Definition 3.1**.: Let \(S_{g}\) be an oriented surface of genus \(g\). _A link \(L\) in \(S_{g}\times S^{1}\)_ is a pair of \(S_{g}\times S^{1}\) and a smooth embedding of a disjoint union of \(S^{1}\) into \(S_{g}\times S^{1}\). We denote it by \((L,S_{g}\times S^{1})\). Each image of \(S^{1}\) in \(S_{g}\times S^{1}\) is called _a component_ of \(L\). A link of one component is called _a knot in \(S_{g}\times S^{1}\)_. **Definition 3.2**.: Let \((L,S_{g}\times S^{1})\) and \((L^{\prime},S_{g^{\prime}}\times S^{1})\) be two links in \(S_{g}\times S^{1}\). We call \((L,S_{g}\times S^{1})\) and \((L^{\prime},S_{g^{\prime}}\times S^{1})\) are _equivalent_, if \((L^{\prime},S_{g^{\prime}}\times S^{1})\) can be obtained from \((L,S_{g}\times S^{1})\) by isotopy and stabilization/destabilization of \(S_{g}\times S^{1}\). We call \((L,S_{g}\times S^{1})\) and \((L^{\prime},S_{g}\times S^{1})\) are _isotopic_ if there exists a diffeomorphism from \(S_{g}\times S^{1}\) to itself which maps \(L\) to \(L^{\prime}\). By the _destabilization for \((L,S_{g}\times S^{1})\)_ we mean the following: Let \(C\) be a non-contractible circle on the surface \(S_{g}\) such that there exists a torus \(T\) in \(S_{g}\times S^{1}\) homotopic to the torus \(C\times S^{1}\) and not intersecting the link. Then our destabilization is cutting of the manifold \(S_{g}\times S^{1}\) along the torus \(C\times S^{1}\) and pasting of two newborn components of boundary by \(D^{2}\times S^{1}\). The _stabilization for \(S_{g}\times S^{1}\)_ is converse operation to the destabilization. First let us construct diagrams for \((L,S_{g}\times S^{1})\) on the surface \(S_{g}\) as follows: Let \(L\) be an (oriented) link in \(S_{g}\times S^{1}\). Assume that an orientation is given on \(S^{1}\). Suppose that \(x_{0}\in S^{1}\) is a point such that \(S_{g}\times\{x_{0}\}\cap L\) is a set of finite points with no transversal points. Then there exists a natural diffeomorphism \(f\) from \((S_{g}\times S^{1})\setminus(S_{g}\times\{x_{0}\})\) to \(S_{g}\times(0,1)\subset S_{g}\times[0,1]\). Let \(M_{L}=\overline{f((S_{g}\times S^{1})-(S_{g}\times\{x_{0}\}))}\cong S_{g} \times[0,1]\). Then \(\overline{f(L)}\) in \(M_{L}\) consists of finitely many circles and arcs with exactly two boundaries on \(S_{g}\times\{0\}\) and \(S_{g}\times\{1\}\). Let \(D_{\overline{f(L)}}\) be the image of a projection of \(\overline{f(L)}\) on the \(S_{g}\times\{0\}\). 
Notice that the two boundary points of an arc of \(\overline{f(L)}\) are projected to the same point, which we call _a vertex_. It follows that the diagram \(D_{\overline{f(L)}}\) of \(L\) on \(S_{g}\) has \(n\) circles with vertices corresponding to the two boundary points on \(S_{g}\times\{0\}\) and \(S_{g}\times\{1\}\) and \(m\) circles, as described in the right of Fig. 8. Notice that the two arcs near a vertex are the images of arcs near \(S_{g}\times\{0\}\) and \(S_{g}\times\{1\}\) in \(M_{L}\cong S_{g}\times[0,1]\), respectively, as described in Fig. 9. We change the point to two small lines so that, if one of the lines is connected with an arc which is near \(S_{g}\times\{1\}\), then that line is longer than the other, as described in Fig. 9. Figure 8. Schematic figures of links in \(S_{g}\times S^{1}\) and their projection on the surface \(S_{g}\times\{0\}\) Since \(D_{\overline{f(L)}}\) is a framed 4-valent graph with double lines on the surface \(S_{g}\), which comes from \(\overline{f(L)}\) in \(S_{g}\times[0,1]\), one can give classical crossings for each 4-valent vertex. That is, a link \(L\) in \(M_{L}\) has a knot diagram with double lines on \(S_{g}\). Simply we call it _a diagram on \(S_{g}\) with double lines_. **Proposition 3.3** (M. K. Dabkowski, M. Mroczkowski (2009) [2], Kim (2018) [3]).: _Let \((L,S_{g}\times S^{1})\) and \((L^{\prime},S_{g}\times S^{1})\) be two links in \(S_{g}\times S^{1}\). Let \(D_{L}\) and \(D_{L^{\prime}}\) be diagrams of \(L\) and \(L^{\prime}\) on \(S_{g}\), respectively. Then \((L,S_{g}\times S^{1})\) and \((L^{\prime},S_{g}\times S^{1})\) are isotopic if and only if \(D_{L^{\prime}}\) can be obtained from \(D_{L}\) by applying finitely many moves in Fig. 10._ **Corollary 3.4**.: _Let \((L,S_{g}\times S^{1})\) and \((L^{\prime},S_{g-1}\times S^{1})\) be two links in \(S_{g}\times S^{1}\) and \(S_{g-1}\times S^{1}\), respectively. Let \(D_{L}\) and \(D_{L^{\prime}}\) be diagrams on \(S_{g}\) and \(S_{g-1}\) of \(L\) and \(L^{\prime}\), respectively. Then \((L^{\prime},S_{g-1}\times S^{1})\) is obtained from \((L,S_{g}\times S^{1})\) by a destabilization if and only if \((D_{L^{\prime}},S_{g-1})\) can be obtained from \((D_{L},S_{g})\) by a destabilization of \(S_{g}\)._ Now, let us construct diagrams for links in \(S_{g}\times S^{1}\) on the plane by using diagrams on \(S_{g}\). For a link in \(S_{g}\times S^{1}\), let a diagram \(D\) on \(S_{g}\) with double lines be given. We may assume that the diagram is drawn on a \(2g\)-gon presentation of \(S_{g}\), as in the middle of Fig. 11. Figure 10. Moves for diagrams on \(S_{g}\) Figure 9. Local image near an intersection of a link with \(S_{g}\times\{x_{0}\}\) and the corresponding double line Connect points on the boundary of the \(2g\)-gon with the corresponding points by arcs outside the \(2g\)-gon. By changing intersections between arcs outside the \(2g\)-gon to virtual crossings, we obtain a diagram with double lines and virtual crossings, see the right in Fig. 11. We call it _a diagram with double lines on the plane_ for links in \(S_{g}\times S^{1}\), or simply _a diagram for links in \(S_{g}\times S^{1}\)_. The following theorem holds. **Proposition 3.5** (Kim (2018) [3]).: _Let \((L,S_{g}\times S^{1})\) and \((L^{\prime},S_{g^{\prime}}\times S^{1})\) be two links. Let \(D_{L}\) and \(D_{L^{\prime}}\) be diagrams of \(L\) and \(L^{\prime}\) on the plane, respectively. Then \(L\) and \(L^{\prime}\) are equivalent if and only if \(D_{L^{\prime}}\) can be obtained from \(D_{L}\) by applying finitely many moves in Fig.
12._ **Terminology.** In this paper, arcs of a diagram with double lines between two double lines are called _long arcs_, arcs between crossings are called _arcs_, and arcs between a crossing and a double line are called _short arcs_. _Remark 3.6_.: As described in Fig. 13, by adding two double lines one can change the over/under information of a crossing. ### Degree of oriented knots in \(S_{g}\times S^{1}\) and heights of arcs From now on we are mainly interested in _oriented knots_ in \(S_{g}\times S^{1}\). Let \(\Pi:\mathbb{R}\to S^{1}\) be the natural covering defined by \(\Pi(r)=e^{2\pi ri}\). Then the function \(Id_{S_{g}}\times\Pi:S_{g}\times\mathbb{R}\to S_{g}\times S^{1}\) is also a covering over \(S_{g}\times S^{1}\), where \(Id_{S_{g}}:S_{g}\to S_{g}\) is the identity map. Figure 11. Schematic figures of links in \(S_{g}\times S^{1}\) and its projection on the plane Figure 12. Moves for links in \(S_{g}\times S^{1}\) (Lifting diagram: \(K:S^{1}\to S_{g}\times S^{1}\), its lift \(\tilde{K}:S^{1}\to S_{g}\times\mathbb{R}\) along \(Id_{S_{g}}\times\Pi\), and the projection \(\phi_{2}:S_{g}\times\mathbb{R}\to\mathbb{R}\).) Let \(K:[0,1]\to S_{g}\times S^{1}\) be a knot with \(K(0)=K(1)\). Let \(\tilde{K}\) be a lifting of \(K\) into \(S_{g}\times\mathbb{R}\) along the covering \(Id_{S_{g}}\times\Pi:S_{g}\times\mathbb{R}\to S_{g}\times S^{1}\). When \(\phi_{2}\circ\tilde{K}(0)=0\) for the projection \(\phi_{2}:S_{g}\times\mathbb{R}\to\mathbb{R}\), _the degree \(deg(K)\) of a knot \(K\) in \(S_{g}\times S^{1}\)_ is defined by \[deg(K)=\phi_{2}\circ\tilde{K}(1)\in\mathbb{Z}.\] It is easy to see that the degree \(deg(K)\) is an invariant for knots in \(S_{g}\times S^{1}\). The degree of a given oriented knot \(K\) in \(S_{g}\times S^{1}\) can be calculated by using a diagram with double lines in the following way: Let \(D\) be an oriented diagram with double lines of an oriented knot \(K\) in \(S_{g}\times S^{1}\). Let us give \(\pm 1\) to the double lines with respect to the orientation as described in Fig. 14, and call it _the sign of a double line_. Then the degree of \(K\) is equal to the sum of the signs of all double lines; for example, see Fig. 15. ### Heights of long arcs of \(D\) Let \(K\) be an oriented knot in \(S_{g}\times S^{1}\) with degree \(deg(K)=n\). Without loss of generality we may assume that the point \(K(0)=K(1)\) corresponds to a double line of a diagram \(D\). For a long arc \(l\) of a diagram \(D\) of \(K\), there is a line segment \(l^{\prime}\) in the lifting \(\tilde{K}\) to \(S_{g}\times\mathbb{R}\) with \(\tilde{K}(0)\in S_{g}\times\{0\}\) corresponding to \(l\). If \(\phi_{2}(l^{\prime})\in(a,a+1)\) for some \(a\in\mathbb{Z}\), then we give the label \(a\in\mathbb{Z}_{n}\) (or \(a\in\mathbb{Z}\) when \(n=0\)) to \(l\) and we call it _the height of the long arc \(l\)_. **Example 3.7**.: _Let \(K\) be an oriented knot in \(S^{2}\times S^{1}\) as described in Fig. 16, where \(S^{2}\) is the 2-dimensional sphere. The knot \(K\) has degree \(2\) and it has a diagram \(D_{K}\) consisting of the trivial knot diagram with two double lines. Figure 14. Signs of double lines Figure 13. A crossing change with additional two double lines One can see that the arc of
Now we give the numbers \(0\) and \(1\) to the red and green long arcs of \(D_{K}\), respectively. Note that the heights \(0\) and \(1\) are considered as elements in \(\mathbb{Z}_{2}\)._

As with the degree of an oriented knot \(K\) in \(S_{g}\times S^{1}\), the heights of the long arcs of a diagram \(D\) of \(K\) can be determined in a combinatorial way: let \(K\) be an oriented knot in \(S_{g}\times S^{1}\) with degree \(deg(K)=n\), and let \(D\) be an oriented diagram with double lines of \(K\). Let us give signs to the double lines. Take a double line as a starting point and give height \(0\) to the long arc that follows it with respect to the orientation of \(D\), see Fig. 17. Following the diagram along the given orientation, when we pass a double line of sign \(\epsilon\) coming from a long arc of height \(a\in\mathbb{Z}_{n}\), we give \(a+\epsilon\in\mathbb{Z}_{n}\) to the next long arc, see Fig. 18.

Figure 15. \(D_{1}\), \(D_{2}\) and \(D_{3}\) are oriented diagrams with double lines. Each double line has a sign. We obtain \(deg(D_{1})=2\), \(deg(D_{2})=3\), \(deg(D_{3})=0\).

Figure 16. A knot in \(S^{2}\times S^{1}\) with degree \(2\)

Figure 17. Starting point determining heights of long arcs

**Example 3.8**.: _Let \(D\) be the oriented diagram with double lines in Fig. 19 of a knot \(K\) in \(S_{g}\times S^{1}\). The degree of \(K\) is \(0\). With the fixed starting point at a double line, we obtain the heights of long arcs as in the right figure of Fig. 19._

**Example 3.9**.: _Let \(D\) be the oriented diagram with double lines in Fig. 20 of a knot \(K\) in \(S_{g}\times S^{1}\). The degree of \(K\) is \(3\). With the fixed starting point at a double line, we obtain the heights of long arcs. Note that the long arc with height \(0\) simultaneously has height \(3\) by the rule determining heights in the combinatorial way. Since heights are valued in \(\mathbb{Z}_{3}\), this causes no ambiguity._

Figure 19. Heights on long arcs of a knot \(K\) of degree \(0\) valued in \(\mathbb{Z}_{0}=\mathbb{Z}\)

Figure 20. Heights on long arcs of a knot \(K\) of degree \(3\) valued in \(\mathbb{Z}_{3}\)

_Remark 3.10_.: Let \(\mathcal{A}_{D}\) be the set of long arcs of \(D\). Then the heights of long arcs can be presented by a map from \(\mathcal{A}_{D}\) to \(\mathbb{Z}_{n}\). For each choice of a double line \(d\) as a starting point we obtain \(\phi_{d}:\mathcal{A}_{D}\to\mathbb{Z}_{n}\). For a pair of double lines \(d_{1}\) and \(d_{2}\), there exists a map \(t_{s}:\mathbb{Z}_{n}\to\mathbb{Z}_{n}\) defined by \(t_{s}(x)=x+s\) such that \(\phi_{d_{2}}=t_{s}\circ\phi_{d_{1}}\).
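The combinatorial rules above are easy to mechanize. The following short sketch (our own illustration in Python; the paper itself contains no code) encodes a diagram only by the sequence of signs of its double lines, in the order they are met along the orientation starting from the chosen double line, and computes the degree and the heights of the long arcs; by Remark 3.10, a different starting double line only shifts all heights by a constant.

```python
# A minimal sketch (our own illustration): a knot diagram with double lines
# is encoded by the list of signs (+1 / -1) of the double lines met while
# traversing D once along its orientation, starting from a chosen double line.

def degree(signs):
    """The degree of K is the sum of the signs of all double lines."""
    return sum(signs)

def long_arc_heights(signs):
    """Heights of the long arcs, in the order they are traversed.
    The long arc following the starting double line gets height 0; each
    further double line of sign eps shifts the height by eps.  Heights are
    taken in Z_n for n = |deg(K)| != 0, and in Z when deg(K) = 0."""
    n = abs(degree(signs))
    h, heights = 0, [0]
    for eps in signs[1:]:
        h += eps
        heights.append(h % n if n != 0 else h)
    return heights

# The knot of Example 3.7: two double lines, both of sign +1, so the
# degree is 2 and the red/green long arcs get heights 0 and 1 in Z_2.
print(degree([1, 1]), long_arc_heights([1, 1]))   # 2 [0, 1]
```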
## 4. Lifting of knots in \(S_{g}\times S^{1}\) to \(S_{g}\times\mathbb{R}\)

In the previous section we defined the degree of a knot in \(S_{g}\times S^{1}\) by using the covering \(Id_{S_{g}}\times\Pi:S_{g}\times\mathbb{R}\to S_{g}\times S^{1}\). Since a knot \(K\) is an embedding of \(S^{1}\) into \(S_{g}\times S^{1}\), there exists a lifting of \(K\) to \(S_{g}\times\mathbb{R}\) along the projection \(Id_{S_{g}}\times\Pi\). Since \(\mathbb{R}\cong(0,1)\subset[0,1]\), the lifting can be considered as a curve in the thickened surface \(S_{g}\times[0,1]\), and one can expect that it can be presented by a diagram of knots on the plane. In the present section we discuss how to obtain the diagram of the liftings step by step by using diagrams with heights on arcs.

### Lifting of knots in \(S_{g}\times S^{1}\) with degree \(0\)

Let \(K\) be an oriented knot in \(S_{g}\times S^{1}\) with degree \(0\) and let \(\hat{K}\) be a lifting of \(K\) to \(S_{g}\times\mathbb{R}\). Since \(K\) has degree \(0\), \(\hat{K}\) is a knot in \(S_{g}\times\mathbb{R}\), that is, \(\hat{K}\) is a virtual knot. It is clear that if \(K\) and \(K^{\prime}\) are equivalent, then \(\hat{K}\) and \(\hat{K^{\prime}}\) are equivalent as virtual knots. To obtain a diagram of \(\hat{K}\), let us visualize the lifting \(\hat{K}\) in \(S_{g}\times\mathbb{R}\). It consists of the following steps:

**Step 1.** Let \(D\) be an oriented diagram of \(K\) in \(S_{g}\times S^{1}\). Fix a double line on the diagram and give a height to each long arc as in the previous section. Let us say that the smallest label is \(m\) and the largest label is \(M\).

**Step 2.** Let us imagine \(M-m+1\) parallel planes placed as in Fig. 22. Number them from bottom to top by the integers from \(m\) to \(M\). Draw \(M-m+1\) copies of \(D\), one on each plane. For the copy of the diagram on the plane with number \(k\), erase the long arcs which do not have height \(k\).

**Step 3.** Suppose that we are walking on an arc on the plane with number \(s\). When we meet a double line, if it is the longer line, then we connect the arc to the arc on the plane with number \(s+1\), but if it is the shorter line, then we connect the arc to the arc on the plane with number \(s-1\). Since \(deg(K)=0\), we come back to the initial point, and the result is a virtual knot in \(S_{g}\times\mathbb{R}\).

Figure 21. Step 1 for knots of degree \(0\)

Figure 22. Step 2 for knots of degree \(0\)

From our visualization of \(\hat{K}\) one can obtain a virtual diagram \(\hat{D}\) as follows: for a diagram \(D\) with double lines, we give heights to the arcs of \(D\). For each crossing, if the heights of the over/under arcs are the same, then we keep the crossing. If the heights of the over/under arcs are different, then we change (if necessary) the over/under information so that the arc with the higher height becomes the over crossing. By removing the double lines, we obtain a virtual knot diagram \(\hat{D}\).

**Corollary 4.1**.: _Let \(D\) and \(D^{\prime}\) be two oriented diagrams of knots in \(S_{g}\times S^{1}\). If they are equivalent, then \(\hat{D}\) and \(\hat{D^{\prime}}\) are equivalent as virtual knots._

Proof.: Assume that \(D^{\prime}\) is obtained from \(D\) by applying one of the moves in Fig. 12. If \(D^{\prime}\) is obtained from \(D\) by applying (1), (2), (3), (1'), (2'), (3') or (3"), then it is easy to see that \(\hat{D^{\prime}}\) can be obtained from \(\hat{D}\) by applying virtual Reidemeister moves. Suppose that \(D^{\prime}\) is obtained from \(D\) by applying (4). The line segments in move (4) have labels \(a\), \(a+1\) and \(b\), as described in Fig. 24 and Fig. 25. As shown in the figures, when we lift the diagrams, the difference is just a change of the place of a classical crossing, and it follows that \(\hat{D}\cong\hat{D}^{\prime}\). Analogously one can show that \(\hat{D}\cong\hat{D}^{\prime}\) when \(D^{\prime}\) is obtained from \(D\) by applying (4'). Suppose that \(D^{\prime}\) is obtained from \(D\) by applying (5), see Fig. 26. When we lift the link to \(S_{g}\times\mathbb{R}\), the line segment between the two double lines and the other segments are placed on different levels. Note that in \(S_{g}\times\mathbb{R}\) there are no other arcs under the line segment corresponding to the segment between the two double lines, so we can push it down as described at the bottom of Fig. 26. That is, \(\hat{D}\cong\hat{D}^{\prime}\), and the proof is completed.
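The crossing rule used in the construction of \(\hat{D}\) can be phrased as a one-line check. The following sketch is our own illustrative encoding (the strand labels and the boolean convention are assumptions, not the paper's notation):

```python
# Resolve one classical crossing of D when passing to the lifted diagram
# \hat{D}: strands on the same level keep the original crossing; otherwise
# the strand of higher height is drawn on top.

def lift_crossing(height_a, height_b, a_is_over):
    """Return True iff strand a is the over strand in \\hat{D}."""
    if height_a == height_b:
        return a_is_over              # same level: keep the crossing of D
    return height_a > height_b        # different levels: higher strand on top

# A crossing between arcs of heights 2 and 0, drawn with the height-0
# strand on top in D, is redrawn with the height-2 strand on top:
print(lift_crossing(2, 0, a_is_over=False))   # True
```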
_Remark 4.2_.: If we consider the liftings \(\hat{K}_{s}\) such that \(\hat{K}_{s}(0)=\hat{K}_{s}(1)=(x,s)\), then we obtain a link of infinitely many components. Note that if a classical crossing has label \(b-a\), then it corresponds to a crossing of the infinite link \(\sqcup_{s\in\mathbb{Z}}\hat{K}_{s}\) between the two components \(\hat{K}_{a}\) and \(\hat{K}_{b}\), see Fig. 27.

Figure 23. Step 3 for knots of degree \(0\)

Figure 24. \(a\geq b\)

**Corollary 4.3**.: _Let \(K\) and \(K^{\prime}\) be knots in \(S_{g}\times S^{1}\). If \(K\) and \(K^{\prime}\) are equivalent, then the liftings \(\sqcup_{s\in\mathbb{Z}}\hat{K}_{s}\) and \(\sqcup_{s\in\mathbb{Z}}\hat{K}^{\prime}_{s}\) are equivalent in \(S_{g}\times\mathbb{R}\)._

**The construction of the diagram of \(\sqcup_{s=1}^{m}\hat{K}_{s}\) for a knot \(K\) in \(S_{g}\times S^{1}\) with degree \(0\)** consists of five steps.

**Step 1.** For a diagram \(D\) with double lines, we draw \(m\) parallels. We do not determine the over/under information yet.

**Step 2.** Twist the \(m\) parallel arcs by \(\pi/m\) on the parts corresponding to double lines as described in Fig. 28. Notice that the over/under information is still not determined.

**Step 3.** Fix a point on each component so that the \(m\) points are placed on one line transversal to all arcs, and give \(0\) to the arcs containing the fixed points.

**Step 4.** Walking along the diagrams, we number the arcs so that the number of an arc does not change when we pass through the crossings corresponding to a classical crossing of the knot \(K\), but changes by \(\pm 1\) according to the rule described in Fig. 29.

**Step 5.** We give the over/under information so that arcs with higher numbering become the over crossing; otherwise, we follow the over/under information of the original diagram.

Figure 25. \(a<b\)

Figure 26.

For example, see Fig. 30.

### Lifting of knots in \(S_{g}\times S^{1}\) with degree \(k\neq 0\)

Let \(K\) be an oriented knot in \(S_{g}\times S^{1}\) with degree \(k\). Then there exists a lifting \(\hat{K}\) to \(S_{g}\times\mathbb{R}\), but, unlike the case of knots in \(S_{g}\times S^{1}\) with degree \(0\), it is a long knot obtained as the connected sum of infinitely many copies of \(\hat{K}\cap(S_{g}\times[0,k])\). But if we consider the covering \(p_{k}\) from \(S_{g}\times S^{1}\) to \(S_{g}\times S^{1}\) defined by \(p_{k}(x,z)=(x,z^{k})\), the lifting \(\hat{K}\) to \(S_{g}\times S^{1}\) becomes a knot in \(S_{g}\times S^{1}\) of degree \(1\) or \(-1\) (cf. Corollary 4.4). The algorithm to obtain its diagram is similar to the algorithm in the previous section. Since the degree is \(k\), we need to connect some arcs on the highest (\(M\)-th) layer with arcs on the lowest (\(m\)-th) layer, although the arcs go up according to double lines. When we connect them, we add a double line on the connecting arc. Let us denote the obtained diagram by \(\hat{D}^{k}\), see Fig. 31.

Figure 28. Construction of the diagram

Figure 29. Construction of the diagram

Figure 30. Construction of the diagram of the \(3\)-fold lifting

Figure 31. Lifting for a knot of degree \(3\)

**Corollary 4.4**.: _Let \(D\) and \(D^{\prime}\) be two oriented diagrams.
If they are equivalent, then \(\hat{D}\) and \(\hat{D^{\prime}}\) are equivalent as knots in \(S_{g}\times S^{1}\) with degree \(1\) or \(-1\)._

If we consider the liftings \(\hat{K}_{s}\) such that \(\hat{K}_{s}(0)=\hat{K}_{s}(1)=(x,e^{\frac{2\pi si}{k}})\), then we obtain a link in \(S_{g}\times S^{1}\) of \(k\) components. Note that if a classical crossing has label \(b-a\), then it corresponds to a crossing of the link \(\sqcup_{s=0,\dots,k-1}\hat{K}_{s}\) between the two components \(\hat{K}_{a}\) and \(\hat{K}_{b}\), see Fig. 32.

**Corollary 4.5**.: _Let \(K\) and \(K^{\prime}\) be knots in \(S_{g}\times S^{1}\). If \(K\) and \(K^{\prime}\) are equivalent, then the liftings \(\sqcup_{s=0,\dots,k-1}\hat{K}_{s}\) and \(\sqcup_{s=0,\dots,k-1}\hat{K}^{\prime}_{s}\) are equivalent in \(S_{g}\times S^{1}\)._

**The construction of the diagram of \(\sqcup_{s=0,\dots,k-1}\hat{K}_{s}\) for a knot \(K\) in \(S_{g}\times S^{1}\) with degree \(k\)** is the same as in the case of knots in \(S_{g}\times S^{1}\) with degree \(0\), except for Step 4, which we replace by Step 4':

**Step 4'.** Walking along the diagrams, we number the arcs with the rule described in Fig. 33.

Figure 33. We number arcs as described in the figures, but we place double lines when \(i_{s}=k-1\) and \(i_{t}=0\), respectively

For example, see Fig. 34.

## 5. Liftings of knots in \(S_{g}\times S^{1}\) and cut system

For a virtual link diagram with a cut system, one can obtain a link diagram with double lines by replacing cut points by double lines as in Fig. 35. If \(D_{1}\) and \(D_{2}\) are obtained from each other by generalized Reidemeister moves, then we obtain \((D_{1},P_{1})\) and \((D_{2},P_{2})\) with standard cut systems.

**Lemma 5.1**.: _For two virtual link diagrams \((D_{1},P_{1})\) and \((D_{2},P_{2})\) with cut systems, if they are equivalent under cut point system moves, then the link diagrams \(D_{1}^{dl}\) and \(D_{2}^{dl}\) with double lines obtained from \((D_{1},P_{1})\) and \((D_{2},P_{2})\), respectively, are equivalent._

Proof.: Suppose that \((D_{2},P_{2})\) is obtained from \((D_{1},P_{1})\) by one of the oriented cut point moves. It is easy to see that if \((D_{2},P_{2})\) is obtained from \((D_{1},P_{1})\) by the move "passing" a cut point through a virtual crossing or by the cancellation move of adjacent coherent and incoherent cut points, then \(D_{1}^{dl}\) and \(D_{2}^{dl}\) are equivalent as knots in \(S_{g}\times S^{1}\). If \((D_{2},P_{2})\) is obtained from \((D_{1},P_{1})\) by the deletion of four cut points around a classical crossing, then one can show that \(D_{1}^{dl}\) and \(D_{2}^{dl}\) are equivalent as knots in \(S_{g}\times S^{1}\) as described in Fig. 36.

**Corollary 5.2**.: _For a virtual link diagram \((D,P)\) with a cut system, the obtained link diagram \(D^{dl}\) with double lines has degree \(0\)._

Proof.: By the definition of a cut system, it is obvious that the numbers of coherent and incoherent cut points are the same. Therefore, the obtained link diagram \(D^{dl}\) with double lines has degree \(0\).

**Lemma 5.3**.: _Let \(D\) be an oriented diagram of a knot in \(S_{g}\times S^{1}\) of degree \(0\). Then the \(m\)-fold lifting of \(D\) is mod \(m\) almost classical for any \(m>1\)._

Proof.: Let \(\tilde{D}^{m}\) be an oriented diagram of an \(m\)-fold lifting of \(D\). Then \(\tilde{D}^{m}\) consists of \(m\) parallels of \(D\) with twists in the places of the double lines. Let us color it by \(\mathbb{Z}_{m}\).
Fix any point on \(D\) and color the \(m\) semi-arcs of \(\tilde{D}^{m}\) corresponding to the fixed point by \(0,1,\dots,m-1\) from right to left. For convenience let us denote the coloring by \((0,1,\dots,m-1)\).

Figure 34. Construction of the diagram of the \(3\)-fold lifting of the knot in \(S_{g}\times S^{1}\) of degree \(3\)

Figure 35. Map from \((D,P)\) to a knot in \(S_{g}\times S^{1}\)

**Step 1.** In the diagram \(\tilde{D}^{m}\) there are two kinds of twists of arcs, and the arcs near the twists are colored as described in Fig. 37. That is, the order of the colors of the \(m\) parallels is preserved. Therefore, it suffices to show that the \(m\)-parallel of \(D\) is mod \(m\) Alexander colorable.

**Step 2.** Each crossing of \(D\) corresponds to \(m^{2}\) crossings as described in Fig. 38. Note that the \(i\)-th arcs from the right of the parallels corresponding to the two arcs of a crossing of \(D\) are in the same component. Now the \(m\) parallels going into the part with \(m^{2}\) crossings are colored by \((0,1,\dots,m-1)\). Then, in \(\mathbb{Z}_{m}\), the Alexander numbering is preserved when passing through the \(m^{2}\) crossings.

**Step 3.** One can show that the numberings for the arcs corresponding to the \(m^{2}\) crossings with the two entering colorings \((0,1,\dots,m-1)\) satisfy the properties in Fig. 40.

From the previous steps, the proof is completed.

**Theorem 5.4**.: _Let \(D^{dl}\) be a link diagram with double lines obtained from a virtual link diagram \((D,P)\) with a cut system. Then the \(m\)-fold lifting of \(D^{dl}\) is mod \(m\) almost classical for any \(m>1\)._

Figure 37. Proof, Step 1

Figure 38. Proof, order of parallels

Figure 39. Proof, Step 3

**Example 5.5**.: _We obtain the \(3\)-fold covering of \(D^{dl}(3_{1})\), constructed from the trefoil knot \(3_{1}\), and it must be mod \(3\) almost classical. In Fig. 41 one can find a mod \(3\) Alexander numbering._

_Remark 5.6_.: We have two liftings of virtual knots which provide mod \(m\) almost classical links. The difference is the following: in [4] the constructed lifting can be obtained from \(m\) parallels of a virtual knot such that classical crossings between different components are replaced by virtual crossings, but in our construction, classical crossings between different components remain as classical crossings.
2309.04919
Unsupervised Chunking with Hierarchical RNN
In Natural Language Processing (NLP), predicting linguistic structures, such as parsing and chunking, has mostly relied on manual annotations of syntactic structures. This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner. We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions. Our approach involves a two-stage training process: pretraining with an unsupervised parser and finetuning on downstream NLP tasks. Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, enhancing phrase F1 score by up to 6 percentage points. Further, finetuning with downstream tasks results in an additional performance improvement. Interestingly, we observe that the emergence of the chunking structure is transient during the neural model's downstream-task training. This study contributes to the advancement of unsupervised syntactic structure discovery and opens avenues for further research in linguistic theory.
Zijun Wu, Anup Anand Deshmukh, Yongkang Wu, Jimmy Lin, Lili Mou
2023-09-10T02:55:12Z
http://arxiv.org/abs/2309.04919v1
# Unsupervised Chunking with Hierarchical RNN ###### Abstract In Natural Language Processing (NLP), predicting linguistic structures, such as parsing and chunking, has mostly relied on manual annotations of syntactic structures. This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner. We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions. Our approach involves a two-stage training process: pretraining with an unsupervised parser and finetuning on downstream NLP tasks. Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, enhancing phrase F1 score by up to 6 percentage points. Further, finetuning with downstream tasks results in an additional performance improvement. Interestingly, we observe that the emergence of the chunking structure is transient during the neural model's downstream-task training. This study contributes to the advancement of unsupervised syntactic structure discovery and opens avenues for further research in linguistic theory.1 Footnote 1: This paper is a substantially extended version of an EMNLP Findings paper (Deshmukh et al., 2021). The main extension is a novel approach that improves unsupervised chunking performance in a downstream NLP task, which is an interesting result by itself. Part of the text is reused with permission from our previous work (Deshmukh et al., 2021), available at [https://aclanthology.org/2021.findings-emnlp.307.pdf](https://aclanthology.org/2021.findings-emnlp.307.pdf), licensed under CC BY 4.0. Our code and results are available at [https://github.com/MANGA-UOFA/UCHRNN](https://github.com/MANGA-UOFA/UCHRNN) ## 1 Introduction Understanding the linguistic structure of language, such as parsing and chunking, is an important research topic in NLP. While most previous work (Kudo and Matsumoto, 2001; Zhang et al., 2002; Pradhan et al., 2004) employs supervised machine learning methods to predict linguistic structures and can achieve high performance, they rely heavily on manual annotations of syntactic structures like Treebanks (Marcus et al., 1993). As a result, there has been a growing interest in unsupervised linguistic structure discovery in recent years (Kim et al., 2020; Shen et al., 2018, 2019), which is important to natural language processing (NLP) research because it sheds light on linguistic theories and can potentially benefit low-resource languages. Previous work on unsupervised syntactic structure discovery has mainly focused on unsupervised constituency parsing (Kim et al., 2020, 2019; Shen et al., 2018, 2019) and dependency parsing (Klein and Manning, 2004; Gillenwater et al., 2010; Shen et al., 2021). In this work, we address unsupervised chunking, another meaningful task for discovering syntactic structure. Unlike parsing that induces tree structures from a sentence, chunking aims to group consecutive words of a sentence in a non-hierarchical fashion; each chunk can be intuitively thought of as a phrase (Tjong Kim Sang and Buchholz, 2000; Clark, 2001). In fact, unsupervised chunking has wide applications in real-world scenarios, as understanding text fundamentally requires finding spans like noun phrases and verb phrases. 
It would benefit other NLP tasks, such as keyword extraction (Firoozeh et al., 2020), named entity recognition (Sato et al., 2017), open information extraction (Niklaus et al., 2018; Borgeaud et al., 2022; Izacard et al., 2022), and logical reasoning (Wu et al., 2023). We propose a two-layer hierarchical recurrent neural network (HRNN) to accomplish the chunking task. Our HRNN is designed to explicitly model the word-to-chunk composition by a lower-level RNN, and chunk-to-sentence composition by an upper-level RNN. We further design a trainable chunking gate that switches between lower word-level RNN and upper phrase-level RNN, which is also used for chunk prediction. We propose a two-stage training framework for HRNN without using linguistic annotations, namely, pretraining with an unsupervised parser and finetuning with a downstream NLP task. In the first stage of pretraining, we adopt the recent advances of unsupervised parsing (Kim et al., 2020, 2019), and propose a maximal left-branching heuristic to induce sensible (albeit noisy and imperfect) chunk labels from an unsupervised parser. In the second stage, we finetune HRNN chunking by feeding its upper-level RNN representations to a downstream-task network. Our intuition is that a more meaningful chunking structure is beneficial for the downstream task. Therefore, optimizing downstream-task performance can, in turn, improve HRNN chunking without using syntactically annotated labels. We conducted experiments on the CoNLL-2000 chunking dataset (Tjong Kim Sang and Buchholz, 2000). Results show that the unsupervised parser-pretrained HRNN significantly improves the best-performing unsupervised baseline, with a considerable margin of \(5\) percentage points in terms of the phrase F1 score. We then finetuned the pretrained HRNN model with three downstream text generation tasks: summarization, paraphrasing, and translation. Compared with our pretrained model, it achieves a chunking performance improvement of up to \(6\) percentage points in downstream datasets, and \(2\) percentage points on CoNLL-2000. The results suggest that our method not only bridges the gap between supervised and unsupervised chunking methods but also shows the generalizability across different downstream tasks and datasets. In our experiment, we also find an intriguing linguistic phenomenon during the finetuning stage: the neural model's emergence of linguistic structure is transient. That is, although the downstream performance consistently improves, the chunking performance improves significantly but only in the first few thousand steps. Then, it starts to decrease and eventually drops to the initial level of the finetuning stage or even lower, suggesting that linguistic structures are a convenient vehicle for a downstream task when the training begins and the model capacity is small. However, the neural network tends to discard such linguistic structures to achieve higher performance in the downstream task as the training proceeds. In summary, our main contributions are as follows: * We address the task of unsupervised chunking, and propose a hierarchical recurrent neural network (HRNN) that can explicitly model the word-to-chunk and chunk-to-sentence compositions. * We propose a two-stage training framework for HRNN, largely outperforming previous unsupervised chunking methods. * We observe the neural model's emergence of chunking structure is transient, which may inspire further study of linguistic theory. 
## 2 Approach In this section, we first introduce the proposed Hierarchical RNN (HRNN) model. Then we discuss the process of pretraining a HRNN with the induced labels from a state-of-the-art unsupervised parser. Finally, we propose to finetune the model with a downstream task to improve chunking performance. ### Hierarchical RNN We model the chunking task in the BI schema (Ramshaw and Marcus, 1995), where "B" denotes the beginning of a chunk, and "I" denotes the inside of a chunk. We observe that encoder-only RNNs or Transformers may not be suitable for the chunking task because they lack autoregressiveness, which means the prediction at a time step is unaware of previously predicted chunks. Feeding predicted chunk labels like a sequence-to-sequence model is also inadequate because a BI label only provides one-bit information and does not offer meaningful autoregressive information. To this end, we design a hierarchical RNN to model the autoregressiveness of predicted chunks. Our HRNN contains a lower-level RNN operating at the word level and an upper-level RNN operating at the chunk level. We design a gating mechanism that switches between the two RNNs in a soft manner, also serving as the chunk label. The bottom-left part of Figure 1 shows the structure of HRNN. Given a sentence with words \(\mathrm{x}^{(1)},\cdots,\mathrm{x}^{(n)}\), we first apply a pretrained Transformer encoder (Devlin et al., 2019) to obtain the contextual representations of the words that help our model to understand the previous and future context of the sentence. We denote the contextual embeddings of these words by \(\mathbf{x}^{(1)},\cdots,\mathbf{x}^{(n)}\). For a step \(t\), we predict a switching gate \(m^{(t)}\in(0,1)\) as the chunking decision.2 Footnote 2: Here, \(m^{(t)}=1\) corresponds to “B,” i.e., a new chunk, and \(m^{(t)}=0\) corresponds to “I,” i.e., inside of a chunk. \[m^{(t)}_{\text{logit}}=W\big{[}\underline{\mathbf{h}}^{(t-1)}; \overline{\mathbf{h}}^{(t-1)};\mathbf{x}^{(t)}\big{]} \tag{1}\] \[m^{(t)}=\sigma(m^{(t)}_{\text{logit}}) \tag{2}\] where \(\underline{\mathbf{h}}^{(t-1)}\) is the hidden state of the lower RNN and \(\overline{\mathbf{h}}^{(t-1)}\) is that of the upper RNN. The semicolon represents vector concatenation, and \(\sigma\) represents the sigmoid function. The switching gate is also used to control the information flow between the lower-level word RNN and the upper-level chunk RNN. In this way, it provides meaningful autoregressive information that enables HRNN to be aware of previously detected chunks. Suppose our model predicts the \(t\)th word as the beginning of a chunk, which essentially "cuts" the sequence into two parts at this step. The lower RNN and upper RNN are updated by \[\underline{\mathbf{h}}^{(t)}_{\text{cut}} =\underline{f}\big{(}\mathbf{x}^{(t)},\underline{\mathbf{h}}^{(\text{oss })}\big{)} \tag{3}\] \[\overline{\mathbf{h}}^{(t)}_{\text{cut}} =\overline{f}\big{(}\underline{\mathbf{h}}^{(t-1)},\overline{\mathbf{h}} ^{(t-1)}\big{)} \tag{4}\] where \(\underline{f}\) and \(\overline{f}\) are the transition functions of the lower and upper RNNs, respectively, both employing the hyperbolic tangent (tanh) as their activation function. In Equation (3), the lower RNN ignores its previous hidden state and starts anew from a learnable initial state \(\underline{\mathbf{h}}^{(\text{oss})}\), because a new chunk is predicted at this step. 
On the other hand, in Equation (4), the upper RNN picks up the newly formed chunk \(\underline{\mathbf{h}}^{(t-1)}\) captured by the lower RNN, and fuses it with the previous upper RNN's state \(\overline{\mathbf{h}}^{(t-1)}\).

Suppose our model predicts that the \(t\)th word is not the beginning of a chunk, i.e., "no cut" occurs at this step. The RNNs are then updated by

\[\underline{\mathbf{h}}^{(t)}_{\text{nocut}}=\underline{f}\big{(}\mathbf{x}^{(t)},\underline{\mathbf{h}}^{(t-1)}\big{)} \tag{5}\]

\[\overline{\mathbf{h}}^{(t)}_{\text{nocut}}=\overline{\mathbf{h}}^{(t-1)} \tag{6}\]

In this scenario, the lower RNN updates its hidden state with the input \(\mathbf{x}^{(t)}\) in the same manner as a typical RNN, whereas the upper RNN remains idle because no chunk is formed.

Figure 1: An overview of our chunking induction method. On the left, the pretraining of HRNN using the induced labels from the Compound PCFG parser is shown. On the right, the HRNN is finetuned with text generation, specifically a summarization task in this example. A weight matrix is created from the switching gate’s values and then inserted into the Transformer’s encoder-decoder attention modules.

The "cut" and "no cut" cases can be unified by

\[\overline{\mathbf{h}}^{(t)}=m^{(t)}\overline{\mathbf{h}}^{(t)}_{\text{cut}}+(1-m^{(t)})\overline{\mathbf{h}}^{(t)}_{\text{nocut}} \tag{7}\]

\[\underline{\mathbf{h}}^{(t)}=m^{(t)}\underline{\mathbf{h}}^{(t)}_{\text{cut}}+(1-m^{(t)})\underline{\mathbf{h}}^{(t)}_{\text{nocut}} \tag{8}\]

In our work, we keep \(m^{(t)}\) as a real number and adopt a soft gating mechanism that fuses "cut" and "no cut" in a soft manner. This is because chunking, by its nature, may be ambiguous, and our soft gating mechanism can better handle such ambiguity.
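To make Equations (1)-(8) concrete, here is a minimal NumPy sketch of a single HRNN step. It is our own illustration, not the authors' implementation: the dimension `d`, the random weights, and the single-matrix tanh cells are assumptions made for readability (in the actual model the inputs are contextual Transformer embeddings and the parameters are learned).

```python
# One soft-gated HRNN step, Eqs. (1)-(8), in plain NumPy.
import numpy as np

d = 8                                               # hidden/input size (illustrative)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(1, 3 * d))          # gate weights, Eq. (1)
W_lo = rng.normal(scale=0.1, size=(d, 2 * d))       # lower (word-level) RNN
W_up = rng.normal(scale=0.1, size=(d, 2 * d))       # upper (chunk-level) RNN
h_sos = rng.normal(scale=0.1, size=d)               # learnable initial state

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def f_lo(x, h):                                     # lower-level tanh cell
    return np.tanh(W_lo @ np.concatenate([x, h]))

def f_up(h_chunk, h):                               # upper-level tanh cell
    return np.tanh(W_up @ np.concatenate([h_chunk, h]))

def hrnn_step(x_t, h_lo, h_up):
    m_logit = (W @ np.concatenate([h_lo, h_up, x_t])).item()   # Eq. (1)
    m = sigmoid(m_logit)                                       # Eq. (2)
    h_lo_cut, h_up_cut = f_lo(x_t, h_sos), f_up(h_lo, h_up)    # Eqs. (3)-(4)
    h_lo_no,  h_up_no  = f_lo(x_t, h_lo), h_up                 # Eqs. (5)-(6)
    h_up_new = m * h_up_cut + (1 - m) * h_up_no                # Eq. (7)
    h_lo_new = m * h_lo_cut + (1 - m) * h_lo_no                # Eq. (8)
    return m, h_lo_new, h_up_new

# Run over a toy sequence of "contextual embeddings".
h_lo = h_up = np.zeros(d)
for x_t in rng.normal(size=(5, d)):
    m, h_lo, h_up = hrnn_step(x_t, h_lo, h_up)
```

Iterating `hrnn_step` over a sentence yields, besides the states, the gate sequence \(m^{(1)},\dots,m^{(n)}\) that serves as the soft chunk labels.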
Our proposed HRNN supports end-to-end learning, which can be weakly supervised by downstream tasks. Nevertheless, a randomly initialized HRNN may have difficulty modeling the desired chunking structure and suffer from the cold-start problem. We thus propose to pretrain HRNN with heuristically induced chunking labels from an unsupervised parser, which will be discussed in Section 2.2.

### Pretraining HRNN by Unsupervised Parsing

We propose to induce chunk labels from an unsupervised parser. The intuition is that the chunking structure can be considered a flattened parse tree, thus sharing some commonalities with the parsing structure. Our approach is able to take advantage of recent advances in unsupervised parsing (Kim et al., 2020, 2019). Specifically, we adopt a state-of-the-art unsupervised parser, Compound Probabilistic Context-Free Grammar (PCFG) (Kim et al., 2019), which is a tuple \(\mathcal{G}=(\mathcal{S},\mathcal{N},\mathcal{P},\Sigma,\mathcal{R})\), where \(\mathcal{S}\) is a start symbol; \(\mathcal{N}\), \(\mathcal{P}\), and \(\Sigma\) are finite sets of nonterminal, preterminal, and terminal symbols, respectively. \(\mathcal{R}\) is a finite set of rules taking one of the following forms:

\[\mathrm{S}\to\mathrm{A}\qquad\mathrm{A}\in\mathcal{N} \tag{9}\]

\[\mathrm{A}\to\mathrm{B}\ \mathrm{C}\qquad\mathrm{B},\mathrm{C}\in\mathcal{N}\cup\mathcal{P} \tag{10}\]

\[\mathrm{T}\to\mathrm{w}\qquad\mathrm{T}\in\mathcal{P},\mathrm{w}\in\Sigma \tag{11}\]

where \(\mathrm{S}\to\mathrm{A}\) is the start of a sentence and \(\mathrm{T}\to\mathrm{w}\) indicates the generation of a word. \(\mathrm{A}\to\mathrm{B}\ \mathrm{C}\) models the bifurcations of a binary constituency tree, where a constituent node is not explicitly associated with a type (e.g., noun phrase) in our setting. In addition, the model maintains a continuous random vector at the sentence level, which serves as the prior of the PCFG. To train Compound PCFG, the maximum likelihood of the text is utilized, and a Viterbi-like algorithm marginalizes the PCFG. Amortized variational inference is employed to handle the continuous distribution from the sentence-level vector. We refer readers to Kim et al. (2019) for details.

We aim to induce chunk labels from the state-of-the-art unsupervised parser, Compound PCFG. Given a sentence, we obtain its parse tree by applying the Viterbi-like CYK algorithm (Jurafsky and Martin, 2000) to Compound PCFG. Then, we propose a simple yet effective heuristic that extracts maximal left-branching subtrees as chunks. It is known that the English language has a strong bias toward right-branching structures (Williams et al., 2018; Li et al., 2019). We observe, on the other hand, that a left-branching structure often indicates words that are closely related. Here, a left-branching subtree means that the words are grouped in the form of \(((\cdots((\mathrm{x}_{i}\mathrm{x}_{i+1})\mathrm{x}_{i+2})\cdots)\mathrm{x}_{i+n-1})\). A left-branching subtree for words \(\mathrm{x}_{i}\cdots\mathrm{x}_{i+n-1}\) is maximal if neither \(\mathrm{x}_{i-1}\mathrm{x}_{i}\cdots\mathrm{x}_{i+n-1}\) nor \(\mathrm{x}_{i}\cdots\mathrm{x}_{i+n-1}\mathrm{x}_{i+n}\) is left-branching. We extract all maximal left-branching subtrees as chunks. Our heuristic can provide unambiguous chunk labels for any sentence with any parse tree, as demonstrated by the following theorem.

**Theorem 1**.: _Given any binary parse tree, every word will belong to one and only one chunk by the maximal left-branching heuristic._

Proof.: [Existence] A single word itself is a left-branching subtree, which belongs to some maximal left-branching subtree. [Uniqueness] We will show that two different maximal left-branching subtrees \(\mathrm{s}_{1}\) and \(\mathrm{s}_{2}\) cannot overlap. Assume by way of contradiction that there exists a word \(\mathrm{x}_{i}\) in both \(\mathrm{s}_{1}\) and \(\mathrm{s}_{2}\). Then, \(\mathrm{s}_{1}\) must be a substructure of \(\mathrm{s}_{2}\) or vice versa; otherwise, the paths, root-\(\mathrm{s}_{1}\)-\(\mathrm{x}_{i}\) and root-\(\mathrm{s}_{2}\)-\(\mathrm{x}_{i}\), violate the acyclic nature of a tree. But \(\mathrm{s}_{1}\) being a subtree of \(\mathrm{s}_{2}\) (or vice versa) contradicts the maximality of \(\mathrm{s}_{1}\) and \(\mathrm{s}_{2}\).
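The heuristic admits a compact recursive implementation. The sketch below is our own formulation (the nested-tuple encoding of a binary parse tree and the toy sentence are assumptions of the example); it also illustrates the partition property of Theorem 1:

```python
# Maximal left-branching chunk extraction from a binary parse tree,
# encoded as nested 2-tuples with words at the leaves.

def is_leaf(t):
    return not isinstance(t, tuple)

def leaves(t):
    return [t] if is_leaf(t) else leaves(t[0]) + leaves(t[1])

def is_left_branching(t):
    # a left comb: a single word, or (left-branching subtree, word)
    return is_leaf(t) or (is_leaf(t[1]) and is_left_branching(t[0]))

def chunks(t):
    """Split the leaves into maximal left-branching chunks.  Because any
    subtree meeting both children of a non-left-branching node must be the
    node itself, recursing on the two children is safe, and every word ends
    up in exactly one chunk (Theorem 1)."""
    if is_left_branching(t):
        return [leaves(t)]
    return chunks(t[0]) + chunks(t[1])

tree = ((("the", "big"), "dog"), ("barked", ("very", "loudly")))
print(chunks(tree))
# [['the', 'big', 'dog'], ['barked'], ['very', 'loudly']]
```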
### Finetuning Hierarchical RNN with Downstream Tasks

We propose to finetune the HRNN in a weakly supervised manner to learn better chunking structures from a downstream task. Our intuition is that accomplishing a downstream NLP task requires understanding meaningful semantic units/chunks of a sentence (Wu et al., 2023), and that optimizing the downstream task may benefit the chunk prediction. Specifically, we consider text generation tasks, namely, summarization (Rush et al., 2015; Liu et al., 2022; Liu, 2019), paraphrasing (Liu et al., 2020; Li et al., 2019), and translation (Bojar et al., 2014; Liu et al., 2020). Although the HRNN may be finetuned with classification tasks as well, we found in our preliminary experiments that classification tasks yield marginal improvement, probably because the classification training signal is too sparse (one label per sample) in comparison to generation tasks. Therefore, we only consider finetuning HRNN with text generation tasks in this work.

We propose a decoder network whose attention is based on the upper-level RNN (chunk representations). This is accomplished by performing conventional attention over the RNN's states but reweighing it by whether a chunk is predicted at the step. Since the HRNN's gate is differentiable, we can train it end to end with the downstream tasks.

We first argue that the chunk representation is indeed given by the upper-level RNN where a "cut" is predicted. Suppose we have a chunk \(\mathrm{x}^{(i)}\cdots\mathrm{x}^{(i+k)}\), i.e., HRNN predicts two consecutive "cuts" (ends of chunks) at the time steps \(i-1\) and \(i+k\). According to our design (Section 2.1), the lower-level RNN initializes its hidden state with \(\underline{\mathbf{h}}^{(\text{sos})}\) when \(t=i-1\), and updates its hidden state \(\underline{\mathbf{h}}^{(t)}\) with the input words \(\mathrm{x}^{(i)}\cdots\mathrm{x}^{(i+k)}\). In the meantime, the upper-level RNN is idle for time steps \(i\) to \(i+k-1\), but picks up \(\underline{\mathbf{h}}^{(i+k)}\) at step \(i+k\). This follows a conventional RNN, demonstrating that the chunk representation is given by the upper-level RNN at a "cut" step.

Now consider the scenarios with "soft" chunks, since we keep the chunking decision \(m^{(t)}\) as a real number in our HRNN. When the "cut" strength is high (\(m^{(t)}\) close to 1), the step is more likely to be a chunk, and we would like to attend to this step more from the decoder. Conversely, if the "cut" strength is low, it should be attended to less. This can be achieved by applying conventional encoder-decoder attention to every step, but reweighing it by the "cut" strength. For one attention head, we form a query vector \(\mathbf{q}^{(j)}\) for the \(j\)th decoder step and a key vector \(\mathbf{k}^{(i)}\) for the \(i\)th encoder step. Here, the key vector \(\mathbf{k}^{(i)}\) is given by the upper-level RNN's state \(\overline{\mathbf{h}}^{(i)}\). We reweigh the attention by first computing an unnormalized measure

\[\widetilde{\alpha}\Big{(}\mathbf{q}^{(j)},\mathbf{k}^{(i)},m^{(i)}_{\text{logit}}\Big{)}=\exp\Big{(}\frac{(\mathbf{q}^{(j)})^{\top}\mathbf{k}^{(i)}}{\sqrt{d}}+\gamma m^{(i)}_{\text{logit}}\Big{)} \tag{12}\]

where \(\gamma\) is a coefficient hyperparameter. Then, we normalize it to attention probabilities as

\[\alpha^{(j,i)}=\frac{\widetilde{\alpha}\Big{(}\mathbf{q}^{(j)},\mathbf{k}^{(i)},m^{(i)}_{\text{logit}}\Big{)}}{\sum_{i^{\prime}}\widetilde{\alpha}\Big{(}\mathbf{q}^{(j)},\mathbf{k}^{(i^{\prime})},m^{(i^{\prime})}_{\text{logit}}\Big{)}} \tag{13}\]

Empirically, we find that an end-to-end trained reweighted attention head tends to ignore \(m^{(i)}_{\text{logit}}\) and make it close to 0. Consequently, the gating probability \(m^{(i)}\) will be close to 0.5 unanimously, and the HRNN will be in a situation of neither "cut" nor "no cut". We alleviate this by proposing an auxiliary training objective that pushes \(m^{(i)}\) to either 0 or 1. We introduce a chunking strength hyperparameter \(\kappa\%\) that aims to control the granularity of chunking. A detailed analysis of this hyperparameter is presented in Section 3.4.3.
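For reference, the reweighted attention of Equations (12) and (13) can be sketched in a few lines of NumPy. This is an illustration only, not the authors' implementation; the dimensions and the random inputs are made up, and \(\gamma=0.1\) matches the value reported later in Table 1:

```python
# Chunk-reweighted encoder-decoder attention, Eqs. (12)-(13), for one head.
import numpy as np

def reweighted_attention(Q, K, m_logit, gamma=0.1):
    """Q: (T_dec, d) decoder queries; K: (T_enc, d) upper-RNN states used as
    keys; m_logit: (T_enc,) gate logits.  Returns a (T_dec, T_enc) matrix."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + gamma * m_logit     # Eq. (12), in log space
    scores -= scores.max(axis=-1, keepdims=True)        # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)            # Eq. (13)

rng = np.random.default_rng(0)
A = reweighted_attention(rng.normal(size=(3, 8)),       # 3 decoder steps
                         rng.normal(size=(5, 8)),       # 5 encoder steps
                         rng.normal(size=5))
assert np.allclose(A.sum(axis=-1), 1.0)
```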
The final values for \(\kappa\%\) are decided based on validation performance. We then define the loss at the \(i\)th encoder step as

\[L^{(i)}=\begin{cases}(m^{(i)}-1)^{2},&\text{if $m^{(i)}$ is among the top-$\kappa$\% gate values,}\\ (m^{(i)}-0)^{2},&\text{otherwise.}\end{cases} \tag{14}\]

The overall auxiliary loss is \(L_{\text{aux}}=\sum_{i}L^{(i)}\), and it is combined with the end-to-end downstream task by \(L_{\text{task}}+\eta L_{\text{aux}}\), where \(\eta\) is a hyperparameter balancing the two training objectives. In this way, we are able to learn a meaningful chunking gate in the downstream NLP task.
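A sketch of the auxiliary loss of Equation (14) is given below (our own NumPy phrasing; the per-sentence top-\(\kappa\%\) selection and the example gate values are assumptions made for illustration):

```python
# Auxiliary gate loss, Eq. (14): push the top-kappa fraction of gates
# towards 1 ("cut") and all remaining gates towards 0 ("no cut").
import numpy as np

def auxiliary_loss(m, kappa=0.5):
    """m: (T_enc,) gate values in (0, 1).  Returns L_aux = sum_i L^(i)."""
    k = max(1, int(round(kappa * len(m))))
    top = np.argsort(m)[::-1][:k]          # indices of the largest gates
    target = np.zeros_like(m)
    target[top] = 1.0                      # top-kappa% pushed towards 1
    return float(((m - target) ** 2).sum())

m = np.array([0.52, 0.49, 0.55, 0.50, 0.48])
L_aux = auxiliary_loss(m, kappa=0.4)       # 2 of 5 gates pushed towards 1
```

In training, this quantity would be scaled by \(\eta\) and added to the downstream-task loss, as stated above.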
## 3 Experiments

### Datasets and Metrics

We adopted a widely used chunking dataset, CoNLL-2000 (Tjong Kim Sang and Buchholz, 2000), to evaluate the general chunking performance of a model. CoNLL-2000 contains 8K training, 1K validation, and 2K test samples. Each sample is labeled with the BIO schema, where "B" indicates the beginning of a chunk, "I" indicates inside a chunk, and "O" indicates outside a chunk (mainly punctuation). We used the BI schema and ignored the "O" tokens. In this work, however, we addressed chunking in the unsupervised setting, and thus we did not use the groundtruth chunk labels in the training set, but only the training sentences. Groundtruth chunk labels are used for validation3 and test purposes.

Footnote 3: Certain researchers argue that labeled validation sets cannot be used for unsupervised grammar induction and resort to the test set for hyperparameter tuning and model selection (Shi et al., 2020), which violates the training-validation-test framework of machine learning. We agree that it is important to investigate unsupervised validation measures for grammar induction, but without an established one, we use labeled data for validation purposes in this work. Notice that this fundamentally differs from supervised training, as we only use labels for collecting statistics of model performance.

We finetuned the HRNN model with several downstream text generation tasks, whose datasets are introduced as follows.

* Gigaword (Rush et al., 2015): A widely used article-headline dataset for text summarization with about 3.8M training, 20K validation, and 2K test samples. We randomly picked 2K validation samples (the same size as the test set) for more efficient validation.

* MNLI-Gen: MNLI (Williams et al., 2018) is a massive dataset for the natural language inference task. We used the entailment subset, generating the entailed hypothesis based on the premise. Since the test set of MNLI is private, we adopted the matched section of the original validation set as our test set, while the mismatched section served as our validation set. The resulting entailment generation dataset contains 131K training, 3.5K validation, and another 3.5K test samples.

* WMT-14 (Bojar et al., 2014): A multilingual dataset used for machine translation. We used the English-to-German subset, having nearly 4.5M training, 3K validation, and 2.7K test samples.

Table 1 shows the statistics of the datasets we have introduced. In addition to the general chunking performance on the CoNLL-2000 corpus, we are interested in the in-domain chunking performance after finetuning with a downstream task, whose test set, unfortunately, does not have human annotations of chunking labels. Therefore, we resorted to NLTK (Bird and Loper, 2004), which has an off-the-shelf chunker that achieves high performance. We used NLTK to chunk the source sentences of the downstream datasets, and NLTK's outputs (also with the BI schema) served as the in-domain chunking labels.

Regarding the evaluation metrics, we adopted the standard phrase F1 score and tagging accuracy. In our implementation, they are realized by the CoNLL-2000 evaluation script (Tjong Kim Sang and Buchholz, 2000).

### Implementation Details

We utilized an encoder-only BERT model and two encoder-decoder models, BART and mBART, in our experiments. For the BERT model (Devlin et al., 2019), we adopted its base version, which has \(12\) Transformer layers with \(768\) hidden dimensions and \(110\) million parameters in total. For the BART model (Lewis et al., 2020), we also used its base version, which has \(6\) encoder layers and another \(6\) decoder layers. It has 768 dimensions and 139 million parameters in total. The mBART model (Liu et al., 2020) has 610 million parameters, both the encoder and decoder having \(12\) layers and \(1024\) dimensions. We only used the encoder modules of BART and mBART for the chunking pretraining, which are then combined with their respective pretrained decoders for finetuning on downstream tasks. Our HRNN has the same dimension as the underlying Transformer models.

We used the Adam optimizer (Kingma and Ba, 2015) for both pretraining and finetuning. The optimizer hyperparameters were set as \(\beta_{1}=0.9\), \(\beta_{2}=0.999\). We applied a learning rate warm-up for the first \(10\) percent of total steps, followed by a linear decay. The detailed hyperparameters for both stages can be found in Table 1.

\begin{table} \begin{tabular}{l|l|c c c|c c c c c} \hline \hline Task & Dataset & \(\#\)Train & \(\#\)Dev & \(\#\)Test & Batch size & Learning rate & \(\eta\) & \(\gamma\) & top-\(\kappa\)\% \\ \hline Chunking & CoNLL-2000 & 8K & 1K & 2K & 32 & 5e-5 & – & – & – \\ \hline \multirow{3}{*}{Downstream} & Gigaword & 3.8M & 2K & 2K & 32 & 5e-5 & 0.1 & 0.1 & 50\% \\ & MNLI-Gen & 131K & 3.5K & 3.5K & 32 & 4e-5 & 0.1 & 0.1 & 60\% \\ \cline{1-1} & WMT-14 (En-De) & 4.5M & 3K & 2.7K & 32 & 1e-5 & 0.1 & 0.1 & 50\% \\ \hline \hline \end{tabular} \end{table} Table 1: Details on the datasets and implementation. Symbol \(\eta\) represents the balance between the two training objectives, while \(\gamma\) is the hyperparameter for the reweighting operation, as defined in Equation 12. The term top-\(\kappa\) denotes the chunking strength heuristic.

The pretraining phase generally concludes within three epochs, whereas the finetuning process requires less than one epoch. Although one-epoch finetuning may appear unreasonable at first glance, we observed that Transformers are very powerful models, which may accomplish a downstream task while bypassing syntactic information. To let meaningful syntax emerge in the downstream task, we have to consider early stopping, and only very few iterations are needed. Details will be presented in Section 3.4.3.
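Before turning to the results, we illustrate how the metrics of Section 3.1 behave. The sketch below is ours, not the official CoNLL-2000 evaluation script (which also distinguishes chunk types); it decodes BI tags into spans, assuming every sequence starts with "B", and computes an untyped phrase F1:

```python
# Decode BI tags into half-open chunk spans and compare exact spans.

def bi_to_spans(tags):
    """[(start, end), ...] spans from a BI tag sequence starting with 'B'."""
    spans, start = [], 0
    for i, tag in enumerate(tags[1:], start=1):
        if tag == "B":                    # a new chunk begins here
            spans.append((start, i))
            start = i
    spans.append((start, len(tags)))      # close the final chunk
    return spans

def phrase_f1(pred_tags, gold_tags):
    pred, gold = set(bi_to_spans(pred_tags)), set(bi_to_spans(gold_tags))
    tp = len(pred & gold)
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall) if tp else 0.0

# 1 of 3 predicted spans matches 1 of 2 gold spans -> F1 = 0.4
print(phrase_f1(list("BIBBI"), list("BIBII")))
```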
### Main Results

Table 2 presents the main results of our approach. We used CoNLL-2000 as the main dataset to evaluate the general chunking performance. To measure in-domain performance on Gigaword, MNLI-Gen, and WMT-14, we used the NLTK supervised chunker (Bird and Loper, 2004) as pseudo-groundtruth labels. For comparison of inducing chunks from unsupervised parsing, we include Compound PCFG (discussed in Section 2.2, Kim et al., 2019) and another recent unsupervised parser based on the features of a pretrained language model, proposed by Kim et al. (2020). The latter, called an LM Chunker in Table 2, thresholds the BERT (Devlin et al., 2019) similarity of consecutive words for chunking. We observe from Lines 5 and 6 that the LM-based unsupervised chunker is worse than the Compound PCFG in phrase F1 (42.05 vs. 62.89) and tag accuracy (68.74 vs. 81.64). Therefore, we use the chunking labels induced by Compound PCFG to pretrain our HRNN model in all experiments.

We also include several traditional unsupervised chunking methods as baselines: a pointwise mutual information (PMI) chunker (Van de Cruys, 2011) cuts two consecutive words if their PMI score is below a threshold; Baum-Welch HMM (Baggenstoss, 2001) applies expectation-maximization to a hidden Markov model (HMM, Rabiner, 1989). These methods perform significantly worse than recent advances in unsupervised syntactic structure discovery. However, we notice that both the Compound PCFG and LM chunkers (Lines 5-6 in Table 2) show a significant decrease in performance when transferred to other datasets. For example, when the two chunkers are transferred from CoNLL-2000 to Gigaword, their tag accuracy drops by 25 percent and 37 percent, respectively. By contrast, the PMI chunker has a 7 percent drop, while the Baum-Welch HMM even has a 1 percent increase (Lines 3-4). The results indicate that the traditional unsupervised chunking methods are more stable, although they achieve lower performance on CoNLL-2000. Overall, unsupervised chunkers may suffer from the problem of poor transferability in addition to less satisfactory chunking performance in general.

Regarding our method, we pretrained HRNN with several Transformer encoders:

* BERT (Devlin et al., 2019), an encoder-only Transformer model used in our preliminary conference paper (Deshmukh et al., 2021).

* BART (Lewis et al., 2020), chosen due to its pretrained decoder, which allows HRNN to be finetuned on the Gigaword and MNLI-Gen datasets for generation tasks.

* mBART (Liu et al., 2020), a multilingual variant of BART, which is needed for the translation task.

\begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{**CoNLL-2000**} & \multicolumn{2}{c}{**Gigaword**} & \multicolumn{2}{c}{**MNLI-Gen**} & \multicolumn{2}{c}{**WMT-14**} \\ & & Phrase F1 & Tag Acc. & Phrase F1 & Tag Acc. & Phrase F1 & Tag Acc. & Phrase F1 & Tag Acc.
\\ \hline \multicolumn{10}{l}{**Supervised Methods**} \\ 1 & NLTK-tagger-chunker & 83.71 & 89.51 & – & – & – & – & – & – \\ 2 & Supervised HMM & 87.68 & 93.99 & 87.54 & 94.37 & 86.81 & 93.40 & 86.49 & 93.50 \\ \hline \multicolumn{10}{l}{**Unsupervised Methods**} \\ 3 & PMI Chunker & 35.64 & 64.50 & 37.44 & 65.18 & 44.37 & 67.14 & 45.70 & 67.20 \\ 4 & Baum–Welch HMM & 25.04 & 58.93 & 38.35 & 54.75 & 29.87 & 32.06 & 34.87 & 45.00 \\ 5 & LM Chunker & 42.05 & 68.74 & 29.31 & 51.34 & 25.85 & 49.57 & 32.84 & 50.72 \\ 6 & Compound PCFG Chunker & 62.89 & 81.64 & 25.69 & 51.70 & 32.96 & 55.73 & 52.16 & 73.73 \\ \hline \multicolumn{10}{l}{**Our Pretrain Methods**} \\ 7 & HRNN w/ BERT & 68.12 & 83.90 & 64.20 & 82.61 & 66.38 & 81.60 & 67.27 & 82.70 \\ 8 & HRNN w/ BART & 68.57 & 84.04 & 64.60 & 82.24 & 68.81 & 82.04 & 72.09 & 84.87 \\ 9 & HRNN w/ mBART & 68.65 & 84.42 & 67.08 & 83.49 & 67.25 & 81.35 & 71.24 & 84.55 \\ \hline \multicolumn{10}{l}{**Our Finetune Methods**} \\ 10 & HRNN w/ BART (Gigaword) & **70.83** & **85.16** & **70.66\({}^{\dagger}\)** & **85.15\({}^{\dagger}\)** & **73.96** & **84.93** & **75.87** & **86.66** \\ 11 & HRNN w/ BART (MNLI-Gen) & 69.68 & 84.72 & 67.15 & 83.36 & 70.79\({}^{\dagger}\) & 83.11\({}^{\dagger}\) & 73.26 & 85.43 \\ 12 & HRNN w/ mBART (WMT-14) & 69.77 & 84.69 & 67.91 & 83.58 & 69.93 & 82.75 & 71.85\({}^{\dagger}\) & 84.69\({}^{\dagger}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Main results. The numbers with \({}^{\dagger}\) indicate that the model is tested on the same dataset as the finetuning task.

We see from Lines 7-9 that all the pretrained HRNN variants achieve an improvement of more than \(5\) percentage points in phrase F1 over the Compound PCFG chunker on the CoNLL-2000 dataset. The BART-based and mBART-based HRNNs are better than the BERT-based variant not only on the standard CoNLL-2000 dataset but also in transferring their learned chunking knowledge to the three downstream datasets. The large margins between the Compound PCFG chunker and our pretrained HRNN model imply that HRNN can indeed smooth out the noise of the heuristics while capturing meaningful chunking patterns. More importantly, our method drastically improves the transferability of unsupervised chunkers (Lines 7-9 vs. Lines 3-6).

We then finetuned the HRNN on downstream tasks for syntactic structure discovery. We observed an improved chunking performance across all datasets, including Gigaword, MNLI-Gen, and WMT-14. As shown in Lines 11-12, finetuning HRNN with MNLI-Gen and WMT-14 leads to a 1-2 percent increase in phrase F1 on the in-domain datasets (70.79 vs. 68.81; 71.85 vs. 71.24) as well as on the general CoNLL-2000 corpus (69.68 vs. 68.57; 69.77 vs. 68.65). Notably, finetuning with the summarization task yields the most significant improvements, as indicated in Line 10. It achieves nearly an increase of \(6\) percentage points in phrase F1 on the in-domain Gigaword dataset (70.66 vs. 64.60), as well as increases of \(5\) and \(4\) percentage points on the out-of-domain MNLI-Gen and WMT-14 datasets (73.96 vs. 68.81; 75.87 vs. 72.09). Furthermore, it brings another 2.2-point improvement on the CoNLL-2000 dataset. The superiority of chunk induction from the summarization task may be due to the inherent connection between summarization and chunking. This is in line with a summarization method known as phrasal extractive summarization (Muresan et al., 2001; Xu and Zhao, 2022), where the salient information (often key phrases) is extracted to generate summarized text.
### Detailed Analyses

In this subsection, we conduct comprehensive experiments to verify our proposed approach. We first show an analysis of the proposed heuristic for chunk label induction from unsupervised parsers for pretraining HRNN. Then, we show an ablation study on the proposed HRNN structure. Finally, we analyze the effectiveness of finetuning HRNN on downstream text generation tasks.

#### 3.4.1 Analysis of the Left-Branching Chunking Heuristic

We provide a detailed analysis of our maximal left-branching chunking heuristic, where we used the CoNLL-2000 dataset as a testbed because we pretrained all HRNN variants on CoNLL-2000. Without loss of generality, we conducted this analysis with the BERT-based HRNN. Table 3 shows a comparison between different heuristics for inducing chunks from parse trees. We observe that our maximal left-branching heuristic outperforms right branching by 20 points in phrase F1. Additionally, we introduce a thresholding approach that only extracts one-word and two-word chunks, since most of the groundtruth chunks contain one or two words. The performance of such a heuristic is better than right branching but worse than our left-branching approach. Our findings support our hypothesis that right branching is a common structure in English and does not suggest meaningful chunks. Conversely, left branching identifies closely related words and is an effective heuristic for inducing chunks from parse trees.

\begin{table} \begin{tabular}{l c c} \hline \hline **Chunking Heuristics** & **Phrase F1** & **Tag Acc.** \\ \hline 1-word \& 2-word chunks & 55.72 & 75.14 \\ Maximal right branching & 40.83 & 69.28 \\ Maximal left branching & **62.89** & **81.64** \\ \hline \hline \end{tabular} \end{table} Table 3: Analysis of the heuristics for inducing chunking labels from Compound PCFG.

#### 3.4.2 Ablation Study on the HRNN Architecture

Table 4 presents an ablation study on the HRNN model. Here, we only consider pretraining HRNN from unsupervised parsers, while excluding the phase of finetuning on downstream tasks. This allows us to eliminate distracting factors in this ablation study. We experimented with various neural architectures for the chunker, which learns from Compound PCFG with the maximal left-branching heuristic. We see all variants outperform the Compound PCFG chunker (Lines 2-6 vs. Line 1 in Table 4), demonstrating that machine learning models can effectively smooth out noise and mitigate the imperfection of chunk heuristics. When we compare the traditional RNN with HRNN, the latter's switching gate provides autoregressive information. Our HRNN performs better with a controlled number of layers (Line 4 vs. Line 6). Furthermore, the soft HRNN surpasses the hard HRNN (Line 5 vs. Line 6), indicating its enhanced capability to manage chunk ambiguity and provide superior autoregressive information. Additionally, integrating HRNN with a Transformer encoder such as BERT (Line 2 vs. Line 6) is beneficial, as it can capture global contextual information.

\begin{table} \begin{tabular}{l l c c} \hline \hline **\#** & **Method** & **Phrase F1** & **Tag Acc.** \\ \hline 1 & Compound PCFG chunker & 62.89 & 81.64 \\ 2 & \(\longrightarrow\) HRNN only & 65.01 & 82.22 \\ 3 & \(\longrightarrow\) BERT+1-layer RNN & 67.19 & 83.86 \\ 4 & \(\longrightarrow\) BERT+2-layer RNN & 66.53 & 83.34 \\ 5 & \(\longrightarrow\) BERT+HRNN (hard) & 67.90 & 83.80 \\ 6 & \(\longrightarrow\) BERT+HRNN & **68.12** & **83.90** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study of the HRNN model.
#### 3.4.3 Analysis of Finetuning

Chunking structure discovery with finetuning HRNN on downstream tasks requires several treatments: 1) we need to pretrain the HRNN, whose architecture has been analyzed in Section 3.4.2, and 2) we need to apply an auxiliary loss that pushes HRNN's predicted chunking probability away from \(0.5\), as presented in Equation (14). We analyze the effect of pretraining and the auxiliary loss in this part.

Figure 2 presents the learning curves on the downstream dataset Gigaword. As seen, the HRNN with both pretraining and the auxiliary loss is the only configuration that achieves an improved chunking performance (green curves with dots), whereas the other settings (without pretraining and/or without the auxiliary loss) yield worse performance. This ablation study underscores the significance of both the pretraining strategy and the auxiliary loss for HRNN's chunking structure discovery in downstream tasks.

Figure 2: Ablation study of our HRNN finetuning method. We plot the learning curves in terms of phrasal F1 (left) and tag accuracy (right).

Moreover, the auxiliary loss provides the opportunity to control the chunking granularity. This can be shown by varying the chunking strength hyperparameter \(\kappa\) in Equation (14) from \(0\) to \(1\) in increments of \(0.1\). The results, presented in Figure 3, confirm that \(\kappa\) is indeed able to control the ratio of predicted cuts (i.e., the chunking granularity), although the percentages may not match exactly, as there could be other factors contributing to the model's behavior. Interestingly, closely aligning the predicted and groundtruth chunking ratios does not lead to better performance. The optimal \(\kappa\) value turns out to be \(0.5\) in our experiments, even though the predicted ratio then slightly exceeds the groundtruth ratio.

Figure 3: This analysis examines the effect of the chunking strength hyperparameter \(\kappa\) on the actual chunking ratio. The dashed line represents the groundtruth chunking ratio in the dataset. The purple lines indicate that the finetuned chunking performance outperforms the pretrained model. Additionally, the color depth and width of these lines indicate the ranking of the chunking performance achieved, with deeper and wider lines indicating better performance.

We also notice an intriguing phenomenon in Figure 2: all models converge to a phrase F1 score of approximately \(40\). By manually checking their predictions, we realize that the models eventually predict a cut for almost all steps, i.e., every word becomes a chunk by itself. Even with our full approach (green curves), the chunking performance is only improved within limited training steps, after which it drops as well. This raises a curious question about the dynamics of the chunking structure during the process of downstream-task finetuning. To further investigate this, we show in Figure 4 the learning curves of both the chunking performance and the downstream-task performance, evaluated by the Rouge1 score (Lin, 2004). The learning curves show that the emergence of meaningful chunking structures is a transient phenomenon occurring only during the early stage of downstream-task learning. The chunking performance drops after several thousand iterations, while the downstream-task performance continues to increase monotonically. We hypothesize that chunking information (or linguistic information in general) is a meaningful abstraction of human languages.
When our exquisitely designed model has not mastered the downstream NLP task well (e.g., at an early stage), it will produce structures akin to linguistically defined chunks, which serve as a convenient intermediate step for the downstream task. As training proceeds, however, such linguistically meaningful structures are abandoned by the model, as deep neural networks are known to achieve high performance with end-to-end unexplainable features (Wu et al., 2023). Our work sheds light on future research in linguistics. #### 3.4.4 Case Studies In Figure 5, we show examples of chunking structures generated by Compound PCFG, pretrained HRNN, and finetuned HRNN in the summarization task. Figure 4: Learning curves of finetuning HRNN with text generation tasks. Phrasal F1 and accuracy reflect the chunking performance, while the Rouge1 score measures the text generation performance. The gray dashed lines mark the steps when HRNN achieves the highest chunking performance. Figure 5: Examples of chunking structures generated by Compound PCFG, pretrained HRNN, and finetuned HRNN are provided. Differences are highlighted in thick blue. Groundtruth chunks are also displayed in green for reference. As seen, our pretrained HRNN produces longer noun phrases than Compound PCFG, such as _Carlyle Group_ and _a merchant banking concern_ in Example (c), and finetuning HRNN can further detect longer noun phrases--such as _certain estate-tax returns_ in Example (a)--which are usually considered as a chunk by groundtruth labels. Furthermore, our approach is able to correct nonsensical chunks produced by Compound PCFG. In Example (a), the two-word chunk _grants relief_ is split into _grants_ and _relief_ as they are not in the same semantic unit. In Example (c), the chunk _In January he_ is first split into _In January_ and _he_, which makes more sense. Then the finetuned HRNN further splits _In January_ into _In_ and _January_, which agree more with the groundtruth chunks. In general, the pretrained HRNN not only effectively learns the chunking patterns from Compound PCFG, but also can smooth out its noise. The finetuned HRNN model further improves the pretrained model and achieves better performance of unsupervised chunking. ## 4 Related Work This section briefly discusses recent advances in chunking and unsupervised grammar induction approaches, followed by recent literature on weakly supervised neuro-symbolic methods. ### The Chunking Task Chunking is a prominent task in natural language processing with various applications, such as named entity recognition (Wang et al., 2013), syntactic analysis (Zhou et al., 2012), and fine-grained sentiment analysis (Yang and Cardie, 2013). Chunking is also referred to as shallow parsing, where the goal is to separate a text into non-overlapping chunks that are syntactically connected (Abney, 1991). The CoNLL-2000 shared task (Tjong Kim Sang and Buchholz, 2000), derived from the Penn Treebank (Marcus et al., 1993), has provided a platform for system comparison of English chunking research. In the early stage of research, various classification models, such as support vector machines (SVMs, Kudo and Matsumoto, 2001) and the Winnow algorithm (Zhang et al., 2002), are applied to accomplish the chunking task. Additionally, sequential labeling models are extensively used for chunking; examples include the hidden Markov model (HMM) (Molina and Pla, 2002) and conditional random fields (CRFs, Sha and Pereira, 2003; McDonald et al., 2005). Sun et al. 
(2008) adapt latent-dynamic conditional random fields to the chunking task, which leads to further performance improvement. Zhou et al. (2012) propose to use chunk-level features for the chunking task, outperforming most previous research based on word-level features (Kudo and Matsumoto, 2001; Zhang et al., 2002; Molina and Pla, 2002; McDonald et al., 2005; Sun et al., 2008). Inspired by this, we design a hierarchical RNN that captures chunk features that are formed dynamically. We further leverage contextual word representations from a language model and enhance performance through weak supervision from downstream tasks.

### Unsupervised Syntactic Structure Induction

Detecting syntactic structures without supervision has attracted significant interest due to its applications in scenarios with limited resources (Clark, 2001; Klein, 2005). Unsupervised parsing is one of the most widely explored areas in syntactic structure detection. It can be broadly divided into three areas: unsupervised constituency parsing (Klein and Manning, 2002; Haghighi and Klein, 2006; Reichart and Rappoport, 2008; Clark, 2001), which aims to derive a constituency parse tree for a given sentence; unsupervised dependency parsing (Seginer, 2007; Paskin, 2001), which focuses on identifying the grammatical relationships between words in a sentence; and unsupervised joint parsing (Klein and Manning, 2004; Shen et al., 2018), which seeks to combine both constituency and dependency parsing into a unified framework. Among these settings, unsupervised constituency parsing has been explored the most. Klein and Manning (2002) propose an approach that utilizes an expectation-maximization (EM) algorithm to model the constituency of each span. Haghighi and Klein (2006) propose to employ a probabilistic context-free grammar (PCFG) based on manually designed features. Kim et al. (2019) introduce the Compound PCFG for unsupervised parsing. On the other hand, methods have been developed to obtain tagged parse trees without supervision. Reichart and Rappoport (2008) perform clustering by syntactic features to obtain tagged parse trees. Clark (2001) clusters sequences of tags based on their local mutual information to build tagged parse trees. Overall, these early studies mostly rely on heuristics, linguistic knowledge, and manually designed features. Unsupervised parsing has attracted renewed interest in the deep learning era. Socher et al. (2011) propose a recursive autoencoder that constructs a binary tree by greedily minimizing the reconstruction loss. Other methods for learning recursive tree structures in an unsupervised manner include CYK-style marginalization (Drozdov et al., 2019) and Gumbel-softmax (Choi et al., 2018). Yogatama et al. (2017) present a shift-reduce parser learned by reinforcement learning to improve performance on a downstream task such as textual entailment. However, evidence shows that the above approaches do not yield linguistically plausible trees (Williams et al., 2018). Li et al. (2019) propose to transfer knowledge between multiple unsupervised parsers to improve their performance. Our pretraining method draws inspiration from such knowledge transfer, but we introduce insightful heuristics that generate chunk labels from unsupervised parsers. Additionally, we have developed a hierarchical recurrent neural network (HRNN) that learns from the chunk labels generated by our heuristics, and can further be finetuned with the downstream tasks for performance improvement.
Regarding unsupervised chunking, the speech processing community has addressed the task as a component of speech detection systems and utilized acoustic information to determine the chunks (Pate and Goldwater, 2011; Barrett et al., 2018). Our work only relies on textual information and views unsupervised chunking as a new task of inducing syntactic structure. ### Weakly Supervised Neuro-Symbolic Methods In recent years, neuro-symbolic approaches have gained significant attention from the AI and NLP communities for interpreting deep learning models (Liu et al., 2023). Many of these approaches treat neuro-symbolic methods as combining deep learning and symbolic representations. For example, Lei et al. (2016) extract key phrases that rationalize the neural model's predictions on classification tasks. Liu et al. (2018) model text classification as a sequential decision process, and propose a policy network that is able to skip reading and make decisions. Liang et al. (2017) and Mou et al. (2017) perform SQL-like execution based on input text for semantic parsing. However, training a neuro-symbolic model is more complicated than an end-to-end neural model, because the explainable discrete latent structure lacks ground truth for direct supervision. As a result, these approaches are typically trained in a weakly supervised manner, where the training signals only exist at the end of the entire model. Reinforcement learning or its relaxation, such as Gumbel-softmax (Jang et al., 2017), is often used in the training procedure due to the non-differentiable latent structure. Recently, Wu et al. (2023) propose neural fuzzy logic to explain the predictions of the natural language inference model, which is trained end-to-end. Our HRNN model uses switching gates as the intermediary of the weak supervision from text generation for chunk induction, which allows it to be trained with different downstream datasets. ## 5 Limitations and Future Work One limitation of this paper is that we have only applied our approach to the English language, as our HRNN chunker is pretrained by unsupervised parsers (which are mainly built for English) and our maximal left-branching heuristic for chunk induction utilizes the right-branching prior of English. To generalize unsupervised grammar induction to other languages, we may develop models that incorporate language-specific prior, e.g., left branching for Japanese (Martin, 2003). Another viable path is to translate a sentence into English, perform unsupervised syntactic structure prediction on the English translation, and map the structure back to the original language. Extending this, multilingual unsupervised grammar induction is also a promising research direction. Our long-term vision is to build an interpretable NLP system, trained in an end-to-end fashion, that can detect semantic units (chunks), determine their relationships, perform task-specific reasoning, and ultimately achieve high performance for the task of interest (Liu et al., 2023). The main obstacle to this ambitious goal is the difficulty of end-to-end training: this paper performs chunking during text generation, but the obtained chunking structure is later abandoned by the model; another study of ours performs chunk detection and alignment by heuristics, after which a differentiable fuzzy logic mechanism is designed to perform logical reasoning (Wu et al., 2023). 
An important future direction is to put these components together and make traditional NLP pipelines end-to-end learnable (Sutskever et al., 2014). We anticipate that, with proper development, such a neuro-pipeline approach will achieve high performance in end tasks, similar to black-box neural networks, while excelling in terms of explainability and interoperability.

## 6 Conclusion

In this paper, we propose a framework to induce chunks in an unsupervised manner as a form of syntactic structure discovery. Specifically, we propose a hierarchical RNN (HRNN) with switching gates to learn from the chunk labels induced by a state-of-the-art unsupervised parser. The HRNN is further finetuned with various downstream text generation tasks to achieve better chunking results. The experiments show that our approach largely improves unsupervised chunking. Additionally, we provide comprehensive analyses of our HRNN, the pretraining stage using unsupervised parser-induced labels, as well as the finetuning stage in a downstream task. We also point out limitations and discuss future directions.
2307.16641
Rheology of Pseudomonas fluorescens biofilms: from experiments to predictive DPD mesoscopic modelling
Bacterial biofilms mechanically behave as viscoelastic media consisting of micron-sized bacteria crosslinked to a self-produced network of extracellular polymeric substances (EPS) embedded in water. Structural principles for numerical modelling aim at describing mesoscopic viscoelasticity without losing detail on the underlying interactions existing in wide regimes of deformation under hydrodynamic stress. Here we approach the computational challenge to model bacterial biofilms for predictive mechanics in silico under variable stress conditions. Up-to-date models are not entirely satisfactory due to the plethora of parameters required to make them function under the effects of stress. As guided by the structural depiction gained in a previous work with Pseudomonas fluorescens (Jara et al. Front. Microbiol. (2021)), we propose a mechanical modelling by means of Dissipative Particle Dynamics (DPD), which captures the essentials of the topological and compositional interactions between bacteria particles and crosslinked EPS-embedding under imposed shear. The P. fluorescens biofilms have been modelled under mechanical stress mimicking shear stresses as undergone in vitro. The predictive capacity for mechanical features in DPD-simulated biofilms has been investigated by varying the externally imposed field of shear strain at variable amplitude and frequency. The parametric map of essential biofilm ingredients has been explored by making the rheological responses emerge from conservative mesoscopic interactions and frictional dissipation at the underlying microscale. The proposed coarse-grained DPD simulation qualitatively catches the rheology of the P. fluorescens biofilm over several decades of dynamic scaling.
Jose Martín-Roca, Valentino Bianco, Francisco Alarcon, Ajay K. Monnappa, Paolo Natale, Francisco Monroy, Belen Orgaz, Ivan López-Montero, Chantal Valeriani
2023-07-31T13:24:35Z
http://arxiv.org/abs/2307.16641v1
# Rheology of _Pseudomonas fluorescens_ biofilms: from experiments to predictive DPD mesoscopic modelling.

###### Abstract

Bacterial biofilms mechanically behave as viscoelastic media consisting of micron-sized bacteria crosslinked to a self-produced network of extracellular polymeric substances (EPS) embedded in water. Structural principles for numerical modelling aim at describing mesoscopic viscoelasticity without losing detail on the underlying interactions existing in wide regimes of deformation under hydrodynamic stress. Here we approach the computational challenge to model bacterial biofilms for predictive mechanics _in silico_ under variable stress conditions. Up-to-date models are not entirely satisfactory due to the plethora of parameters required to make them function under the effects of stress. As guided by the structural depiction gained in a previous work with _Pseudomonas fluorescens_ (Jara et al. Front. Microbiol. (2021)), we propose a mechanical modelling by means of Dissipative Particle Dynamics (DPD), which captures the essentials of the topological and compositional interactions between bacteria particles and crosslinked EPS-embedding under imposed shear. The _P. fluorescens_ biofilms have been modelled under mechanical stress mimicking shear stresses as undergone _in vitro_. The predictive capacity for mechanical features in DPD-simulated biofilms has been investigated by varying the externally imposed field of shear strain at variable amplitude and frequency. The parametric map of essential biofilm ingredients has been explored by making the rheological responses emerge from conservative mesoscopic interactions and frictional dissipation at the underlying microscale. The proposed coarse-grained DPD simulation qualitatively catches the rheology of the _P. fluorescens_ biofilm over several decades of dynamic scaling.

## I Introduction

Bacterial biofilms are synergistic colonies growing on solid surfaces in contact with a complex aqueous phase that provides physical protection against external threats [1; 2]. Because biofilm colonies are mechanically more resilient than the isolated bacteria, they become more capable of nursing resistant phenotypes [3; 4]. A collective efficiency emerges from the microbial colony to engender antibiotic resistances and compromise substrate removal, hence causing persistent biofilm infections [3; 4]. Surface biofilm formation is indeed a major health issue [5; 6] and an industrial challenge worldwide [7; 8; 9]. The mesoscopic structure of biofilms consists of a flexible meshwork made of bacteria and extracellular polymer substance (EPS), which is composed of polysaccharides, proteins and extracellular DNA embedded in aqueous suspension. Depending on the bacterial species, the amount of thriving cells with respect to the EPS varies from one tenth to one fourth of the total biofilm mass [10]. Biofilms are considered to be complex materials: highly crosslinked EPS composites with a heterogeneous viscoelasticity acting as a dynamic scaffold for the dwelling cells [11; 12]. Recently, biofilm viscoelasticity has been considered for probing the mechanical interlinking between EPS components and dwelling bacteria [12; 13; 14; 15; 16; 17].
As a complexity emerging from compositional and topological configurations in the EPS assembly at multiple length scales [18; 19; 13], biofilms exhibit both elastic and fluid-like rheological responses upon shear, arising from their complex hydrodynamic dependence on the deformation amplitudes and frequencies [14; 15; 16; 17]. Relevant structural interactions such as bacterial entanglement, protein binding, and EPS cross-linking contribute to form the transient stress-bearing structure that makes the particular biofilms' viscoelasticity emerge from the mesoscale [20; 21; 22]. Recent structural studies have shown that EPS constituents, such as secreted polysaccharides, DNA chains and other protein filaments, biochemically dictate matrix architecture as well as biofilm stability under low mechanical load in the linear regime [20; 21; 22]. However, there is a lack of understanding of the cross-linking, dynamical rearrangements, and mesoscopic reorganizations under large shear forces mimicking real-world perturbations. Rheological techniques for probing large-amplitude shear deformation have allowed recording rheological signatures at a variety of stresses across a wide configurational landscape [12; 14; 16; 17; 23], and with strong history fidelity (i.e., presenting viscoelastic memory) [15]. The biofilms grown under different mechanical conditions are indeed known to exhibit viscoelastic variations corresponding to environmental adaptations [24; 12]. Furthermore, EPS viscoelasticity is known to confer protection against chemical and physical threats [25; 26]. Whilst ample literature alludes to the biomolecular role of the EPS secreted under stressed biofilm growth [24; 25; 12], systematic investigations of the rheological response with a biophysical focus are still lacking. To investigate _in silico_ a synthetic biofilm's rheology that further improves our predictive ability from structural coarse-grain standpoints, we invoke numerical modelling from mesoscopic physics, approached alongside experimental rheology data obtained in physicochemically controlled biofilms [15]. Physical biofilm modelling started more than one decade ago in terms of continuous media approaches [27; 28]. They were first described via phase-field (continuum) models in which the disperse phase accounts for polymerized biofilm components (EPS and bacteria), and the continuous phase for the aqueous ambiance (containing nutrients and many other small molecules) [29]. This model has been successfully used to recapitulate essentials of biofilm growth, particularly unraveling mechanical features in agarose hydrogels containing bacteria [30]. A further approach consisted in implementing hybrid discrete-continuum models to couple bacterial growth and biomass spreading over biofilm roughness [31; 32]. Numerical mesoscale approaches have also been proposed, particularly Dissipative Particle Dynamics (DPD [33; 34]), and the Immersed Boundary-based Method (IBM) [35; 36]. IBM relies on strong assumptions such as: a) considering biofilm steadiness during the simulation time scale [37]; b) imposing biofilm elasticity as linear springs connecting individual bacteria [38]; c) capturing the biofilm viscosity via constitutive stresses. Encoding these constraints has allowed using IBM for studying biofilm growth and its force-guided deformation [39; 40; 41; 42], establishing connections between mechanical stress and biofilm strain [43; 44; 45]. The DPD methods are comparatively less stringent in mechanical terms than IBM approaches [46].
Indeed, DPD is a mass and momentum conserving algorithm that allows modelling many-body systems embedded in a viscous fluid [33; 34; 47], even when geometries are complex [48; 49]. _Quasi-continuous DPD-approaches_ have indeed shown predictive capacity in describing certain rheological features of _Staphylococcus epidermidis_ biofilms grown on a rheometer plate [50; 51]. By running DPD-simulations over long time and length scales [46], the method has also been used to simulate biofilm formation on post-coated surfaces [52], and the growth of two-dimensional biofilms under external flow [53]. More recently, some of us worked on a combined numerical and experimental study aimed at unravelling how hydrodynamic stress globally affects the mechanical features of _Pseudomonas fluorescens_ biofilms [12]. The _P. fluorescens_ biofilms were grown either under static or shaken conditions. Rheological measurements combined with confocal microscopy were further performed on the real biofilms. The experimental results showed the cultured _P. fluorescens_ biofilms as capable of adapting to environmental conditions by tailoring their matrix microstructure to mechanical stress [12]. By considering prospective DPD simulations allowing us to define the mesoscopic framework recapitulating principal ingredients of real biofilm behavior (polymer density, crosslinking degree, number of bacteria, etc.), those previous results suggested that DPD-based rheological modelling has forecasting potential. In particular, the DPD-simulations showed how much structural change was caused by an increased number of crosslinks in the EPS matrix [12]. In exploring the mechanical DPD-landscape as led by the living bacteria, however, the parameter space of the DPD method was not fully mapped, and more importantly, the viscoelastic behaviour of the biofilm under the applied shear frequencies was not studied in the time domain. Beyond our previous work [12], we here exploit novel coarse-grained DPD-modelling to numerically map rheological biofilm features in mechano-structural landscapes exploring nonlinear dynamic stress responses. The parameter space of biofilm viscoelasticity has now been mapped varying both mesh topology and structural compositions as represented by bonding interactions, such as the number of crosslinks between EPS polymers, the number of embedded bacteria, and the biofilm-swelling amount of water molecules interacting along the DPD field. In order to check for dynamic (conservative/dissipative) features as structurally connected with the underlying biofilm mechanics, the rheological DPD-response has now been explored in the time domain under variable strain spanning a broad range of shear amplitudes (both in linear and nonlinear regimes). The viscoelasticity predicted by the DPD-method has been further subject to experimental validation by measuring _in situ_ responses of real _P. fluorescens_ biofilms grown either under mild static conditions or intense shear stimuli. As a novel piece of retrospective physical forecasting on biofilm dynamics, our experimental findings with _P. fluorescens_ biofilms have been discussed in light of the numerical outcomes obtained from the new DPD-simulations. The paper is organized as follows. We first describe the formal DPD-framework, its numerical set-up, and the validating experimental methods.
The Results' section comprises an ample set of numerical DPD simulations mapping a broad rheological space at variable stress in two differentiated crosslinking topologies, either tightly (compact) or sparsely (homogeneous) crosslinked. The simulation results are then discussed in light of the experimental evidence accumulated on _P. fluorescens_ biofilm rheology [12], including new viscoelasticity measurements obtained at variable frequency under extreme conditions of growing, either statically unstressed or under shaking stresses. Finally, we summarize the conclusions.

## II Numerical and experimental methods

We first present the novel mesoscopic DPD-method used to simulate _in silico_ the differently formed _P. fluorescens_ biofilms as subject to variable shear stress spanning from _quasi_-static up to shaken conditions mimicking biofilm formation under hydrodynamic stress. The synthetic biofilms are prepared in a periodic boundary configuration consisting of a colony of monodisperse bacteria immersed in a simulation box containing randomly distributed polymer and solvent. We also describe the experimental rationale designed for rheological measurements in biological biofilms in retrospective correspondence with the simulations on synthetic biofilms.

### Mesoscopic DPD-simulations

A given biofilm is modelled _in silico_ as a mesoscopic system of interacting beads recapitulating the three different components: bacteria, polymers and water. First, having built a polymer matrix as a mesh of rigid bonds with a fixed crosslinking, we randomly insert monodisperse-sized bacteria and finally hydrate the entire system with water molecules. Whilst Raos _et al._[54] dispersed the single bacteria as filler spherical particles, we build here rod-shaped bacteria by aggregating 440 simulation beads, each one with diameter \(\sigma_{b}\), brought together via harmonic spring interactions,

\[U_{b}^{\rm bond}=\frac{K_{b}}{2}(r_{i,j}-\sigma_{b})^{2}\qquad, \tag{1}\]

where \(r_{i,j}\) is the distance between bonded beads \(i\) and \(j\), and \(K_{b}\) the harmonic coupling constant. The bacterium body is formed as a spherocylinder whose central part is shaped as an empty cylinder of length \(l_{b}/\sigma_{b}=6\), while the extremes of the bacteria are shaped as spherical caps of radius \(R_{ext}/\sigma_{b}=4\). Both the central and extreme parts have an external radius \(R_{ext}/\sigma_{b}=4\) and internal radius \(R_{int}/\sigma_{b}=2\), i.e., the bacterium membrane is formed by three layers of particles. These rigid bacteria are immersed in a flexible mesh made of polymers (EPS) hydrated with water molecules. Water is represented by \(N_{\rm solv}\) bead particles of type \(s\), whereas the EPS matrix consists of linear chains of \(l\) beads bound via a harmonic potential:

\[U_{p}^{\rm bond}=\frac{K_{p}}{2}(r_{i,j}-\sigma_{p})^{2}\qquad, \tag{2}\]

where \(r_{i,j}\) is the distance between two consecutive beads \(i\) and \(j\), \(\sigma_{p}\) the equilibrium distance, and \(K_{p}\) the coupling constant. The polymer beads of type \(p\) can crosslink with each other and with neighbouring bacteria (bacterium-polymer crosslinks). These crosslinking (CL) interactions have also been represented as harmonic potentials as established in Eqs. (1-2).
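As a concrete reading of Eqs. (1-2), the following is a minimal Python sketch of the total harmonic bond energy; the function name and array layout are illustrative assumptions, not part of the actual simulation code.

```python
import numpy as np

def harmonic_bond_energy(positions, bonds, K, sigma):
    """Total bond energy U = sum over bonded pairs of (K/2) * (r_ij - sigma)^2.

    positions : (N, 3) array of bead coordinates
    bonds     : (M, 2) integer array of bonded bead pairs (i, j)
    K         : harmonic coupling constant (K_b or K_p; 30 in internal units here)
    sigma     : equilibrium bond length (sigma_b or sigma_p; 0.5 in internal units)
    """
    rij = positions[bonds[:, 0]] - positions[bonds[:, 1]]
    r = np.linalg.norm(rij, axis=1)            # bond lengths r_ij
    return 0.5 * K * np.sum((r - sigma) ** 2)  # Eqs. (1)-(2) summed over bonds
```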
Using the above structural approach, all the pairwise interactions are encoded inside the mesoscopic simulation box, namely, polymer-polymer (pp), polymer-bacterium (pb), polymer-solvent (ps), bacterium-bacterium (bb), bacterium-solvent (bs), solvent-solvent (ss). Within the current DPD simulation schema, the total force between two particles \(i\) and \(j\) consists of three contributions: conservative, dissipative and random. The sum of the three DPD components captures long-range correlations induced by hydrodynamic interactions and also accounts for thermal fluctuations. The net force between two particles \(i\) and \(j\) (of type \(\alpha\) and \(\beta\)) can be expressed as the sum of the conservative force \(\vec{F}_{i,j}^{C}\), plus the dissipative force \(\vec{F}_{i,j}^{D}\) and the random force \(\vec{F}_{i,j}^{R}\). The conservative force corresponds to \(\vec{F}_{i,j}^{C}=B_{\alpha,\beta}\,w(r)\,\hat{r}_{i,j}\), for \(r<r_{c}\), where \(r_{c}\) is a cut-off distance beyond which all these terms vanish; \(\hat{r}_{i,j}\) is the unit vector along the direction of \(\vec{r}_{i}-\vec{r}_{j}\), with \(\vec{r}_{i}\) and \(\vec{r}_{j}\) the positions of particles \(i\) and \(j\), respectively; \(B_{\alpha,\beta}\) is the amplitude of the conservative force between particles of type \(\alpha\) and \(\beta\). The dissipative force is represented as \(\vec{F}_{i,j}^{D}=-\gamma\,w^{2}(r)\,(\hat{r}_{i,j}\cdot\vec{v}_{i,j})\,\hat{r}_{i,j}\), where \(\vec{v}_{i,j}\equiv\vec{v}_{i}-\vec{v}_{j}\) is the difference between the velocity \(\vec{v}_{i}\) of particle \(i\) and the velocity \(\vec{v}_{j}\) of particle \(j\). Here, \(\gamma\) is the friction coefficient. The random force can be computed as \(\vec{F}_{i,j}^{R}=w(r)\left(\frac{2k_{B}T\gamma}{dt}\right)^{1/2}\Theta\,\hat{r}_{i,j}\), with \(w(r)=1-r/r_{c}\) a weighting factor varying from 0 to 1. \(\Theta\) is a Gaussian random noise with zero mean and unit variance; \(dt\) is the integration time step, \(k_{B}\) the Boltzmann constant, and \(T\) the absolute temperature. The random force essentially balances the microscopic Newtonian friction against the fluctuating impulses of the thermal energy transferred from the environment. By using internal units we set \(r_{c}=1\) for all DPD interactions, \(\sigma_{b}=\sigma_{p}=0.5\), \(\gamma=4.5\), \(K_{b}=K_{p}=30\). The time step is set to \(dt=0.05\), although we also test our results against \(dt=0.005\). To convert dimensionless units into physical units, we consider \(k_{B}T=4.11\times 10^{-21}\) J as the characteristic energy scale, and we set the length scale to the longest dimension of one _P. fluorescens_ bacterium, approximately \(\sim 1.5\,\mu\)m. Since the size of one bacterium is about 1.5 \(\mu\)m, and one bacterium is about 10 beads long, the diameter of one bead (\(\sigma_{b}\)) corresponds to about 100 nm. Having assumed \(\sigma_{b}=\sigma_{p}\), the size of a single polymer in the experiments (about 100 nm) would correspond to a single bead. For this reason, in our simulations we assume that the polymer chains inserted in the system represent a network of connected polymers, comparable in size with the bacteria. Moreover, since the mass of a single bacterium is \(\sim 10^{-15}\) Kg, the mass of a single particle is set to \(m_{u}\sim 2.3\times 10^{-18}\) Kg. The time scale is derived accordingly as \(\tau_{\rm intrinsic}=\sqrt{m_{u}l_{u}^{2}/k_{B}T}\sim 5\times 10^{-6}\) s.
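To fix ideas, the following is a minimal Python sketch of the net DPD force on a pair of beads, transcribing the three contributions defined above; the function name and defaults are illustrative (the production runs described below use a LAMMPS integrator, not this code).

```python
import numpy as np

def dpd_pair_force(ri, rj, vi, vj, B, gamma=4.5, kBT=1.0, dt=0.05, rc=1.0, rng=None):
    """Net DPD force F^C + F^D + F^R exerted on particle i by particle j."""
    rng = rng or np.random.default_rng()
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= rc:                                  # all terms vanish beyond the cut-off
        return np.zeros(3)
    rhat = rij / r                               # unit vector along r_i - r_j
    w = 1.0 - r / rc                             # weighting factor w(r)
    f_c = B * w * rhat                                        # conservative
    f_d = -gamma * w**2 * np.dot(rhat, vi - vj) * rhat        # dissipative
    theta = rng.standard_normal()                # zero mean, unit variance
    f_r = w * np.sqrt(2.0 * kBT * gamma / dt) * theta * rhat  # random
    return f_c + f_d + f_r
```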
### Synthetic biofilm: simulation scheme We prepare configurations of bulky \(32\times 32\times 32\) biofilm boxes which are evolved in time via a constant-volume DPD dynamics performed using an open source LAMMPS-integrator package imposing periodic boundary conditions [55]. To study rheological features the parameter space is explored as follows: a number of bacteria \(N_{b}\) ranging from 100 to 200; a number of polymers \(N_{p}\) from 80 to 200 (of polymer lengths \(L_{p}=100\)); a number of solvent particles \(N_{s}\) from 20,000 to 50,000. All parameters \(N_{p}\), \(N_{s}\) and \(N_{b}\) have been chosen to fulfill the DPD condition for the density [33; 56]; this is \((N_{p}+N_{b}+N_{s})/(32\times 32\times 32)>3\). To guarantee bacteria and polymers being properly hydrated, we choose the amplitudes of the solvent interactions smaller than any others. In particular, we choose \(B_{s,s}=B_{s,b}=B_{s,p}=25\) and \(B_{b,b}=B_{p,p}=B_{p,b}=30\). When varying the water content, to study the effect on the rheology when changing the biofilm solvation, we accordingly changed \(N_{b}\) and \(N_{p}\) to always keep the overall system's density larger than 3. In order to understand whether our polymer suspension is in a dilute or semi-dilute regime, we compute the average polymer concentration of our simulations and relate it to the overlap concentration. We estimate the polymer overlap concentration as \(c^{*}\sim N_{p}/L_{p}^{3}\), where \(N_{p}\) is the number of monomers in one polymer and \(L_{p}\) the polymer length. In a theta-solvent, we can approximate \(L_{p}\sim N_{p}\), leading to \(c^{*}\sim 1/N_{p}^{2}=0.0001\) (being \(N_{p}\sim 100\)). Given that we insert about 100 polymers, each one of length 100, the average polymer concentration is about 0.3, that is above the overlap concentration. Thus, we are dealing with a semi-dilute polymer system. To prepare initial configurations, we randomly locate \(N_{b}\) bacteria and \(N_{p}\) polymers in the presence of\(N_{s}\) water molecules and then equilibrate without polymer cross-links (CL) for \(\sim 8\times 10^{6}\) time steps. Next, we turn on the crosslinks within the polymer network. For any choice of \(N_{b}\) and \(N_{p}\), we form a fixed number of crosslinks ranging from 2,300 up to 5,400. To create the network of crosslinked polymers and bacteria, we randomly place \(N_{\rm CL}\) newly formed harmonic bonds: 1) between \(b\) and \(p\) particles i.e., bacterium-polymer crosslinks, and 2) between two \(p\) particles belonging or not to the same polymer chain i.e., polymer-polymer crosslinks. To avoid the formation of artificial aggregates of \(p\) particles, we forbid crosslinks between nearest-neighbour beads belonging to the same polymer chain. Thus, we choose on the one hand to inhibit crosslinks between the ten closest neighbouring particles belonging to the same chain. The resulting homogeneous topology is referred to as \(T_{\rm A}\) (with a probability for each chain monomer to crosslink \(p_{A}=1/10\)). On the other hand, we choose to allow crosslinks between particles that are more than three neighbors apart. We refer to the resulting polymer compact topology as \(T_{B}\) (with a three-fold higher crosslinking probability than \(T_{A}\) i.e., \(p_{B}=1/3>p_{A}\)). Unless specified otherwise all the presented results will refer to the topology \(T_{A}\), considered to be sufficiently open to allow for bacterial reorganizations. 
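The random wiring of the crosslink network just described can be sketched as follows; the function and variable names are our own, and the per-particle caps on crosslinks (discussed in the next paragraph) are omitted for brevity.

```python
import numpy as np

def place_crosslinks(n_cl, chain_id, chain_pos, min_sep, rng=None):
    """Randomly draw n_cl crosslinks among polymer beads.

    chain_id : per-bead chain index; chain_pos : per-bead position along its chain.
    min_sep  : exclusion window along the same chain (10 for T_A, 3 for T_B).
    """
    rng = rng or np.random.default_rng()
    n_beads = len(chain_id)
    links = set()
    while len(links) < n_cl:
        i, j = rng.integers(0, n_beads, size=2)
        if i == j:
            continue
        if chain_id[i] == chain_id[j] and abs(chain_pos[i] - chain_pos[j]) <= min_sep:
            continue                       # too close along the same chain
        links.add((min(i, j), max(i, j)))  # store each new harmonic bond once
    return sorted(links)
```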
To prevent excess formation of crosslinks per particle, which would result in a globule-like cluster of particles, we assume that any particle of type \(b\) can form at most one crosslink, while any \(p\) particle can form at most two crosslinks. After crosslink formation, a second equilibration run of \(\sim 10^{6}\) time steps is performed to relax the polymer-bacteria network. Figure 1 shows an example of an equilibrated configuration corresponding to the homogeneous \(T_{A}\) topology, considered undeformed (in the absence of shear stress).

### Canonical \(T_{\rm A}\) topology: pair distribution function

As expected, the homogeneous \(T_{\rm A}\)-biofilm is characterised by a homogeneous distribution of bacteria within a weakly crosslinked matrix, as characterized by the radial distribution function \(g_{b-b}(r)\) computed on the bacteria's centers of mass (Fig. 2a).

Figure 2: (a) Radial pair distribution function for the center of mass of bacteria for three sets of parameters: CL=0, \(N_{b}=120\), \(N_{p}=80\), \(L_{p}=100\), \(N_{s}=60000\) (blue); CL=2400, \(N_{b}=184\), \(N_{p}=80\), \(L_{p}=100\), \(N_{s}=3000\) (green) and CL=4000, \(N_{b}=120\), \(N_{p}=80\), \(L_{p}=100\), \(N_{s}=60000\) (red). (b) Probability distribution function for the number of bacteria in box slices of size \(L/4\) (system parameters as in panel (a)).

All the radial distribution plots exhibit a marked peak corresponding to nearest-neighbor correlations at short distances \(r/\sigma\sim 5\), while approaching \(g_{b-b}(r)\sim 1\) at longer distances, as expected in homogeneous liquids. Regardless of the number of bacteria \(N_{b}\), the number of polymers \(N_{p}\), the number of crosslinks CL and the level of hydration \(N_{s}\), all curves representing the \(g_{b-b}(r)\) show exactly the same behaviour (differently colored curves in Fig. 2a). As representative configurations, the blue and red curves reported in Fig. 2a refer to systems with different CL numbers, whereas the green and blue curves refer to systems that differ in \(N_{b}\), CL and \(N_{s}\). In order to test the occurrence of bacterial aggregation in the initial sample, we look for the appearance of phase separation (which would result in clustering of bacteria). For this purpose, we compute the density distribution of the bacteria's centers of mass, shown in Fig. 2b. To perform this calculation, we divide the box volume into cubic cells of size \(L/4\) and compute the number of bacteria per cell. Finally, the distribution is computed over multiple cells taken from several independent configurations. The blue and red curves refer to systems with a lower number of bacteria, resulting in averaging mostly empty cells, while the green curve refers to a system with a larger number of bacteria, resulting in a non-vanishing average density. Independently of the number of bacteria \(N_{b}\), of polymers \(N_{p}\), of crosslinks CL and of the level of hydration \(N_{s}\), we observe a single-peak distribution. This underlines the fact that the system is locally structured but not phase separated (in agreement with the expected biofilm homogeneity in the weakly crosslinked \(T_{A}\) topology). Even though the snapshots in Figure 1 seem to suggest that biofilms are spatially inhomogeneous, neither of the two procedures (\(T_{A}\) or \(T_{B}\); data not shown) leads to a phase-separated system. Work is in progress to build a model characterised by a clear spatial inhomogeneity, based on a combination of polymers of different length, together with a biased bacterial aggregation.
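For completeness, a minimal sketch of the bacteria-bacteria pair distribution function \(g_{b-b}(r)\) from centre-of-mass coordinates in a periodic box is given below; names and binning choices are illustrative assumptions.

```python
import numpy as np

def radial_distribution(coms, box, nbins=100, rmax=None):
    """g(r) of bacterial centres of mass in a cubic periodic box of side `box`."""
    n = len(coms)
    rmax = rmax or box / 2.0
    edges = np.linspace(0.0, rmax, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(n - 1):
        d = coms[i + 1:] - coms[i]
        d -= box * np.round(d / box)           # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < rmax], bins=edges)[0]
    rho = n / box**3                           # mean number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell * (n - 1) / 2.0        # expected pair counts for an ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal
```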
For a fixed number of bacteria, polymers and water content, we have estimated the average distance between two connected beads in the polymer mesh and demonstrated that it is a decreasing function of the total number of crosslinks, as expected. The larger the total number of crosslinks, the larger the number of crosslinks between polymers, and hence the smaller the average distance between two connected beads in the polymer mesh. When the number of crosslinks is larger than 2000, the average distance between two connected beads in the polymer mesh does not vary considerably (data not shown). Throughout the rest of the work, we will only consider biofilm matrices in the \(T_{A}\) topology with a number of crosslinks larger than 2000.

### Numerical DPD-rheology

To study the rheological properties of a given model biofilm, we compute the shear modulus by monitoring the shear stress response \(\sigma_{xy}(t)\) resulting from an imposed sinusoidal deformation of the simulation box [12; 46]. We apply an external oscillatory shear deformation with time frequency \(\nu\) along the \(X\)-\(Y\) plane by changing the box size \(L\) according to

\[L(t)=L_{0}+A\sin(2\pi\nu t)\qquad, \tag{3}\]

where \(L_{0}=32\) is the initial box size and \(A\) the oscillation amplitude. According to Raos _et al._[54], the resulting stress can be calculated by fitting the \(xy\) component of the shear stress with:

\[\sigma_{xy}(t)=\sigma^{\prime}\cdot\sin(2\pi\nu t)+\sigma^{\prime\prime}\cdot\cos(2\pi\nu t) \tag{4}\]

with \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\) corresponding to the in-phase and out-of-phase components of the complex shear modulus \(G\), whose components are

\[G^{\prime}=\frac{\sigma^{\prime}}{(A/L_{0})}\qquad G^{\prime\prime}=\frac{\sigma^{\prime\prime}}{(A/L_{0})} \tag{5}\]

where \(G^{\prime}\) is the storage modulus and \(G^{\prime\prime}\) the loss modulus, respectively. Whereas the rheological properties of solids are characterized by a finite storage modulus (\(G^{\prime}>0\)), fluid viscous materials are characterized by near-zero storage (\(G^{\prime}\sim 0\)) and large losses (\(G^{\prime\prime}>0\)) [57; 58; 3]. When shearing the DPD-system, we choose to vary the amplitude \(A\) between 4 and 36, corresponding to a maximal deformation strain (\(u=A/L_{0}\)) of up to 112% of the box size. We mainly focus on data obtained for \(A=24\) (\(u=75\%\)), since the stress-strain curve behaves almost linearly nearby but departs from linearity for larger deformations. Here we find the response stronger and the noise lower when varying system parameters (\(N_{b}\), \(N_{p}\), ...). Under shearing conditions, particle velocities are remapped every time they cross periodic boundaries. Having performed preliminary numerical experiments with a simulation box of \(16\times 16\times 16\), we obtained the same rheological behaviour as with a larger system of \(32\times 32\times 32\). Therefore, we chose the larger box to study the bulk rheological features of a bacterial biofilm. We choose a period \(T\equiv\frac{2\pi}{\nu}\) ranging from \(2\times 10\) up to \(2\times 10^{5}\). At a time step of \(dt=0.01\), a period of \(T=200\) corresponds to 2000 time steps, which is within the total lifespan of the simulation runs. For the considered frequencies, ranging from \(3\times 10^{-5}\) to 0.3 (in internal units), we run the simulations completing at least 3 box-shearing oscillations.
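In practice, \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\) in Eq. (4) can be extracted by a linear least-squares fit on a sine/cosine basis, as in this hedged Python sketch (names are our own):

```python
import numpy as np

def shear_moduli(t, sigma_xy, nu, A, L0=32.0):
    """Fit sigma_xy(t) to Eq. (4) and return (G', G'') via Eq. (5)."""
    phase = 2.0 * np.pi * nu * t
    basis = np.column_stack([np.sin(phase), np.cos(phase)])
    (s1, s2), *_ = np.linalg.lstsq(basis, sigma_xy, rcond=None)
    strain = A / L0
    return s1 / strain, s2 / strain   # storage and loss moduli
```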
Once the numerical simulations of shearing have been performed in internal units, the rheological results can be converted into physical units by converting the shear stresses inside the simulation box into physical pressures in each \(XY\)-plane wherein the shear strain is constant. For the considered simulations, the conversion factor is \(N_{XY}m_{u}/(l_{u}\tau_{u})=4.1\) kPa per internal unit of pressure, as considered in each shear plane containing \(N_{XY}=32\times 32=1,024\) particles.

### Viscoelasticity regimes

As a rule of thumb on the proposed DPD-rheology, we performed simulations in synthetic biofilms with an intermediate number of particles interacting in rigid realizations under dominant chain crosslinking in the canonical \(T_{A}\) topology. Figure 3 shows typical stress-strain plots obtained for medium and high crosslinking (CL = 3600 and 5300, respectively), and for extremal values of the shear frequency. From a phenomenological point of view, the computed stresses are characterized by two regimes depending on the strain applied to the synthetic DPD-biofilms (\(u=A/L_{0}\)), with the stress response set by the prescribed degree of crosslinking in the simulated topologies. On the one hand, for the in-phase stresses we detect (Fig. 3a): a) a Hookean elastic regime at low strain below a strengthening point (\(u<u_{st}\)); here, stress and strain are linearly related until the onset of dynamic rigidization (near \(u_{st}\)); b) a nonlinear regime beyond the strengthening point (\(u>u_{st}\)); here, the system strengthens under nonlinear stress, thus becoming effectively stiffer than in the Hookean regime. Very relevantly, the higher the number of crosslinks, the stronger the resulting synthetic biofilms. As expected for rigid networks made of semi-flexible polymers with permanent crosslinks [59], the most rigid synthetic realization arises from the densest crosslinked mesh strained at the highest frequency (at short deformation period \(T=200\)). On the other hand, the out-of-phase stresses show the viscous losses as a dissipative frictional response (Fig. 3b). The presence of finite (non-zero) viscous shear stresses is only detected for the most rigid realizations with the highest density of crosslinks strained at high frequency (black symbols); no viscous stress is detected at low frequency (red symbols), nor at low density of crosslinks (blue symbols). Therefore, we identify the densely crosslinked system as a viscoelastic material. Whereas a high systemic shear rigidity makes the elastic response emerge solid-like (\(\sigma^{\prime}\sim G^{\prime}u\) with \(G^{\prime}\gg 0\)), the finite dynamic viscosity imposed by the DPD algorithm makes the frictional losses effectively emerge only at high frequency (i.e., \(\sigma^{\prime\prime}\sim G^{\prime\prime}u\), with the loss modulus \(G^{\prime\prime}=\omega\eta\) determined by the shear viscosity \(\eta\)). Otherwise, the synthetic system behaves as inviscid (\(\sigma^{\prime\prime}\sim 0\)).

### In vitro biofilm formation

The strain _P. fluorescens_ B52, originally isolated from raw milk [60], is used as a model microorganism. Overnight precultures and cultures are incubated at 20\({}^{\circ}\)C under continuous orbital shaking (80 rpm) in tubes containing 10 ml Trypticase Soy Broth (TSB, Oxoid). Cells are recovered by centrifugation at 4,000\(g\) for 10 min (Rotor SA-600; Sorvall RC-5B-Refrigerated Superspeed Centrifuge, DuPont Instruments) and washed twice with sterile medium.
Cellular suspensions at OD\({}_{600}\) are first adjusted to 0.12 (equivalent to \(10^{8}\) cfu/ml) and then diluted to start the experiments at \(10^{4}\) cfu/ml. Biofilms are grown on borosilicate glass surfaces (\(20\times 20\) cm) as adhesion substrates. Five glass plates are held vertically in the sections of a tempered glass separating chamber, provided with a lid. The whole system is heat-sterilized as a unit before aseptically introducing 2 ml of the inoculated culture medium. To check the effect of hydrodynamic stress on biofilm mechanical properties, incubation is carried out for 96 h at 20\({}^{\circ}\)C both in an orbital shaker at 80 rpm (shaken sample) and statically (non-shaken sample). For biofilm recovery, plates are aseptically withdrawn, rinsed with sterile saline to eliminate weakly attached cells, and then scraped to remove the attached biomass (cells + matrix) from both sides of the plates. For rheological measurements, the biofilm material is collected every 24 h to be directly poured onto the rheometer plate. Experiments are run in triplicate.

### Experimental rheology

The biofilm's viscoelastic response is experimentally determined in a hybrid rheometer under oscillatory shear stress-control (Discovery HR-2, TA Instruments), using a cone-plate geometry (40 mm diameter) and a Peltier element to control temperature [12]. Triplicate measurements are performed at a 1 mm gap between the Peltier surface and the cone-plate tool (TA Instruments), where a sinusoidal shear strain \(u(t)\) of amplitude \(u_{0}\) is applied at a frequency \(\omega=2\pi\nu\), i.e., \(u(t)=u_{0}\,\sin(\omega t)\). The shear deformations are considered at variable angular frequency (\(\omega=2\pi\nu\)). The lower frequencies are restricted by the extremely long readout times compromising sample stability (\(\nu_{min}=10^{-2}\) Hz). The practicable frequency window is limited from above by inertia (\(\nu_{max}=100\) Hz); higher oscillation frequencies are not usually considered, as they are affected by artifacts in a blind region dominated by inertia.

Figure 3: Rheological DPD-response of synthetic biofilms. In-phase stress \(\sigma^{\prime}\) (panel a) and out-of-phase stress \(\sigma^{\prime\prime}\) (panel b) as a function of \(A/L_{0}\) for three different biofilms characterised by 5300 crosslinks and \(T=200\) (black symbols), 5300 crosslinks and \(T=2\times 10^{5}\) (red symbols), and 3600 crosslinks and \(T=2\times 10^{5}\) (blue symbols). The straight lines underline the regime where we consider the shear response to be linear. All data refer to \(N_{b}=150\), \(N_{p}=80\) and \(N_{s}=45000\).

Measurements are performed at low strain amplitudes (typically \(A=1\%\)), for which the stress responses are found to be practically linear. The shear stress exerted by the biofilm in the linear regime is monitored as \(\sigma(t)=G^{*}u(t)\), where \(G^{*}\) is the viscoelastic modulus \(G^{*}=G^{\prime}+iG^{\prime\prime}\). The storage modulus accounts for Hookean shear rigidity (\(G^{\prime}\sim G_{0}\)), and the loss modulus for Newtonian viscous friction (\(G^{\prime\prime}=\eta\omega\); at constant shear viscosity \(\eta\)).

## III Results

Our numerical simulations focus on mechanical measurements covering linear and nonlinear regimes of predictive viscoelasticity in synthetic DPD-biofilms. As guided by the preliminary simulations performed to determine the viscoelasticity regimes (see Fig.
3), we build upon our numerical setup for near-conservative DPD-rheology by keeping fixed \(N_{b}=150\), \(N_{p}=80\), and \(N_{s}=45000\) (corresponding to a well-populated biofilm colony). Hereinafter, all data refer to a low-frequency deformation that minimises the frictional losses (at period \(T=2\times 10^{5}\)). Simulation runs were performed as a function of the shear amplitude \(u=A/L_{0}\), for different numbers of polymer-polymer or polymer-bacteria crosslinks, with topology either \(T_{A}\) or \(T_{B}\). The DPD-realizations performed here correspond to relatively lower degrees of crosslinking than considered above in Fig. 3. They should capture the fuzzy structure expected for realistic EPS-networks, in which the embedded particles are able to explore an ample configurational space even under relatively small amplitudes of the externally applied shear field.

### Synthetic mechanical response under increasing shear deformation: influence of biofilm topology

In order to characterize the parameter space of dynamical responses to shear corresponding to the considered crosslinking topologies (\(T_{A}\) and \(T_{B}\)), we compute the mechanical stress as a function of the shear strain for biofilms prepared as in the numerical methods section (keeping fixed \(N_{b}=150\), \(N_{p}=80\), \(T=2\times 10^{5}\), \(l=100\) and the total number of crosslinks \(CL_{pp}+CL_{pb}=1,600\)). The results from these DPD-simulations are reported in Fig. 4 (symbols corresponding to systemically different DPD-realizations). The elastic (in-phase) stress shows a Hookean limit of linear response at low shear deformations (\(u=u_{C}\sim 50\%\)), followed by a nonlinear trend towards higher stresses at larger deformations (\(u\gg 75\%\)). Differences appear when looking at the magnitude of \(G^{\prime}\) if considering biofilms containing crosslinks between particles within the same polymer chain. If they are more than three neighbors apart, as in the compact \(T_{B}\) topology, the biofilm is always more rigid (Fig. 4; hollow symbols) than when crosslinks occur only between particles that are at least ten neighbors apart, i.e., the homogeneous topology \(T_{A}\) (solid symbols). The performed DPD-simulations show that crosslinks between particles that are closer along the same polymer chain lead to stiffer biofilms: this might be the consequence of aggregation induced in the polymer chains. Under shear strain, the data become clearly grouped according to the studied topologies; these are: \(T_{A}\)) the structurally homogeneous \(T_{A}\)-topology, showing low stress and a broad linear regime under deformation; \(T_{B}\)) the structurally compact \(T_{B}\)-topology, much stiffer, hence responding with higher stresses and ampler non-linearity. We superpose parabolic fits as straight lines to indicate the amplitude range where we can consider the quadratic shear response as limited by the leading Hookean component. The viscous (out-of-phase) stresses remain practically vanishing in these settings, which behave practically frictionless (\(\sigma^{\prime\prime}\sim 0\); data not shown). In both topologies \(T_{A}\) and \(T_{B}\) sheared under large-amplitude deformation (\(u\geq u_{st}\)), the current DPD-simulations evidence nonlinear strengthening without increasing frictional losses, as due to conservative elongational chain ordering templated between the permanent crosslinks of a rigid meshwork. Differently from our previous experimental results with _P.
fluorescens_ biofilms [12], and unlike rheological biofilm data reporting on nonlinear softening, e.g., in _S. mutans_, etc. (see Ref. [15] for a review), the permanent crosslinking considered by our DPD-simulations predicts biofilm hardening, as expected for polymer chains becoming rigidly ordered under stress. Such a class of strengthening stresses has been observed in modified biofilms composed of variants of _P. aeruginosa_ able to overproduce rigid EPS.

Figure 4: Conservative shear response of synthetic DPD-biofilms under shear strain. The in-phase stress \(\sigma^{\prime}\) is represented as a function of \(A/L_{0}\) for different biofilms characterised by topology \(T_{A}\) (solid circles) or \(T_{B}\) (hollow circles); in internal units (i.u.). The color code indicates different numbers of polymer-polymer (pp), polymer-bacteria (pb) cross-links: \(CL_{pp}=200\) / \(CL_{pb}=1,400\) (blue); \(CL_{pp}=400\) / \(CL_{pb}=1,200\) (red) and \(CL_{pp}=600\) / \(CL_{pb}=1,000\) (green). All data refer to \(N_{b}=150\), \(N_{p}=80\), \(N_{s}=45,000\) and \(T=2\times 10^{5}\). The straight lines represent parabolic fits to simulation data (see main text).

This confirms that our "stiff" DPD-simulation framework is too rigid to capture collective relaxations from mobile crosslinks and sliding entanglements as they exist in real biofilms. Despite these obvious limitations, we resume our analysis of the possibilities, capacities and strengths of the current DPD-simulation schema based on permanent crosslinks.

### Effective (linear and nonlinear) shear rigidity

We further calculate the effective rigidness of the studied DPD-biofilms, defined as the apparent modulus for elastic storage, i.e., \(G^{\prime}=\sigma^{\prime}/u\) (see Eq. (5); left). This effective parameter measures the apparent rigidity as recapitulated in a shear modulus for elastic storage. To calculate the effective rigidity as a function of strain, \(G^{\prime}(u)\), we use raw simulation data on the stress-strain response (Fig. 4). Figure 5 plots simulation data for the apparent storage modulus \(G^{\prime}\) increasing non-linearly with the shear amplitude (found larger in magnitude for the stiffest \(T_{B}\) biofilm than for the canonical \(T_{A}\) topology). The linear rigidity modulus is given by the Hookean limiting intercept at zero strain (\(G_{0}\) at \(u=0\)). The effective storage modulus increases with the strain amplitude in both topologies (being larger in magnitude for the more compact \(T_{B}\)). This is the sort of behavior expected for semiflexible networks undergoing stress strengthening as here simulated [59]. We observe in both cases that the shear rigidity is not much affected by the ratio between polymer-polymer (\(pp\)) and polymer-bacteria (\(pb\)) crosslinks, as long as the total number of CL remains constant in the simulations. To calculate the Hookean moduli (\(G_{0}\)) and the nonlinear amplitudes (\(g_{1}\)), we exploit the phenomenological dependency \(G^{\prime}=\sigma^{\prime}/u\sim G_{0}+g_{1}u+...\) (see linear fits in Fig. 5). The fitting parameters are collected in Table 1. In both topologies (\(T_{A}\) and \(T_{B}\)), we compute the Hookean moduli with values around \(G_{0}\sim 0.01\) in internal units (\(\sim 1\) kPa, corresponding to a simulation box with \(N=32,768\) particles inside); they estimate the mechanical rigidness of the synthetic biofilms in qualitative agreement with previous experiments performed with the _P.
fluorescens_ biofilms studied in the linear rheological regime [12]. The fitted values found for the nonlinear amplitudes \(g_{1}\) reveal strong biofilm strengthening under stress, which is however not observed in the experimental window explored in the previous paper [12].

\begin{table} \begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & \(G_{0}\) (i.u.) & \(g_{1}\) (i.u.) \\ \hline \(T_{A}\) & \(0.007\pm 0.001\) & \(0.019\pm 0.001\) \\ \hline \(T_{B}\) & \(0.011\pm 0.001\) & \(0.033\pm 0.002\) \\ \hline \end{tabular} \end{table}
Table 1: Characteristic parameters of shear rigidity calculated from the storage modulus in the studied DPD-topologies. \(G_{0}\) is the Hookean shear rigidity, and \(g_{1}\) the nonlinear amplitude that describes biofilm strengthening under stress. The internal units of rigidness (i.u.) can be converted into physical units of pressure by the factor 1 int. stress unit \(=N_{XY}m_{u}/(l_{u}\tau_{u})=4.1\) kPa for \(N_{XY}=32\times 32\) particles in each shear plane.

The current simulations show the effective DPD-rigidities increasing by more than 100% for \(T_{A}\), and 300% for \(T_{B}\) (both considered at 100% strain, i.e., at \(u=1\); see Fig. 5). Interestingly, the rigidness ratios indicate the relative softness of the homogeneous topology \(T_{A}\) with respect to the compact one \(T_{B}\), i.e., \(G_{0}^{(A)}/G_{0}^{(B)}\sim g_{1}^{(A)}/g_{1}^{(B)}\sim 0.6\). These mechanical ratios are presumably determined by the structural crosslinking probabilities of both simulated topologies, i.e., \(1-p_{A}/p_{B}\sim 0.6\), as encoded in the DPD-method (see Numerical Methods). Therefore, and in order to accommodate enough structural flexibility in the DPD-biofilms numerically simulated in the following, we will only deal with homogeneous meshworks modelled by inhibiting crosslinks between the ten closest neighbouring particles belonging to the same chain (canonical \(T_{A}\) topology).

Figure 5: Storage modulus \(G^{\prime}\) as a function of strain for the topology \(T_{A}\) (filled squares) and \(T_{B}\) (empty circles); in internal units (i.u.). The color code indicates different numbers of polymer-polymer (pp), polymer-bacteria (pb) cross-links: \(CL_{pp}=200\) / \(CL_{pb}=1400\) (blue); \(CL_{pp}=400\) / \(CL_{pb}=1200\) (red) and \(CL_{pp}=600\) / \(CL_{pb}=1000\) (green). All data refer to \(N_{b}=150\), \(N_{p}=80\), \(N_{s}=45000\) and \(T=2\times 10^{5}\). The straight lines represent linear fits to simulation data (see main text).

### Tuning structural parameters into synthetic DPD-modeled biofilm viscoelasticity

To analyze how far structural parameters such as \(N_{b}\), \(N_{p}\), and \(N_{s}\) affect biofilm viscoelasticity, we consider the homogeneous DPD-biofilms created with the canonical \(T_{A}\) crosslinking topology (i.e., by inhibiting crosslinks between the ten closest neighbours in a polymer chain). We study stress responses from those \(T_{A}\) biofilms designed as a flexible meshwork within a dissipative dynamics (see Numerical Methods for details on DPD simulations). Figure 6 shows the dependence of the mechanical properties on \(N_{b}\), \(N_{p}\) and \(N_{s}\) as implemented in the canonical \(T_{A}\) topology. We set an intermediate number of crosslinks (\(CL=2300\)), and intermediate values of \(N_{p}=80\), \(l_{p}=100\), \(N_{s}=35000\) within the explored range.
Having prepared a DPD-biofilm with an equilibrated \(T_{A}\) topology, we apply a deformation amplitude of \(A=75\)% and a deformation period of \(T=200\), both comprised well within the viscoelastic response entering the onset of nonlinearity. Significantly, we aim at mapping the effective response of the simulated DPD-biofilms over broad ranges of rheological behaviour, making the compositional effects emerge in the apparent viscoelastic moduli \(G^{\prime}\) and \(G^{\prime\prime}\) (as calculated from Eqs. (5)). These choices on the \(T_{A}\) topology are dictated by the fact that they correspond to a significant response of \(G^{\prime\prime}\), as we have observed at high frequency, in contrast to the vanishing losses occurring at lower frequencies. Three structural biofilm variations will be considered, referring to changes in the number of bacteria (\(N_{b}\)), of polymer matrix crosslinks (\(N_{p}\)), and of water (solvent) molecules constituting the aqueous environment (\(N_{s}\)). These compositional effects are discussed below. _Bacterial content._ To check whether the shear stress depends on the number of bacterial cells (\(N_{b}\)), we have computed \(G^{\prime}\) (in-phase response to strain) and \(G^{\prime\prime}\) (out-of-phase response) as a function of \(N_{b}\), while keeping all other parameters fixed (Fig. 6, panels a and d). On the one hand, simulations show that the in-phase response (accounting for the storage modulus \(G^{\prime}\)) remains constant throughout the \(N_{b}\) range (panel a). This result is in agreement with previously reported data [11; 15]. Considering the bacterial biofilm as a polymer-colloid viscoelastic medium [11; 13; 14; 15; 16; 17], \(G^{\prime}\) should increase with the density of bacteria until reaching a solid-like plateau. Our simulation data point out that the probed \(N_{b}\)-range falls into the elastic response regime where \(G^{\prime}\) is not affected by \(N_{b}\) (panel a). On the other hand, the loss modulus \(G^{\prime\prime}\) increases linearly with \(N_{b}\) (panel d), reflecting the high friction imposed on the crosslinked mesh by dragging large bacterial objects at increasing density. In agreement with a solid-like behavior led by the rigidity of the crosslinked mesh, \(G^{\prime\prime}\) is always smaller than \(G^{\prime}\), but approaches values compatible with the storage modulus for the largest number of bacteria considered here. We therefore conclude that the simulated biofilm behaves as a viscoelastic material with a structural rigidity dominant over viscous fluidity (i.e., \(G^{\prime}>G^{\prime\prime}\)). Otherwise stated, our model predicts the rheological behavior of a quite resilient material with a relatively high mechanical compliance. _Polymer density._ To check the effect of an increasing number of crosslinked polymers on the shear response, we set \(N_{b}=150\), \(l_{p}=100\), \(N_{s}=35000\), \(CL=2300\), with an amplitude of \(A=75\)% and a deformation period of \(T=200\) (Fig. 6, panels b and e). When increasing the number of polymers at constant bacterial density, we observe the simulated biofilm to engender a solid-like network whose rigidity increases with mesh density (panel b).
This suggests that bacteria can colonize a preexisting polymer mesh and modify its rigidity according to their space needs; the denser the template, the more bacteria can be accommodated in a stiffer mesh. However, the out-of-phase stress response increases only slightly with an increasing number of polymers (panel e). Since this increase of \(G^{\prime\prime}\) is quite small compared to the response caused by the presence of bacteria (panel d), we conclude that \(G^{\prime\prime}\) is almost independent of the polymer concentration, as corresponds to our essentially rigid DPD-system describing mesoscopically unrelaxed chains within the permanent crosslinks considered. As with the predominantly solid-like behavior that emerged with increasing bacterial numbers (\(N_{b}\) in panels a and d), the present results also evidence how biofilm rigidity can be enhanced by increasing polymer numbers (\(N_{p}\) in panels b and e). These simulations of the compaction effects induced by polymer density essentially reproduce the nonlinear rheological characteristics experimentally observed for bacterial biofilms of single species with a well structured EPS (i.e., macroscopically homogeneous, compact and continuous) [15].

Figure 6: Stress components \(G^{\prime}\) and \(G^{\prime\prime}\) for \(A=75\)%, \(CL=2300\) and \(T=200\) as a function of (a,d) the number of bacteria \(N_{b}\); (b,e) the number of polymers \(N_{p}\); and (c,f) the number of solvent particles \(N_{s}\).

_Solvent effects._ Furthermore, mechanical responses were tested for different hydration levels (\(CL=2300\), \(N_{b}=184\), \(N_{p}=80\), \(l_{p}=100\)). Upon biofilm deformations of high amplitude (\(A=75\)%) and long period (\(T=200\)), we observe a hydration effect that makes the storage modulus decrease with the number of solvent particles (panel c), while preserving regulated friction losses (panel f). Our biofilm DPD-model seems able to absorb water molecules and mechanically soften at near constant viscosity. Such nonlinear softening emerges as the averaged distances between bacteria and polymers largely increase by solvent dilution under intense shear deformation, thus making the network mechanically softer [15]. Mesh weakening is however not reflected in the loss modulus, because DPD only captures the viscosity of the solvent but not collective mesoscopic relaxations.

### Rheological dependence on the shear frequency: experiment vs. simulation

We performed rheological experiments on _P. fluorescens_ biofilms grown under both static and shaken culture conditions (see Experimental Rheology). The viscoelastic responses were subjected to dynamic scrutiny under oscillatory shear deformation fixed in the linear regime (\(A=1\)%), with variable frequency in the experimentally available window from \(\nu=0.01\) Hz up to ca. \(100\) Hz (\(\nu=\omega/2\pi\)). Figure 7 compares experimental results (upper panel) and DPD-simulations performed at equivalent time units (lower panel). The _P. fluorescens_ biofilms have both been grown for 24 hours: the first one under static culture conditions (Fig. 7a; red symbols), and the other one under shaking (Fig. 7a; black symbols). Immediately after culture, we place the grown biofilm _ex vivo_ in the rheometer, and then measure both the linear response \(G^{\prime}\) (solid symbols) and \(G^{\prime\prime}\) (hollow symbols), for the static biofilm (red symbols) and for the shaken biofilm (black symbols).
Due to technical limitations of the rheometer used, we note that we could not measure the stress moduli of the _P. fluorescens_ biofilm grown under shaking conditions for frequencies beyond the \(100\) Hz established as a practical ceiling for experimental macro-rheology. The recorded data intentionally correspond to the linear regime of viscoelastic response (\(A=1\)%), in which a quasi-static response is expected, compatible with the permanent nature of the rigid (passive) crosslinks considered in the simulations. Further active effects present in real biofilms, such as nonlinear softening stresses (\(A\geq 10\)%) [12], re-configurable crosslinks or propulsive bacterial impulses, are not discussed here, as they are not explicitly considered in the current DPD-simulations, which only capture quasi-static interactions between "passive" biofilm components in a fixed meshwork of permanent crosslinks. The _P. fluorescens_ biofilms grown for 24 hours under shaking conditions present higher shear moduli (both \(G^{\prime}\) and \(G^{\prime\prime}\); black symbols) than those of a biofilm grown under static conditions (red symbols), independently of the shear frequency probed in the linear regime of rheological response. Otherwise stated, the shaken biofilms develop a higher rigidity than those grown under static conditions. Such Hookean stiffening is compatible with the nonlinear strengthening observed under stress (see Fig. 4). Culture shaking might indeed constitute a highly stressed biofilm growth setting which elicits higher crosslinking than static growth, resulting in structural stiffening (much as nonlinear stress results in effective dynamic strengthening). In general, the measured values of the storage modulus are found to be higher than the frictional losses (especially at low and intermediate frequencies, where \(G^{\prime}>G^{\prime\prime}\)). This is the typical behavior of soft solids displaying higher elastic resistance than frictional opposition to shear flow [62]. This soft-solid rheological signature is characteristic of any _P. fluorescens_ biofilm grown longer than 24 h (data not shown). For the biofilms grown under static conditions, the rheological experiments show however an increase in frictional losses at the highest frequencies tested, i.e., \(G^{\prime\prime}\sim\omega\) (see Fig. 7a; dashed line). This dissipative contribution leads to a viscoelastic inversion \(G^{\prime}\sim G^{\prime\prime}\), which is also characteristic of soft solids in which straining at high rates makes viscous flow emerge as a consequence of stress [62]. A similar behaviour can be extrapolated for the shaken biofilm, insofar as the difference between \(G^{\prime}\) and \(G^{\prime\prime}\) becomes reduced upon increasing frictional losses at high frequencies, whereas the elastic storage remains constant (see Fig. 7a). To test the capability of our synthetic biofilm model to capture the experimental behavior of _P. fluorescens_ biofilms, we perform numerical rheology as reported in Fig. 7b. After having explored the parameter space in the previous section, we choose to prepare a biofilm with the following parameters: \(N_{b}=184\), \(N_{p}=80\), \(l_{p}=100\), \(N_{s}=35000\) and \(A=75\)%, with a variable number of crosslinks spanning the broad range \(CL=2000-7000\) (considered representative of poor and rich crosslinking, respectively leading to soft and stiff elasticity).
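For reference when comparing simulation and experiment below, the internal-to-physical stress conversion quoted in the caption of Table 1 can be wrapped in a small helper; the 4.1 kPa factor applies to the \(N_{XY}=32\times 32\) shear-plane geometry stated there (a minimal sketch, with hypothetical usage values).

```python
# Per the caption of Table 1: 1 internal stress unit = N_XY*m_u/(l_u*tau_u)
# = 4.1 kPa for N_XY = 32 x 32 particles in each shear plane.
KPA_PER_INTERNAL_STRESS_UNIT = 4.1

def stress_to_kpa(sigma_internal: float) -> float:
    """Convert a stress or modulus from DPD internal units to kPa."""
    return sigma_internal * KPA_PER_INTERNAL_STRESS_UNIT

# Example: the Hookean moduli of Table 1 expressed in physical units.
print(stress_to_kpa(0.007), stress_to_kpa(0.011))  # T_A and T_B topologies
```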
As previously justified in Ref. [12], we assume that the main difference between the biofilms grown under static and shaking conditions lies only in the matrix composition, with the shaken-grown biofilm being the richer in crosslinks. Figure 7b plots the different datasets computed for \(G^{\prime}\) (closed symbols) and \(G^{\prime\prime}\) (hollow symbols) as a function of the frequency of the shear strain imposed on the simulation box, for the different numbers of crosslinks in the chosen canonical \(T_{A}\) topology. In accordance with the established experimental conditions, we numerically compute the shear moduli of the DPD-model by changing the stress frequency over more than four orders of magnitude. Internal simulation units are converted into physical units according to the conversion factors previously described. Consistently with what we observed in experiments, \(G^{\prime}\) softens with a decreasing number of crosslinks but does not show a significant change upon increasing \(\omega\), as corresponds to a soft solid with permanent crosslinks unable to undergo relaxation [59]. Moreover, the computed values of the storage modulus (closed symbols) are systematically higher than those found for the loss modulus (hollow symbols), as expected for soft solids [58]. The numerical results in Fig. 7b refer to the \(T_{A}\) topology, corresponding to the more open realization facilitating bacterial reorganization. As previously shown in Fig. 2, these DPD-biofilms are spatially homogeneous in the microscopic terms revealed by the PDF structure. However, not only do their rheological properties give consistent results when tested across the parameter space (Fig. 6), but they are also capable of qualitatively reproducing the high-frequency (small-scale) rheological behaviour observed in experiments (Fig. 7a), even though they do not consider the spatial inhomogeneity characteristic of real biofilms. Further work is in progress to explicitly mimic spatial inhomogeneities in the numerical DPD-based approach. As expected, the simulated frictional losses markedly follow the Dissipative Particle Dynamics imprinted by the constant solvent viscosity over the whole frequency domain (i.e., \(G^{\prime\prime}=\omega\eta\); see Fig. 7b). This marked unit-slope \(G^{\prime\prime}\approx\omega\) fingerprint corresponds to the intrinsic dissipation of the DPD-simulation method, which no longer operates as a collective relaxation at the largest mesoscopic scales (at low frequencies). This result is not only compatible with the Newtonian fluidity inherent to the microscopic DPD, but also agrees with what is experimentally observed at high frequencies in both the stimulated shaken biofilms and the statically grown ones (although the latter do preserve the high frictional relaxation at low frequencies; see Fig. 7a). However, the DPD-simulated values of \(G^{\prime\prime}\) obtained at the lowest simulation frequencies appear more than two orders of magnitude smaller than \(G^{\prime}\), which indicates a predominantly conservative mechanics under the _quasi_-static deformation conditions considered in the simulations. The different simulated datasets indeed fall within the same master curve, corresponding to a common viscous friction imparted by the solvent on the single structural entities (independently of the number of crosslinks).
The difference is progressively reduced upon increasing \(\omega\) (increasing viscous friction under constant rigidity), up to the Newtonian flow behavior \(G^{\prime}\sim G^{\prime\prime}=\eta\omega\) undergone by the synthetic DPD-biofilms at the highest simulated frequencies (corresponding to the fastest strain rates). A qualitatively similar behavior occurs in the experiments for _P. fluorescens_ biofilms grown under static conditions, albeit the difference between \(G^{\prime}\) and \(G^{\prime\prime}\) at low frequencies is not actually as marked as predicted by the simulations. These compared results validate, at least qualitatively, the capacity of our synthetic DPD-biofilm model for predictive rheology in terms of permanent matrix crosslinking. Noticeably, no viscoelastic relaxation can be predicted as far as the crosslinks remain fixed across the mapped scales.

Figure 7: a) Experimental measurements of the elastic modulus (\(G^{\prime}\); solid symbols) and loss modulus (\(G^{\prime\prime}\); hollow symbols) for two different classes of _P. fluorescens_ biofilms: the rigid one grown under shaking conditions (black), and the softer one obtained under static conditions (red); measurements performed in triplicate. The symbols represent averaged data replicated in each condition (experimental reproducibility compatible with symbol size). The near constant storage moduli evidence soft solid behavior (\(G^{\prime}\sim G_{0}\)). The dashed line indicates the Newtonian trend expected at high frequency for the loss modulus under dominant viscous friction (\(G^{\prime\prime}\sim\omega\)). The grey shadowed blind region is not accessible to experiments (see Methods: Experimental Rheology). b) Synthetic rheology in DPD-biofilms with a variable number of permanent crosslinks at the \(T_{A}\)-topology: \(G^{\prime}\) (solid symbols) and \(G^{\prime\prime}\) (hollow symbols) computed for the biofilm model. Data refer to \(N_{b}=184\), \(N_{p}=80\), \(l=100\), \(N_{s}=35000\), and a variable number of crosslinks, from CL = 7000 (upper dataset) down to CL = 2000 (lower dataset); see legend. Whereas the effective rigidities for elastic storage decrease with a decreasing number of crosslinks (as observed in experiments; panel a), all the calculated frictional losses fall within a common master curve compatible with the constant viscosity encoded in the DPD-simulations (dashed line).

## IV Discussion

Biofilms are composed of bacteria that secrete a mesh of extracellular polymeric substances (EPS), producing a viscoelastic matrix to which they eventually crosslink through complex (conservative and dissipative) mechanisms that result in a complex rheological response [12; 13; 14; 15; 16; 17]. Even though a mathematical modelling of bacterial biofilms might provide a useful tool for controlling biofilm formation, up-to-date modelling cannot be considered completely satisfactory. Tailoring a detailed model that includes physical parameters to mimic hydrodynamics, solute mass transport and the active dynamics of the bacterial population within the biofilm is indeed a challenging task. The parametrization of the individual components of a biofilm model increases the complexity of the algorithm beyond limited computational capacity [63]. Therefore, it would be highly desirable to design a structurally simplified model while retaining the relevant dynamics.
Further including dynamical details arising from systemic memory effects would also be desirable, although computationally prohibitive at the mesoscopic level of complexity required to capture biofilm rheology as measured in experiments. By taking advantage of a Dissipative Particle Dynamics (DPD) simulation algorithm, we have validated a coarse-grained model of biofilm behavior to numerically explore the rheological properties of a _P. fluorescens_ biofilm. The synthetic model is based on the DPD-approach depicted in [12], and consists of a topologically tunable simulation platform of permanent polymer chain crosslinking, able to mimic a real bacterial biofilm by approaching the bacteria as an effectively passive living component embedded in a variable content of EPS. Because DPD models allow simulating the short-range interactions of any statically embedded particle, our model enables progressively increasing structural detail at smaller spatial resolution in the polymerized (EPS) mesh: from the macroscopic scale of the rheological response, through the structural mesoscopic details, down to the fluctuating behavior of the single particles under the detailed balance of thermal motion against frictional dissipation. This is an important advantage over the generally used long-range interaction models [64], which are spatially limited by a continuous scale cut-off that impedes going deeper into smaller levels of detail, including the topology of crosslinking and the viscous friction in the underlying microscopic motions. Moreover, the "soft" nature of DPD-interactions allows increasing the integration time by orders of magnitude, making it possible to explore rheological time scales normally inaccessible to atomistic simulations. However, our numerical DPD-simulation scheme is only weakly dissipative (or too "conservative"), as it only considers static permanent crosslinking, unlike real biofilms, which include highly dissipative sinks and sources of mechanical energy, e.g., reconfigurable crosslinks, sliding entanglements and propulsive forces within the thriving bacteria. Therefore, our structurally simplified DPD-approach only captures the essential features of linear mechanics but fails in describing the active nonlinear response observed in experiments. Whereas the topologically static DPD-simulations naturally exhibit nonlinear strengthening due to chain ordering under large deformation, the real _P. fluorescens_ biofilms used as a validating setting exhibit the opposite mechanical softening, due to biological activities not considered in the current simulations [12]. Finally and foremost, our numerical DPD-algorithm explicitly includes hydrodynamics and thermal fluctuations, which are known to play a crucial role in introducing variability in biological systems such as the biofilms considered here [65]. Our modelling approach in essence simplifies the complexity of real biofilms, treating them as composite materials described by a set of physical parameters that recapitulate the principal ingredients of compositional and topological structure. The simulated DPD-biofilms have been constructed as a complex structure of prescribed topology, as fixed by the degree and density of crosslinking, and of known composition by class and number of interacting particles. Because of its mesoscopic nature, we gain access to simulating their rheological behaviour as dependent on the treatment processes imposing the history of the shear deformation, in a way controlled by the dissipative frictional memory of the microscopic components.
We have chosen to study biofilms formed by _P. fluorescens_ for three main reasons: 1) It is the same model system as explored in our previous study [12], and in other related ones, e.g., _P. aeruginosa_ (see Ref. [15] for a recent review), making it easy to validate our simulation results against already published ones (both experimental and numerical with a more limited scope). 2) The DPD-simulations validated here allow making predictions over more extended spatio-temporal scales than conventional macro-rheological experiments allow (including high frequencies and high strain rates often precluded by spurious inertial effects in experiments). 3) Although the model microorganism used in this work for experimental validation of the simulation framework is non-pathogenic, other members of the genus _Pseudomonas_ are often pathogenic, and thus call for synthetic analysis _in silico_. As a relevant counterexample, _P. aeruginosa_ is an opportunistic pathogen that is frequently associated with chronic biofilm infections, hence being attractive for simulation forecasting. Overall, the composition of the EPS is similar in both species, with a high proportion of acetylated polysaccharides (alginate-like) and extracellular DNA [66; 67], which suggests that the biofilm could be characterised by a mechanical behaviour similar to that of _P. fluorescens_. The proposed DPD-simulations could make reasonable numerical forecasts of the physical biofilm's fate, as performed in "digital twins" for mechanical behavior, built _in silico_ as synthetic biofilms resembling the mechanical properties of the real (pathogenic) biofilms.

## V Conclusions

By comparing the experimental to the numerical results, we conclude that our proposed coarse-grained model qualitatively reproduces the behavior of the rheological moduli over several decades of dynamic behavior. The measured elastic modulus was always higher than the loss modulus, both in experiments and simulations, as corresponds to soft solid behavior. Moreover, we predict the decreasing difference between \(G^{\prime}\) and \(G^{\prime\prime}\) observed at higher frequencies as a consequence of dominant viscous friction. This was clearly observed in the softer biofilms prepared under static conditions, and predicted by the DPD-simulations including frictional dissipation as an essential dynamic ingredient. As a very characteristic feature of biofilm rheology, \(G^{\prime}\) and \(G^{\prime\prime}\) tended to converge at high frequency in both numerical and experimental outcomes. Nonetheless, simulations showed a larger dissipative gap between \(G^{\prime}\) and \(G^{\prime\prime}\) than observed in experiments. In general, when \(G^{\prime}\gg G^{\prime\prime}\) the system responds essentially as an elastic solid, whereas the regime where \(G^{\prime}\sim G^{\prime\prime}\) indicates a viscoelastic system. Hence, our DPD-model presents a marked transition from a rigid solid at low frequency towards a viscoelastic regime at higher frequencies. In the experiments with the real biofilms this transition happens in a less pronounced way, more compatible with soft solid behavior over broader scales than in the simulations. The qualitative correspondence between real biofilms and synthetic DPD simulations supports the forecasting potential of using a coarse-grained DPD approach to biofilm rheology when modelling extremely complex biofilms.
Even though the model strongly relies on an _a priori_ detailed study of the parameter space, in future work this will allow us to evaluate in more detail the biofilm transitions from a predominantly solid-like system to a predominantly liquid-like behavior, through the interplay of the elastic and loss moduli upon changing the biofilm composition or growth conditions. In fact, this transition can be even more dramatic in biofilms prepared under static conditions and longer maturation times, where the elastic modulus vanishes at high frequency. We are currently working in this direction, studying how cross-linking dynamics may play a relevant role in these transitions. As a prospective outlook, DPD-simulation knowledge of the dynamics of the mechanical transitions within synthetic biofilms may guide future strategies to allow the penetration of antimicrobial agents into "softened" biofilm matrices, even predicting conditions for structural disassembly relevant for technological applications.

###### Acknowledgements.

B.O., I.L-M and C.V. acknowledge funding from Grant UCM/Santander PR26/16. FM acknowledges funding from Grants PID2019-105606RB-I00, FIS2016-78847-P, PID2019-108391RB-I00, and FIS2015-70339 from MINECO; the REACT-EU program PR38-21-28 ANTICIPA-CM, a grant by Comunidad de Madrid and European Union under the FEDER Program, from the EU in response to the COVID-19 pandemic; and Comunidad de Madrid under grants S2018/NMT-4389 and Y2018/BIO-5207. C.V. acknowledges PID2019-105343GB-I00 of the MINECO. F.A. acknowledges support from the "Juan de la Cierva" program (FICI-2017-33580). AKM is the recipient of a Sara Borrell fellowship (CD18/00206) financed by the Spanish Ministry of Health. V.B. acknowledges support from the European Commission through Marie Sklodowska-Curie Fellowship No. 748170 ProFrost. The authors acknowledge the computer resources from the Red Espanola de Supercomputacion (RES), FI-2020-1-0015 and FI-2020-2-0032, and from the Vienna Scientific Cluster (VSC).

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.11482
X-rays Trace the Volatile Content of Interstellar Objects
The non-detection of a coma surrounding 1I/`Oumuamua, the first discovered interstellar object (ISO), has prompted a variety of hypotheses to explain its nongravitational acceleration. Given that forthcoming surveys are poised to identify analogues of this enigmatic object, it is prudent to devise alternative approaches to characterization. In this study, we posit X-ray spectroscopy as a surprisingly effective probe of volatile ISO compositions. Heavily ionized metals in the solar wind interact with outgassed neutrals and emit high-energy photons in a process known as charge exchange, and charge exchange induced X-rays from comets and planetary bodies have been observed extensively in our Solar System. We develop a model to predict the X-ray flux of an ISO based on its chemical inventory and ephemeris. We find that while standard cometary constituents, such as H$_2$O, CO$_2$, CO, and dust are best probed via optical or infrared observations, we predict strong X-ray emission generated by charge exchange with extended comae of H$_2$ and N$_2$ -- species which lack strong infrared fluorescence transitions. We find that XMM-Newton would have been sensitive to charge exchange emission from 1I/`Oumuamua during the object's close approach to Earth, and that constraints on composition may have been feasible. We argue for follow-up X-ray observations of newly discovered ISOs with close-in perihelia. Compositional constraints on the general ISO population could reconcile the apparently self-conflicting nature of 1I/`Oumuamua, and provide insight into the earliest stages of planet formation in extrasolar systems.
Samuel H. C. Cabot, Q. Daniel Wang, Darryl Z. Seligman
2023-09-20T17:26:35Z
http://arxiv.org/abs/2309.11482v1
# X-rays Trace the Volatile Content of Interstellar Objects

###### Abstract

The non-detection of a coma surrounding 1I/'Oumuamua, the first discovered interstellar object (ISO), has prompted a variety of hypotheses to explain its nongravitational acceleration. Given that forthcoming surveys are poised to identify analogues of this enigmatic object, it is prudent to devise alternative approaches to characterization. In this study, we posit X-ray spectroscopy as a surprisingly effective probe of volatile ISO compositions. Heavily ionized metals in the solar wind interact with outgassed neutrals and emit high-energy photons in a process known as charge exchange, and charge exchange induced X-rays from comets and planetary bodies have been observed extensively in our Solar System. We develop a model to predict the X-ray flux of an ISO based on its chemical inventory and ephemeris. We find that while standard cometary constituents, such as H\({}_{2}\)O, CO\({}_{2}\), CO, and dust are best probed via optical or infrared observations, we predict strong X-ray emission generated by charge exchange with extended comae of H\({}_{2}\) and N\({}_{2}\) -- species which lack strong infrared fluorescence transitions. We find that _XMM-Newton_ would have been sensitive to charge exchange emission from 1I/'Oumuamua during the object's close approach to Earth, and that constraints on composition may have been feasible. We argue for follow-up X-ray observations of newly discovered ISOs with close-in perihelia. Compositional constraints on the general ISO population could reconcile the apparently self-conflicting nature of 1I/'Oumuamua, and provide insight into the earliest stages of planet formation in extrasolar systems.

ISOs (52); Comets (280)

Samuel H. C. Cabot, Q. Daniel Wang, Darryl Z. Seligman

## 1 Introduction

Knowledge of the compositions of minor bodies has considerably influenced our understanding of the early Solar System. While investigations of their dynamical histories are limited by the Solar System's chaotic nature (Wisdom, 1980; Laskar, 1989; Batygin & Laughlin, 2008; Laskar & Gastineau, 2009), compositional measurements of minor bodies -- especially of their interiors -- have the potential to reveal their formation environments (Oberg et al., 2011). Remarkably, the Solar System likely ejected tens of Earth masses of volatile rich material during the early stages of its formation and evolution (Hahn & Malhotra, 1999; Gomes et al., 2004; Tsiganis et al., 2005; Morbidelli et al., 2005; Levison et al., 2008; Raymond et al., 2018, 2020). Therefore, it is reasonable to expect that extrasolar planetary systems contribute to a galactic population of interstellar comets, and that some will encounter the Solar System on hyperbolic trajectories (Moro-Martin et al., 2009; Engelhardt et al., 2017; Cook et al., 2016). Naturally, it came as a surprise that 1I/'Oumuamua, the first discovered ISO, exhibited none of the typical properties of Solar System comets (for recent reviews, see Moro-Martin, 2022; Jewitt & Seligman, 2022). It had no visible coma (Meech et al., 2017; Jewitt et al., 2017; Trilling et al., 2018; Bolin et al., 2018), an extreme shape (e.g., Knight et al., 2017; Mashchenko, 2019), a reddened reflection spectrum (Masiero, 2017), a young dynamical age (e.g., Mamajek, 2017; Hsieh et al., 2021) and non-zero nongravitational acceleration (Micheli et al., 2018).
The second discovered ISO, 2I/Borisov, was unambiguously a comet (Jewitt & Luu, 2019), perhaps more in line with expectations. Its cometary activity was readily measured and typical carbon and nitrogen bearing species were detected (Opitom et al., 2019; Kareta et al., 2020; Lin et al., 2020; Bannister et al., 2020; Xing et al., 2020; Aravind et al., 2021). _ALMA_ and _Hubble_ observations revealed that it was enriched in CO relative to H\({}_{2}\)O (Bodewits et al., 2020; Cordiner et al., 2020). This finding suggests that 2I/Borisov formed exterior to the CO snowline in the protoplanetary disk (Price et al., 2021; Seligman et al., 2022) of a very young (Lisse et al., 2022) or a very carbon-enriched system (cf. Bodewits et al., 2020). 'Oumuamua's provenance remains in question, and dynamically backtracing either ISO to their respective host systems is difficult (Mamajek, 2017; Bailer-Jones et al., 2018). 1I/'Oumuamua's peculiar properties and elusiveness to follow-up spectroscopy have prompted a variety of theories regarding its origin. If the measured nongravitational acceleration was driven by cometary outgassing, this would be energetically consistent with a composition of H\({}_{2}\) (Fuglistaler & Pfenniger, 2018; Seligman & Laughlin, 2020; Levine & Laughlin, 2018), N\({}_{2}\) (Jackson & Desch, 2021; Desch & Jackson, 2021), or CO (Seligman et al., 2021). Confirming an H\({}_{2}\) or N\({}_{2}\) composition would have immediately revealed a new mechanism of generating minor bodies: formation in the cores of molecular clouds (Seligman & Laughlin, 2020), or ejection of ice from the surfaces of Pluto analogues due to impact events (Desch & Jackson, 2021). Alternatively, if the acceleration was caused by radiation pressure (Micheli et al., 2018), then 1I/'Oumuamua must have been extremely porous (Moro-Martin, 2019; Sekanina, 2019; Flekkoy et al., 2019; Luu et al., 2020) or thin (Bialy & Loeb, 2018). While H\({}_{2}\)O, CO\({}_{2}\), and CO species in cometary comae are easily measured via infrared spectroscopy, the possible presence of difficult to detect homonuclear H\({}_{2}\) or N\({}_{2}\) prompts a search for alternative methods for characterizing interstellar comets. This study presents such a new approach: X-ray observations of solar wind (SW) charge exchange (CX). X-ray observations of an interstellar object should immediately reveal the presence of an outgassed coma. CX involves the transfer of electrons from a cool neutral medium to heavily ionized metals in the SW. Following the unexpected detection of X-rays emanating from C/1996 B2 (Hyakutake) (Lisse et al., 1996; Cravens, 1997), CX has been observed with a multitude of other comets (Dennerl et al., 1997; Cravens, 2000, 2002; Lisse et al., 1999; Lisse et al., 2004; Bodewits et al., 2007). CX conveys information about the composition and speed of the solar wind (Beiersdorfer et al., 2001; Bodewits et al., 2007; Gu et al., 2016). Studies have also linked CX emission to physical and chemical properties of the minor body and its coma through the morphology of the X-ray emission profile (Wegmann et al., 2004) and emission line ratios (Mullen et al., 2017). Indeed, there are encouraging prospects for discovering additional ISOs in the coming decade. Population synthesis models of the galactic population of ISOs have demonstrated that the Rubin Observatory Legacy Survey of Space and Time (LSST) will detect \(1-2\) 1I/'Oumuamua analogues every year (Hoover et al., 2022). This paper is organized as follows.
In §2, we discuss the advantages and limitations of X-ray observations for characterizing minor bodies, and outline a simple model of charge exchange emission. We apply our model to 1I/'Oumuamua and 2I/Borisov in §3, and compare our predictions to the known correlation between X-ray and visible-band flux. Our model suggests that if 1I/'Oumuamua did indeed outgas a coma, CX would have been detectable with current-generation X-ray facilities during the period that it approached Earth. §4 establishes expectations for detecting CX with new ISOs discovered by the Rubin Observatory. Finally, our conclusions are summarized in §5. We refer the reader to Appendix A for a brief review of CX fundamentals, and to Appendix B for the details of our CX model.

## 2 Motivation & Model

This section establishes use cases for X-ray observations of interstellar objects, and sets forth an analytic model of X-ray emission from outgassed comae. It also specifies the traits of ISOs (i.e., bulk composition and orbital trajectory) from which we expect robust detection of X-ray emission with present-day facilities.

### Information Gained from X-ray Observations

As a primer, it is useful to review the prototypical case of CX from an outgassed coma: C/1996 B2 (Hyakutake), a long-period comet which came within 0.23 au of the Sun at perihelion. Infrared observations yielded unambiguous detections of H\({}_{2}\)O, CO, CH\({}_{4}\), and C\({}_{2}\)H\({}_{6}\) (Mumma et al., 1996; Dello Russo et al., 2002). Contemporaneously, significant X-ray emission was detected out to \(\sim 100,000\) km (6.7\(\times 10^{-4}\) au) from its nucleus by Lisse et al. (1996). CX was suggested as the responsible process by Cravens (1997), and reaffirmed by Wegmann et al. (1998). Other mechanisms (e.g., thermal bremsstrahlung, scattering, and fluorescence) may be ruled out (Krasnopolsky, 1997; Lisse et al., 1999; Lisse et al., 2004). Since this detection, sophisticated models of CX with cometary comae have been developed, such as the hydrodynamical simulations by Wegmann et al. (2004), and CX has been observed with the comae of numerous other comets (e.g. Ewing et al., 2013; Bodewits et al., 2007). X-rays have conventionally been of limited value for compositional studies of minor bodies. For one, CX detections arise only from significantly extended comae. Moreover, the most immediate insights conveyed are the morphology of the coma and the properties of the solar wind (Lisse et al., 2004). Infrared/radio spectroscopy remains the _de facto_ technique for detecting molecular species which exhibit solar-pumped fluorescence transitions (Crovisier and Encrenaz, 1983; Bockelee-Morvan et al., 2004). Such species (reviewed by e.g. Biver and Bockelee-Morvan, 2016; Bockelee-Morvan and Biver, 2017) include H\({}_{2}\)O, CO, CO\({}_{2}\), CH\({}_{4}\), C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{6}\), and HCN. Complementary to infrared observations, UV spectroscopy can measure the production of some additional species (Biver et al., 2022) such as noble gases and homonuclear diatomic molecules. However, except for a small number of tenuous reports of their sublimation (e.g., Ar in Hale-Bopp, Stern et al., 2000), noble gases remained elusive in comets prior to _in situ_ detections in 67P (Rubin et al., 2018) by the European Space Agency's _Rosetta_ mission. _Rosetta_ also confirmed the presence of O\({}_{2}\) and N\({}_{2}\) in comets (Rubin et al., 2015; Bieler et al., 2015).
This study explores a novel use case for X-ray observations: to reveal an outgassed coma that is invisible in the infrared, and which may arise from the perihelion passage of an interstellar object. While the composition of 1I/'Oumuamua remains debated, a number of candidate species (Table 1) are monatomic or homonuclear diatomic, and will not fluoresce in the infrared. However, CX will occur with a neutral medium, even one that does not produce observable thermal emission, fluorescence, or scattering. Therefore, the outgassed coma of an interstellar object may be X-ray bright even in the absence of infrared and optical detections. In particular, X-ray observations offer a promising avenue for remotely sensing an H\({}_{2}\) or N\({}_{2}\) coma. Both of these species are thermodynamically consistent with 1I/'Oumuamua's nongravitational acceleration under outgassing, and comprise the focus of this study. Per Seligman et al. (2021), we benchmark results to a CO composition (cf. Trilling et al., 2018), although this molecule is more readily detected in the infrared.

### Scaling Relationships

In Appendix B, we develop a model that links X-ray flux to a minor body's physical properties. Specifically, it depends on the following key parameters: the neutral production rate (\(Q_{\rm gas}\)), the mean radius of the nucleus (\(a\)), the perihelion distance (\(r_{\rm peri}\)), the sublimation plus kinetic energy of the relevant volatiles (\(\mathcal{H}\)), the geocentric distance (\(\Delta_{e}\)), the solar wind number density (\(n_{\rm SW}\)), the CX interaction area of the coma (\(\mathcal{A}\)), and the X-ray luminosity (\(L_{\rm X}\)) and flux (\(F_{\rm X}\)). The pertinent relationships are as follows:

\[Q_{\rm gas}\propto a^{2}\,r_{\rm peri}^{-2}\,\mathcal{H}^{-1} \tag{1}\]
\[\mathcal{A}\propto Q_{\rm gas}^{2}\quad\text{(to leading order)} \tag{2}\]
\[n_{\rm SW}\propto r_{\rm peri}^{-2} \tag{3}\]
\[L_{\rm X}\propto n_{\rm SW}\,\mathcal{A} \tag{4}\]
\[F_{\rm X}\propto\Delta_{e}^{-2}\,L_{\rm X}\propto a^{4}\,r_{\rm peri}^{-6}\,\mathcal{H}^{-2}\,\Delta_{e}^{-2}. \tag{5}\]

In our model, constants of proportionality are calibrated to measurements of C/1996 B2 (Hyakutake) (Lisse et al., 1996; Appendix B). The above scaling relationships demonstrate that the X-ray brightness of a minor body can be significantly enhanced by a close-in perihelion passage and a particularly volatile composition. Figure 1 shows a schematic of such an ISO encounter, where the scaling relationships and underlying physical processes are depicted in the individual panels.

\begin{table} \begin{tabular}{c c c c c c} \hline Species & \(T_{\rm Sub}\) & \(\gamma\) & \(\Delta H\) & \(c_{s}\) & \(\mathcal{H}\) \\ & (K) & & (kJ mole\({}^{-1}\)) & (km s\({}^{-1}\)) & (kJ \(\times 10^{-23}\)) \\ \hline H\({}_{2}\) & 6 & 7/5 & 1 & 0.19 & 0.18 \\ Ne & 9 & 5/3 & 1.9 & 0.11 & 0.34 \\ N\({}_{2}\) & 25 & 7/5 & 7.34 & 0.14 & 1.27 \\ CO & 58 & 7/5 & 7.6 & 0.15 & 1.37 \\ Ar & 30 & 5/3 & 7.79 & 0.15 & 1.36 \\ O\({}_{2}\) & 30 & 7/5 & 9.26 & 0.10 & 1.60 \\ Kr & 40 & 5/3 & 11.53 & 0.12 & 2.01 \\ Xe & 55 & 5/3 & 15.79 & 0.12 & 2.75 \\ CO\({}_{2}\) & 82 & 8/6 & 28.84 & 0.14 & 4.94 \\ H\({}_{2}\)O & 155 & 8/6 & 54.46 & 0.31 & 9.33 \\ \hline \end{tabular} \end{table} Table 1: Cryogenic chemical properties for volatiles which may be constituents of ISOs. For each species, we report the temperature of sublimation (\(T_{\rm Sub}\)), the ideal adiabatic index (\(\gamma\)), the enthalpy of sublimation (\(\Delta H\)), and the sound speed (\(c_{s}\)) per Equation B13. We also report the total energy input to a single particle (\(\mathcal{H}\)) per Equation B12. Data from Shakeel et al. (2018), except for CO, which is obtained from Stephenson and Malanowski (1987) and the NIST database.
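To make the role of \(\mathcal{H}\) in Equations (1) and (5) concrete, one can compare candidate compositions at fixed nucleus size and geometry, where the calibration constants of Appendix B cancel out of the ratios. A minimal sketch using the \(\mathcal{H}\) values of Table 1:

```python
# Total energy input per sublimated particle, H (kJ x 10^-23), from Table 1.
H = {"H2": 0.18, "N2": 1.27, "CO": 1.37, "CO2": 4.94, "H2O": 9.33}

def q_gas_ratio(species: str, reference: str = "CO") -> float:
    """Equation (1): Q_gas ~ a^2 r_peri^-2 H^-1, so at fixed a and r_peri
    the outgassing-rate ratio reduces to H_ref / H_species."""
    return H[reference] / H[species]

def flux_ratio(species: str, reference: str = "CO") -> float:
    """Equation (5): F_X ~ a^4 r_peri^-6 H^-2 De^-2, so the flux ratio at
    fixed geometry is (H_ref / H_species)^2."""
    return (H[reference] / H[species]) ** 2

for sp in ("H2", "N2", "H2O"):
    print(f"{sp}: Q_gas x{q_gas_ratio(sp):5.2f}, F_X x{flux_ratio(sp):6.2f} vs. CO")
```

An H\({}_{2}\) coma, for instance, comes out roughly \(\sim 60\) times brighter in X-rays than a CO coma of the same nucleus and orbit, consistent with the per-species estimates collected later in Table 2.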
### Detection Thresholds

In this section we gauge the feasibility of detecting CX with an outgassed coma, including the requisite X-ray flux and exposure durations. Our predictions are for the EPIC pn detector onboard _XMM-Newton_, which has the advantage of a larger effective area (approximately \(A=900\) cm\({}^{2}\) at 0.5 keV) compared to the other space-based X-ray detectors. To estimate the detectability of a comet or ISO, one needs to account for the noise contribution from the background, which is chiefly made of background AGNs and the diffuse hot plasma emission from the general interstellar and circumgalactic media of the Galaxy, the Local Bubble, and solar wind charge exchange emission within the heliosphere (e.g., McCammon et al., 2002; Koutroumpa, 2012). Based on archival blank-sky EPIC pn observations, we estimate that the total background intensity is \(\Sigma=1.4\times 10^{-3}\) counts s\({}^{-1}\) arcmin\({}^{-2}\) in the \(0.5-0.7\) keV band, for example, which encloses the typically strong O VII and O VIII K\(\alpha\) lines. The total background intensity also includes the contribution from non-X-ray (cosmic-ray induced instrument) events. Let \(\Phi_{X}\) denote the X-ray flux at the detector due to CX with the target (units of counts s\({}^{-1}\) cm\({}^{-2}\)), and let \(\tau\) denote the exposure duration. Furthermore, we assume for simplicity that nearly all of the signal is contained within a solid angle \(\Omega\). Then the signal-to-noise ratio (S/N) of the CX signature is

\[{\rm S/N}=\Phi_{X}A\sqrt{\tau}/\sqrt{\Phi_{X}A+\Sigma\Omega}\,. \tag{6}\]

Alternatively, this equation can be rearranged into an expression for \(\tau\) at a desired S/N threshold. One caveat is that a continuous exposure may be ineffective for fast-moving targets. Photon events may, however, be spatially calibrated across discrete slews that keep the object on the focal plane array (FPA). This enables multiple on-target pointings and reconstruction of events in the cometocentric frame based on the attitude of the telescope. The maximum exposure duration for any one pointing is then limited by the field of view of the telescope and the angular velocity of the target. For reference, 1I/'Oumuamua's angular velocity reached \(1.4\times 10^{-4}\) deg s\({}^{-1}\) at closest approach to Earth. The crossing time for _XMM-Newton_'s \(30^{\prime}\times 30^{\prime}\) field of view would have been up to \(\sim 3.5\) ks. Space-based X-ray observatories have minimum solar elongation angles (\(\psi^{\rm min}_{\odot}\)) for reasons pertaining to detector safety (e.g., avoiding burn-out incidents similar to ROSAT's), thermal stability, and power supply. For _Chandra_, \(\psi^{\rm min}_{\odot}=46.4^{\circ}\), and for _XMM-Newton_, \(\psi^{\rm min}_{\odot}=70^{\circ}\). These constraints permit observations of small bodies at \(R\geq 0.72\) au and \(R\geq 0.94\) au, respectively. These lower bounds can only be attained under optimal alignment between Earth and the object. Future missions will likely have similar safety exclusion zones to protect equipment at the FPA. Figure 2 depicts the exposure times necessary to achieve S/N = 5 as a function of \(Q_{\rm gas}\) and \(r_{\rm peri}\), specifically for the _XMM-Newton_ pn detector. For simplicity, the geocentric distance is assumed to be \(\Delta_{e}=1\) au.

Figure 1: Schematic of solar wind charge exchange with the outgassed neutrals in the coma of an ISO. _Panel 1_: An instantaneous view of the inner solar system and the trajectory of an ISO, currently at perihelion (\(r_{\rm peri}\)). _Panel 2_: The flux of solar radiation (\(F_{\odot}\)) heats the volatiles on the ISO’s surface, which causes sublimation and outgassing of neutral volatiles at a rate \(Q_{\rm gas}\). _Panel 3_: Solar wind ions (with number density \(n_{\rm SW}\)) interact with the outgassed coma through charge exchange. Electrons are transferred from the neutral coma to the bare or H-like solar wind ions, and the subsequent cascade emits X-rays. The coma’s interaction cross-section (\(\mathcal{A}\)) scales with \(Q_{\rm gas}^{2}\). As a result, the X-ray luminosity follows \(L_{\rm X}\propto r_{\rm peri}^{-6}\) (Equation 5, §2.2).
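Equation (6) can be inverted for the exposure needed to reach a given significance, which is essentially the calculation behind Figure 2. A sketch with the EPIC pn numbers quoted above; the source solid angle and the example flux are assumptions for illustration:

```python
def required_exposure(phi_x, snr=5.0, area=900.0, bkg=1.4e-3, omega=100.0):
    """Exposure (s) to reach a target S/N, inverting Equation (6).

    phi_x : CX flux at the detector [counts s^-1 cm^-2]
    area  : effective area [cm^2] (EPIC pn near 0.5 keV)
    bkg   : background intensity [counts s^-1 arcmin^-2] (0.5-0.7 keV band)
    omega : solid angle enclosing the signal [arcmin^2] -- assumed value
    """
    rate = phi_x * area                        # source count rate [counts/s]
    return snr**2 * (rate + bkg * omega) / rate**2

# Hypothetical example: a coma delivering 1e-4 counts s^-1 cm^-2 at the
# detector needs only ~0.7 ks for a 5-sigma detection.
print(f"{required_exposure(1e-4):.0f} s")
```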
The Figure 1: Schematic of solar wind charge exchange with the outgassed neutrals in the coma of an ISO. _Panel 1_: An instantaneous view of the inner solar system and the trajectory of an ISO, currently at perihelion (\(r_{\mathrm{peri}}\)). _Panel 2_: The flux of solar radiation (\(F_{\odot}\)) heats the volatiles on the ISO’s surface, which causes sublimation and outgassing of neutral volatiles at a rate \(Q_{\mathrm{gas}}\). _Panel 3_: Solar wind ions (with number density \(n_{\mathrm{SW}}\)) interact with the outgassed coma through charge exchange. Electrons are transferred from the neutral coma to the bare or H-like solar wind ions, and the subsequent cascade emits X-rays. The coma’s interaction cross-section (\(\mathcal{A}\)) scales with \(Q_{\mathrm{gas}}^{2}\). As a result, the X-ray luminosity follows \(L_{\mathrm{X}}\propto r_{\mathrm{peri}}^{-6}\) (Equation 5, §2.2). X-ray luminosity of CX is given by Equation B15, and the angular extent of the emission is set by \(\tilde{b}\) (Equation B10). Markers in Figure 2 are located at outgassing rates and perihelia of real minor bodies (the outgassing rate of 1I/'Oumuamua was not directly measured, but is estimated in SS3). This figure immediately conveys the feasibility of detecting CX emission from outgassed comae with currently operational facilities. For modest exposure times of order 10 ks, CX should be detectable when \(Q>10^{27}\) s\({}^{-1}\) over a range of \(r_{\rm peri}\). Qualitatively, this finding is consistent with previous high-significance _ROSAT_ detections of CX with six comets (Dennerl et al., 1997), whose ougassing rates were between \(Q_{\rm gas}=2\times 10^{27}-9\times 10^{29}\) s\({}^{-1}\). The heliocentric distances in this sample were between \(R=0.99-1.96\) au, and geocentric distances between \(\Delta_{e}=0.12-1.61\) au (these comets are discussed further in SS3.3). While the minor bodies need not be observed at perihelion, our model indicates that \(F_{\rm X}\) scales more strongly with heliocentric distance (\(R\)) than with \(\Delta_{e}\)(Lisse et al., 1999) per Equations B11 & B14. As discussed in the next section, 1I/'Oumuamua's proximity to the Earth and Sun would have made it amenable to X-ray observations. On the other hand, 2I/Borisov would have been a more difficult case because it only came within a minimum 2 au of the Sun. ## 3 Detectability of X-rays from 2I/Borisov and 1I/'Oumuamua We proceed to estimate the X-ray luminosity for the only two identified ISOs that traversed our Solar System: 1I/'Oumuamua and 2I/Borisov. We first estimate \(L_{\rm X}\) using the analytic framework presented in the previous section. We then make an independent estimate based on known trends between optical and X-ray luminosities for comets. We briefly review pertinent measurements of both ISOs. From its photometric lightcurve structure, 1I/'Oumuamua is estimated to have had an oblate 6:6:1 ellipsoidal geometry (Mashchenko, 2019). Its albedo (\(p\)) was not constrained, so its true dimensions are uncertain. Meech et al. (2017) and Bolin et al. (2018) measured an effective radius of \(\sim 0.1\) km assuming an albedo of 0.04. The light curve fit by Mashchenko (2019) yielded dimensions \(115\times 111\times 19\) m with \(p=0.1\). For our order-of-magnitude calculations, we treat 1I/'Oumuamua as spherical with radius of \(a=100\) m. However, we emphasize that the true dimensions are albedo-dependent, and the lengths scale as \(1/\sqrt{p}\). 
At perihelion of \(r_{\rm peri}=0.25\) au, 1I/'Oumuamua's distance to Earth was \(\Delta_{e}=1.23\) au with an apparent magnitude \(V=21\). The object reached an apparent magnitude of \(V=19.7\) at its closest approach to Earth with \(\Delta_{e}=0.16\) au. Infrared _Spitzer_ observations produced non-detections when 1I/'Oumuamua was outbound at \(\sim 2\) au, providing upper limits on the production rates of CO or CO\({}_{2}\) (Trilling et al., 2018). However, sporadic outgassing of CO might reconcile the nongravitational acceleration and _Spitzer_ non-detection (Seligman et al., 2021). The nuclear radius of comet-like 2I/Borisov was constrained to \(0.2\,{\rm km}<a<0.5\,{\rm km}\) based on its brightness profile and nongravitational acceleration, assuming a range of plausible bulk densities (Jewitt and Luu, 2019; Bolin et al., 2020).

Figure 2: Exposure time (ks) required for a \(5\sigma\) detection of CX X-ray emission (Equation B15), as a function of outgassing rate \(Q_{\rm gas}\) and perihelion distance \(r_{\rm peri}\). Geocentric distance is fixed at \(\Delta_{e}=1\) au. The background rate and effective area correspond to those of EPIC pn on _XMM-Newton_. For reference, markers are plotted at locations representative of real minor bodies: light-blue (‘H’) corresponds to \(Q_{\rm gas}=1.5\times 10^{29}\) s\({}^{-1}\) observed at 1.0 au — similar to C/1996 B2 (Hyakutake); tan (‘B’) corresponds to \(Q_{\rm gas}=10^{27}\) s\({}^{-1}\) observed at 2.0 au — similar to 2I/Borisov; coral (‘O’) corresponds to \(Q_{\rm gas}=2.8\times 10^{28}\) s\({}^{-1}\) observed at 0.25 au and \(Q_{\rm gas}=1.8\times 10^{27}\) s\({}^{-1}\) observed at 1.0 au — estimates corresponding to 1I/’Oumuamua under a CO composition. The solar elongation angles of _Chandra_ (\(\psi^{\rm min}_{\odot}=46.4^{\circ}\)) and _XMM-Newton_ (\(\psi^{\rm min}_{\odot}=70^{\circ}\)) impose lower bounds (\(\sin\psi^{\rm min}_{\odot}\)) on observable \(r_{p}\), and their respective restricted zones are hatched in the figure. The lower bound is attained only in the ideal orbital case, and observable heliocentric distances are typically significantly larger due to spacecraft limitations.
In the optically thick regime, all solar wind minor ions will undergo charge exchange. However, the X-ray lines emitted depend on the neutral composition (Mullen et al., 2016). Also, CX cross sections depend strongly on the solar wind velocity (Kharchenko and Dalgarno, 2001; Beiersdorfer et al., 2001; Bodewits et al., 2007). ### Analytic Predictions for 2I/Borisov For 2I/Borisov, we adopt the measured production rate \(Q({\rm CO})=1.07\times 10^{27}\) s\({}^{-1}\) at \(R=2.07\) au (Bodewits et al., 2020). We note that Equation B11 would predict \(Q_{\rm gas}=5.9\times 10^{26}\) s\({}^{-1}\) near perihelion, and modifying the scaling relation to take into account a CO composition (i.e., an \(\mathcal{H}\) appropriate for CO ice) yields \(Q_{\rm gas}=4.0\times 10^{27}\) s\({}^{-1}\), which are both within an order-of-magnitude of the observed value. Accounting for the reduced solar wind density at 2 au, we calculate \(L_{\rm X}=7.5\times 10^{11}\) erg s\({}^{-1}\) (see Table 2). A 900 cm\({}^{2}\) effective area (characteristic of EPIC pn) at Earth would detect \(6.3\times 10^{-5}\) counts s\({}^{-1}\), which would require a considerable 400 ks exposure for a 5\(\sigma\) detection. Indeed, most previous CX detections in comets have involved much brighter X-ray luminosities \(L_{\rm X}>10^{14}\) erg s\({}^{-1}\)(Lisse et al., 1999; Lisse et al., 2004). Since there are observations of 2I/Borisov at multiple points along its trajectory, we may perform a more detailed comparison between our model and measurements of outgassing over time. We plot the observed time-evolution of \(Q({\rm H_{2}O})\) and \(Q({\rm CO})\) in Figure 4 with our production rate model (Equation B11). Our model, which was calibrated to observations of C/1996 B2 (Hyakutake), predicts water production rates within a factor a few of those observed. The discrepancies may be due to composition inhomogeneities in the nuclues and/or the sporadic nature of outgassing. On the other hand, our high model values show that Borisov could not be made of pure CO, but is more in line with the \(\sim 10-20\%\) vs. water maximal abundances of CO found in solar system comets (Bockelee-Morvan and Biver, 2017). ### Analytic Predictions for 1I/'Oumuamua As a baseline estimate for 1I/'Oumuamua, we assume a bulk composition of CO, which is energetically compatible with the nongravitational acceleration and revised _Spitzer_ limits on the production rate given the low activity state outbound at 2 au (Trilling et al., 2018; Seligman et al., 2021). Equation B11 with \(\mathcal{H}\) corresponding to a CO composition suggests \(Q_{\rm gas}=2.8\times 10^{28}\) s\({}^{-1}\) at perihelion (the leftmost marker in Figure 2), and \(Q_{\rm gas}=4.4\times 10^{26}\) s\({}^{-1}\) at 2 au, in agreement with the Seligman et al. (2021) estimate, as expected. With X-ray flux received at Earth depending strongly on the target's distance from the Sun, 1I/'Oumuamua would have only become accessible to _XMM-Newton_ at its minimum observable heliocentric distance \(R=0.943\) au. This geometry is depicted in Figure 3. It would have been prudent to observe 1I/'Oumuamua in X-ray at this time, which roughly corresponded with 1I/'Oumuamua's closest approach to Earth. At \(R=0.943\) au and \(\Delta_{e}=0.276\) au, we estimate \(Q_{\rm gas}=2.0\times 10^{27}\) s\({}^{-1}\), \(L_{\rm X}=1.2\times 10^{13}\) erg s\({}^{-1}\), and a flux at Earth of \(F_{\rm X}=5.4\times 10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\). 
Assuming an N\({}_{2}\) (Desch and Jackson, 2021) or H\({}_{2}\) (Seligman and Laughlin, 2020) composition yields even higher \(F_{\rm X}\), as reported in Table 2. For any of these three highly volatile compositions, a signal would have been unambiguously detected in a \(<1\) ks X-ray follow-up exposure, producing a unique, easily obtained, positive outgassing detection.

### Observations of CX with Comets

Detections of CX from Solar System comets help gauge the conditions under which we may detect CX from ISOs. The detection of X-rays from C/1996 B2 (Hyakutake) (Lisse et al., 1996) prompted Dennerl et al. (1997) to search archival _ROSAT_ data for other instances of CX with comet comae. Confirmed cases include: C/1990 K1 (Levy), C/1990 N1 (Tsuchiya-Kiuchi), 45P (Honda-Mrkos-Pajduskova; HMP), and C/1991 A2 (Arai). The faintest target was 45P, with \(F_{\rm X}=1.5\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\), which was detected at high confidence by _ROSAT_ with a 191 s exposure. This comet sample serves as an independent diagnostic for selecting ISOs with detectable CX emission. It is established that a correlation exists between comet X-ray (\(L_{\rm X}\)) and optical (\(L_{\rm opt}\)) luminosities (e.g., Lisse et al., 1999; Lisse et al., 2004). \(L_{\rm X}\) plateaus for the brightest comets, but follows a linear trend for modest luminosities \(L_{\rm X}<10^{15}\) erg s\({}^{-1}\), where \(L_{\rm X}\sim 10^{-4}L_{\rm opt}\); however, there is a dependence on the dust-to-gas ratio [D/G] that induces \(1-2\) dex of scatter. Using the trend presented by Dennerl et al. (1997), we estimate 2I/Borisov's X-ray luminosity as \(L_{\rm X}=6.1\times 10^{12}\) erg s\({}^{-1}\) near perihelion, based on its \(V=16.6\). For 1I/'Oumuamua, at close approach to Earth and \(V=19.7\), we find \(L_{\rm X}=2.6\times 10^{9}\) erg s\({}^{-1}\). (Note that this estimate likely underestimates X-ray emission from 1I/'Oumuamua, and serves only as a foil to the other estimates presented in this section.) It is worth mentioning that dusty comae (e.g., C/Hale-Bopp 1995 O1 or 17P/Holmes; Lisse et al., 1997, 2013) may destroy solar wind ions without concomitant CX X-ray production (Dennerl et al., 1997; Lisse et al., 2004). This effect may prove an obstacle to X-ray observations of dusty ISOs. However, the typical dust content of ISOs is unclear. Jewitt et al. (2017) and Meech et al. (2017) placed stringent upper bounds on 1I/'Oumuamua's dust production rate, at \(2\times 10^{-4}\) kg s\({}^{-1}\) and \(1.7\times 10^{-3}\) kg s\({}^{-1}\) respectively. Estimates of 2I/Borisov's dust production reach as high as 35 kg s\({}^{-1}\) (Kim et al., 2020).

## 4 Future ISO CX X-ray Expectations

For the prospects of detecting CX X-ray emission with a future ISO, it is important to establish: (1) a set of criteria that provides high odds of detecting CX X-ray emission; and (2) an estimated fraction of discovered ISOs that will exhibit a detectable X-ray flux. To accomplish these goals, we conduct a Monte Carlo analysis assuming Rubin Observatory sky parameters (as the largest and deepest continual all-sky survey of the next decade; e.g., Li et al., 2022), which incorporates baseline predictions for the distribution of perihelia and geocentric distances of ISO trajectories, as well as their size distribution and possible compositions.

### Trajectories

Our Monte Carlo analysis involves randomly drawing orbital configurations for ISOs from previous population synthesis studies.
We predict the X-ray flux at the minimum heliocentric distance that satisfies the solar elongation angle constraint of _XMM-Newton_. This alignment occurs at time \(t^{*}\), defined as: \[t^{*}\equiv\underset{t}{\rm argmin} \|\mathbf{R}(t)\|\] s.t. \[\cos^{-1}\left(-\frac{(\mathbf{R}(t)-\mathbf{R}^{\prime}(t)) \cdot\mathbf{R}^{\prime}(t)}{\|\mathbf{R}(t)-\mathbf{R}^{\prime}(t)\|\|\mathbf{ R}^{\prime}(t)\|}\right)\geq\psi_{\odot}^{\rm min}, \tag{7}\] where \(\mathbf{R}\) and \(\mathbf{R}^{\prime}\) are vectors representing the positions of the ISO and Earth, respectively, with the Sun at the origin. The ISO's hyperbolic trajectory may be expressed in a Cartesian coordinate system where \(\mathbf{R}=\{R_{1},R_{2},0\}\): \[R_{2}=\pm\sqrt{e^{2}-1}\\ \times\sqrt{(R_{1}+eq/(e-1))^{2}-(q/(e-1))^{2}}\,, \tag{8}\] with eccentricity \(e>1\) and perihelion \(r_{\rm peri}\) sampled from distributions provided by Hoover et al. (2022). Their study coupled dynamical simulations with the local stellar velocity dispersion to constrain the population of 1I/'Oumuamua-like ISOs that the Rubin Observatory will find. They computed the probability density of perihelia, which indicates that \(10\%,25\%\), and \(50\%\) of detected ISOs will attain \(r_{\rm peri}\leq 0.35\), \(0.54\), and \(0.83\) au, respectively. Under this distribution, it is clear that 2I/Borisov had an improbably large perihelion at 2 au (and was anomalous with respect to other orbital parameters, such as distance from the solar apex at encounter, and eccentricity). By contrast, 1I/'Oumuamua's perihelion distance of 0.25 au was unexpectedly small. Next, we sample Earth's position as a unit vector with azimuthal and polar angles jointly distributed according to \(p(\theta,\phi)=\sin\phi/4\pi\), and scaled by 1 au. In reality, the orientation of the ISO's trajectory with respect to the ecliptic is determined by the inbound velocity vector and impact parameter. Stellar kinematics are a good proxy for the galactic distribution of ISOs; however, the typical ages of ISOs are poorly constrained at a population level. We assume the above distribution of (\(\theta\), \(\phi\)) for our order-of-magnitude calculations. The distances \(R^{*}=\|\mathbf{R}(t^{*})\|\) and \(\Delta_{e}^{*}=\|\mathbf{R}(t^{*})-\mathbf{R}^{\prime}(t^{*})\|\) are solved for numerically by fixing Earth's position and assuming \(\psi_{\odot}^{\rm min}=70^{\circ}\). These distances are shown in Figure 3, with 1I/'Oumuamua's trajectory as an example. Figure 3: Trajectory and X-ray observability of 1I/’Oumuamua, prior to its discovery. Plotted lines connect the instantaneous positions of Earth and 1I/’Oumuamua. Blue lines denote a solar elongation angle meeting X-ray observing requirements, \(\psi_{\odot}\geq\psi_{\odot}^{\rm min}\), where \(\psi_{\odot}^{\rm min}=70^{\circ}\) for _XMM-Newton_. The smallest heliocentric distance allowed by this constraint is denoted \(R^{*}=0.943\) au. At this time, the distance between Earth and 1I/’Oumuamua was \(\Delta_{e}^{*}=0.276\) au. In cases where the entirety of the ISO's orbit lies within the solar avoidance zone, the Monte Carlo X-ray flux is immediately set to zero. We neglect the hyperbolic velocity \(v_{\infty}\) in our analysis since the vast majority of ISOs accessible to the Rubin Observatory will have \(v_{\infty}\leq 40\) km s\({}^{-1}\).
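The constrained minimization in Equation 7 reduces, for a fixed Earth position, to a masked search along the orbit of Equation 8. The sketch below (Python) illustrates the procedure; the orbital elements and the fixed Earth location are illustrative stand-ins of our own choosing, not the sampled Monte Carlo values.

```python
import numpy as np

# Sketch of the R* search (Equations 7-8): trace the planar hyperbola,
# mask points by the solar elongation seen from a fixed Earth, and take
# the minimum heliocentric distance over the observable set.
e, q = 1.2, 0.26                 # illustrative, 1I/'Oumuamua-like elements
psi_min = np.radians(70.0)       # XMM-Newton solar avoidance limit

a_h = q / (e - 1.0)
R1 = np.linspace(-q, 10.0, 200001)                    # au, from perihelion on
R2 = np.sqrt(e**2 - 1.0) * np.sqrt((R1 + e * a_h)**2 - a_h**2)  # Equation 8
R = np.vstack([np.stack([R1, R2], axis=1),            # both signs of R2
               np.stack([R1, -R2], axis=1)])

R_earth = np.array([1.0, 0.0])                        # fixed Earth (assumed)
sep = R - R_earth
cos_psi = -(sep @ R_earth) / (np.linalg.norm(sep, axis=1) *
                              np.linalg.norm(R_earth))
psi = np.arccos(np.clip(cos_psi, -1.0, 1.0))          # elongation, Equation 7

observable = psi >= psi_min
print("R* =", np.linalg.norm(R[observable], axis=1).min(), "au")
```

In the Monte Carlo itself, Earth's position is drawn at random and the flux is set to zero whenever the observable mask is empty, i.e., when the entire orbit lies within the solar avoidance zone.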
Larger hyperbolic velocities, however, would augment the solar wind ion collision rate by 10% or more. It is prudent to conduct X-ray observations of such ISOs during the inbound portion of their trajectories. \begin{table} \begin{tabular}{l l c c c c} \hline \hline Object & Method & \(R\) (au) & \(Q_{\rm gas}\) (s\({}^{-1}\)) & \(L_{\rm X}\) (erg s\({}^{-1}\)) & \(F_{\rm X}\) (erg s\({}^{-1}\) cm\({}^{-2}\)) \\ \hline 2I/Borisov & Measured \(Q\)(CO) and modeled CX emission. & 2 & \(1.07\times 10^{27}\) & \(7.5\times 10^{11}\) & \(6.2\times 10^{-17}\) \\ 2I/Borisov & Empirical relationship between \(L_{\rm opt}\) and \(L_{\rm X}\). & 2 & - & \(6.1\times 10^{12}\) & \(6.0\times 10^{-16}\) \\ 1I/’Oumuamua & Modeled \(Q\)(CO) and CX emission. & 0.943 & \(2.0\times 10^{27}\) & \(1.2\times 10^{13}\) & \(5.4\times 10^{-14}\) \\ 1I/’Oumuamua & Modeled \(Q\)(CO) and CX emission at \(r_{\rm peri}\). & 0.25 & \(2.8\times 10^{28}\) & \(3.3\times 10^{16}\) & \(7.7\times 10^{-12}\) \\ 1I/’Oumuamua & Modeled \(Q\)(N\({}_{2}\)) and CX emission. & 0.943 & \(1.3\times 10^{28}\) & \(4.7\times 10^{14}\) & \(2.2\times 10^{-12}\) \\ 1I/’Oumuamua & Modeled \(Q\)(H\({}_{2}\)) and CX emission. & 0.943 & \(1.5\times 10^{28}\) & \(6.9\times 10^{14}\) & \(3.2\times 10^{-12}\) \\ 1I/’Oumuamua & Empirical relationship between \(L_{\rm opt}\) and \(L_{\rm X}\). & 0.943 & - & \(2.6\times 10^{9}\) & \(3.6\times 10^{-17}\) \\ \hline \end{tabular} \end{table} Table 2: Estimated X-ray luminosity (\(L_{\rm X}\)) and flux (\(F_{\rm X}\)) of 2I/Borisov and 1I/’Oumuamua using our model. For 1I/’Oumuamua, quoted outgassing rates (\(Q_{\rm gas}\)) depend on the object’s assumed bulk composition, and heliocentric (\(R\)) and geocentric (\(\Delta_{e}\)) distances. 1I/’Oumuamua’s pericenter was \(R=r_{\rm peri}=0.25\) au; however, the solar elongation constraint for _XMM-Newton_ was first satisfied at \(R=0.943\) au. For 2I/Borisov, we quote values based on our CX emission model, but adopted the measured \(Q\)(CO) (Bodewits et al., 2020). For both objects we also estimate \(L_{\rm X}\) from an empirically derived relationship with \(L_{\rm opt}\). Figure 4: Time-evolution of production rates for 2I/Borisov near perihelion (for CO, H\({}_{2}\)O, and OH). Figure adapted from Seligman et al. (2022). References for data are provided in their Table 2. The measurements demonstrate that CO production was substantially greater than H\({}_{2}\)O production after perihelion — the only time when both species were observed contemporaneously. Our production rate model (Equation B11) is plotted as dashed lines, holding 2I/Borisov’s radius as a constant \(a=0.3\) km, and adopting an appropriate \(\mathcal{H}\) for H\({}_{2}\)O and CO. We obtained 2I/Borisov’s heliocentric distance with a rebound (Rein & Liu, 2012) simulation using the Mercurius integrator (Rein et al., 2019). For reference, the trajectory of 2I/Borisov through the Solar System is shown at right, with perihelion marked. The solid blue portion corresponds to the timespan plotted in the left panel. ### Bulk Properties The composition of a neutral medium does not strongly affect the X-ray power density from CX in the optically thick regime. However, the coma's density profile depends on the neutral production rate, which in turn depends on the volatility of surface material in the nucleus of an ISO. While Equation B11 estimates neutral production for bulk compositions similar to C/1996 B2 (Hyakutake)'s, it is straightforward to account for alternative materials.
The flux of molecules ejected from a unit surface area patch of the object (in units of s\({}^{-1}\) cm\({}^{-2}\)) is (e.g., Seligman et al., 2022), \[\mathcal{N}=\frac{(1-p)I(t)-\epsilon\sigma T_{\text{Sub}}^{4}}{\Delta H/ \mathcal{N}_{A}+\gamma k_{B}\,T_{\text{Sub}}}. \tag{9}\] In the above, \(p\) is the albedo, \(I(t)\) is the time-dependent solar irradiance, and \(\epsilon\) is the surface emissivity. Besides \(I(t)\) and physical constants, each parameter in the above equation is material-dependent. In practice, grey-body emission from the minor body is negligible and the incident radiation term dominates. Using the definition in Equation B12, exchanging \(I(t)\) for the incident solar flux, and integrating Equation 9 over the illuminated surface area (\(\Sigma_{I}\)) yields, \[Q_{\text{gas}}=(1-p)\iint_{\Sigma_{I}}\,\left(\,\frac{L_{\odot}}{4\pi R^{2} \mathcal{H}}\,\right)d\Sigma_{I}\,. \tag{10}\] The albedo is unconstrained but is likely of order \(p\sim 0.1\) for most minor bodies in the Solar System. The average projected surface area over all viewing angles is 1/4 times the total surface area for any convex object (Meltzer, 1949), so \(\Sigma_{I}\simeq\Sigma/4\), where \(\Sigma\) is the total surface area of the object. For a sphere, \(\Sigma_{I}=\pi a^{2}\), where \(a\) is the radius as defined earlier in this paper. Therefore, Equation 10 has the same functional form as Equation B11. Hoover et al. (2022) noted that the size distribution of ISOs could enhance the Rubin Observatory's yield and estimated detection rates for a range of absolute magnitudes (see their Table 2). Many minor body populations adhere to a power-law size distribution \[n(>a)=ka^{-\beta}, \tag{11}\] for some power-law index \(\beta\) and normalization constant \(k\). Do et al. (2018) note that \(\beta>3\) for ISOs, or else the mass would diverge at large radii. In a collisionally evolved system \(\beta=2.5\) (Dohnanyi, 1969). However, Solar System comets have a more complex collisional history, with a power-law index that depends on the size regime. For example, \(\beta=1.45\) in the \(1-10\) km range, and \(\beta=1.91\) in the \(2-5\) km subset (Meech et al., 2004). For main-belt asteroids, \(\beta=1.0-1.4\) for sub-km diameters (depending on location within the belt), and \(\beta=1.8\) for larger objects (Yoshida et al., 2003). The distribution for ISOs may be shallow in the case that they are ejected from their respective systems before attaining collisional equilibrium. Also, the bulk of ISOs may be well described by a single value of \(\beta\), but the frequency will necessarily drop off for the largest and smallest objects due to physical limitations. We incorporate the uncertainty surrounding \(\beta\) into our analysis by drawing it randomly from a uniform distribution: \(\beta\sim\mathcal{U}(1.0,2.0)\). As more ISOs are detected, it will be important to reconsider their size distribution. ### X-ray Flux We further restrict the number of free parameters by assuming the Rubin Observatory will detect 1.5 ISOs per year, the average of the optimistic and conservative rates presented by Hoover et al. (2022). The remaining free parameters are \(a\), \(R\), and composition. We randomly draw a nuclear radius \(a\) by first drawing a value for \(\beta\), and subsequently sampling Equation 11.
We truncate the distribution at \(a_{\text{min}}=25\) m and \(a_{\text{max}}=10\) km, which prohibits ISOs larger than most Solar System comets and also guarantees they are large enough to realistically be detected by the Rubin Observatory (an object with 1/4 the radius of 1I/'Oumuamua and with the same albedo will have \(V=24\) at the same \(r_{\rm peri}=0.25\) au). The new cumulative probability distribution is: \[n(>a)=\frac{(a^{-\beta}-a_{\text{max}}^{-\beta})}{(a_{\text{min}}^{-\beta}-a_{ \text{max}}^{-\beta})}. \tag{12}\] The minimum allowed heliocentric distance \(R=R^{*}\) is determined per Equation 7, and the neutral production is calculated using Equation B11 for four representative pure compositions: H\({}_{2}\), N\({}_{2}\), CO, and H\({}_{2}\)O. We make an important modification, however, by drawing the heliocentric distance scaling exponent from a random uniform distribution \(x\sim\mathcal{U}(-4.33,-1.82)\) as opposed to a fixed \(x=-2\), per Combi et al. (2019). The CX flux is then calculated using Equation B6 and an optically thick cross-section given by Equation B9 and correction factor \(\kappa\). Figure 5 summarizes our results, and demonstrates the viability of using X-rays as a diagnostic probe of ISOs across different compositional assumptions. As discussed earlier, composition has a profound effect on the extent of the outgassed coma and the expected X-ray flux. In the most conservative scenario, in which ISOs are predominantly composed of water ice, about 3% of those found by the Rubin Observatory will exhibit \(F_{\rm X}\gtrsim 2.7\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\), which can be robustly detected by a 10 ks exposure (i.e., \(\tau_{5\sigma}=10\) ks corresponds to a \(5\sigma\) detection; about 6% are accessible with a \(\tau_{5\sigma}=100\) ks exposure). Another 90% of H\({}_{2}\)O-dominated ISOs will exhibit undetectable levels of CX emission (i.e., \(\tau_{5\sigma}>\)1000 ks). If CO is the dominant constituent of ISOs, then about 10% will emit X-rays detectable with a 10 ks exposure. The detectability fraction is even higher for the most volatile compositions, reaching 31% for N\({}_{2}\) ice and 34% for H\({}_{2}\) ice. With a longer 100 ks exposure, these fractions reach 19%, 47%, and 50% for CO, N\({}_{2}\), and H\({}_{2}\), respectively. These statistics suggest that the Rubin Observatory will discover new ISOs amenable to X-ray spectroscopic characterization. If the survey finds 15 2I/Borisov analogues (i.e., CO-dominated interstellar comets) over a ten-year campaign, then approximately \(1-2\) of them will be accessible with a 10 ks EPIC pn exposure. The Rubin Observatory will probably detect one such ISO in its first five years of operation. The approach is similarly effective for 1I/'Oumuamua analogues. X-ray emission should be detectable for about one third of N\({}_{2}\)-dominated and H\({}_{2}\)-dominated ISOs. On the other hand, consistent non-detections of CX emission would point to an alternative, refractory composition. For objects that exhibit nongravitational acceleration, a lack of X-rays may also indicate an alternative mechanism that provides the required force (e.g., radiation pressure). ## 5 Conclusions In this paper, we propose X-ray spectroscopy as a means of detecting outgassed comae and revealing highly volatile compositions of interstellar objects, a new class of astrophysical phenomena. The neutral gas comae of ISOs should undergo charge exchange with solar wind ions in the same manner as has been widely observed with Solar System comets.
We suggest a tiered list of scientific goals that are realistically achievable by existing and concept X-ray missions, sorted by feasibility: 1. Photometric detection of CX (i.e., at \(\geq 5\sigma\) significance) in order to detect an outgassed coma. Modest exposure times of order 10 ks may reveal a statistically significant X-ray flux originating from an interstellar object, particularly when observed at heliocentric distances \(R\lesssim 1\) au. The viable parameter space is depicted in Figure 6 for pn onboard _XMM-Newton_. Confidently detecting X-ray emission over a bandpass of \(0.5-0.7\) keV would immediately reveal the existence of a coma undergoing CX with solar wind ions. X-ray observations of 1I/'Oumuamua near its close approach to Earth would have either confirmed the presence of a coma, or placed an upper bound on the production rate to a point that another mechanism would be necessary to explain the anomalous acceleration. 2. Joint constraints on \(a\) and \(Q_{\rm gas}\) in order to infer an ISO's volatile content. Once an ISO's ephemeris is determined, the X-ray flux is degenerate with \(a\) and \(\mathcal{H}\). By assuming a range of plausible albedos, one can constrain \(a\) from an optical light curve (e.g., Mashchenko, 2019). The radius is also constrained by the energetics of nongravitational acceleration. For example, Seligman and Laughlin (2020) suggested various ices (H\({}_{2}\), Ne, N\({}_{2}\), Ar, O\({}_{2}\), Kr, Xe, CO\({}_{2}\), H\({}_{2}\)O) that could be compatible with 1I/'Oumuamua's outbound acceleration. For a subset of these species (N\({}_{2}\), H\({}_{2}\), CO, Ne, plus CH\({}_{4}\)), Jackson & Desch (2021) showed the relationship between nongravitational acceleration, albedo, and mean spherical radius. Across these different species, the allowable range for \(a\) lies between \(20-25\) m for albedo \(p>0.6\). However, their respective sublimation energies (\(\mathcal{H}\)) vary by a factor of 10, which, according to our model, implies \(\mathcal{A}\) varies by a factor of 100. This interaction area is probed by the X-ray flux from CX. If the nuclear radius of an ISO can be constrained to within a factor of order unity, then an anomalously high \(F_{\mathrm{X}}\) would immediately reveal an exotic, highly volatile composition due to the inverse proportionality between \(Q_{\mathrm{gas}}\) and \(\mathcal{H}\). We reiterate that a non-detection in X-rays would favor a low-volatility composition (e.g., H\({}_{2}\)O, CO\({}_{2}\)) or an alternative acceleration mechanism. Figure 5: Cumulative distribution function of X-ray flux (\(F_{\rm X}\)) for ISOs detected by the Rubin Observatory. Equivalently, the fraction of ISOs detected by the Rubin Observatory that will exhibit a given X-ray flux or less. Four distributions are depicted, each corresponding to an ISO population comprised of a different pure volatile: H\({}_{2}\)O (blue), CO (red), N\({}_{2}\) (green), or H\({}_{2}\) (orange). On the top x-axis, \(F_{\rm X}\) is converted into an exposure duration (\(\tau_{5\sigma}\)) that would yield a \(5\sigma\) detection with _XMM-Newton_ EPIC pn. The vertical blue line delineates \(F_{\rm X}\simeq 2.7\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\); a signal which would be detected at \(5\sigma\) confidence with a 10 ks exposure. For example, about 10% of CO-dominated ISOs detected by the Rubin Observatory will exhibit this X-ray flux or greater. 3. Measuring line strength ratios to robustly determine the coma composition. Mullen et al.
(2017) demonstrated that relative CX emission line strengths (including strong transitions from Cv, Cvi, and Ovii) depend on the neutral target medium. Specifically, N\({}_{2}\), H\({}_{2}\)O, O, OH, CO, and CO\({}_{2}\) targets were considered in their study. A high-fidelity X-ray spectrum may be achievable by existing facilities for the brightest ISOs. However, this method is most amenable to next-generation X-ray telescopes which feature both large effective areas and high spectral resolution. A detailed study of this approach, and its feasibility with X-ray concept missions, would be scientifically useful. Baseline estimates of the Rubin Observatory's yield suggest around 15 new ISOs over a ten-year campaign. If these objects are predominantly composed of CO, about one or two will exhibit detectable levels of X-ray flux. In the case that most ISOs are comprised of H\({}_{2}\) ice, CX will be observable for about five of them. Efforts to understand this mysterious population benefit from broadband spectroscopy -- infrared, optical, and UV observations provide complementary insights into physical and chemical properties of the nucleus and coma. As X-ray observations of CX with comets have been an active area of study, similar efforts directed toward ISOs will at the very least reaffirm findings made at other wavelengths, and potentially discern the true compositions of these minor bodies. We thank the referees of this manuscript for their constructive comments, which significantly improved the presentation of the paper. We extend particular thanks to Carey Lisse for a number of detailed and insightful recommendations which led to a much stronger final version. We also thank Greg Laughlin for his review of the manuscript and for constructive discussions. D.Z.S. acknowledges financial support from the National Science Foundation Grant No. AST-2107796, NASA Grant No. 80NSSC19K0444 and NASA Contract NNX17AL71A from the NASA Goddard Spaceflight Center. D.Z.S. is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2202135. This research award is partially funded by a generous gift of Charles Simonyi to the NSF Division of Astronomical Sciences. The award is made in recognition of significant contributions to Rubin Observatory's Legacy Survey of Space and Time. ## Appendix A Review of Solar Wind Charge Exchange Since the discovery of strong X-ray emission from a cool cometary source, C/1996 B2 (Hyakutake), by Lisse et al. (1996), solar wind charge exchange has gained considerable traction for its ability to explain X-ray emission from objects within the Solar System. CX models have successfully explained observed X-rays from the volatile comae of Solar System comets (Cravens, 1997; Wegmann et al., 1998), as well as emission originating from Earth (Wargelin et al., 2004; Fujimoto et al., 2007), Mars (Kallio et al., 1997; Dennerl, 2002), Venus (Dennerl et al., 2002), and Jupiter (Metzger et al., 1983; Cravens et al., 2003). In order to produce X-ray emission, heavy ions from the solar wind interact with cool atomic and molecular gas surrounding the object. This process also produces a source of X-ray emission from star-forming galaxies such as M82 (e.g., Liu et al., 2012; Zhang et al., 2014). In such a galaxy, CX occurs at the interface between hot and cool interstellar gases, and is likely responsible for the enhanced forbidden transitions of K\(\alpha\) triplet emissions from He-like ions and high-order transitions in the Lyman series of H-like ions (Zhang et al., 2014).
There has also been tentative detection of charge exchange in galaxy clusters (Gu et al., 2018). The general CX reaction is \[\mathrm{A^{q+}+N\rightarrow(A^{*})^{(q-1)+}+N^{+}},\] (A1) where the ion \(\mathrm{A^{q+}}\) gains an electron from the neutral species N (atomic or molecular, often H, H\({}_{2}\) and He). The integer q denotes the initial ionization state of the species A. The electron is in an excited state, denoted by the superscript \(*\). As it cascades to the ground state, the electron emits X-ray and UV photons. Implicit in this equation are initial and final state quantum numbers (\(n\), \(l\), and \(m\)) of the transferred electron. The most probable final-state principal quantum number \(n\) is given by Equation 2.6 of Janev & Winter (1985), which is an approximation that is generally applied while modeling this process (Smith et al., 2012). The CX interaction cross-section (\(\sigma_{\mathrm{sq}}\)) for a given species (denoted by subscript s) and charge (denoted by subscript q) was modeled by Wegmann et al. (1998), and is given by, \[\sigma_{\mathrm{sq}}=\,\left(\,\frac{\mathrm{q}-1}{\mathrm{q}^{2}/2n^{2}-|I_{ \mathrm{p,s}}/27.2\,\mathrm{eV}|}\,\right)^{2}\times 0.88\times 10^{-16}\;( \mathrm{cm}^{2}).\] (A2) The variables q and \(I_{\mathrm{p,s}}\) in Equation A2 correspond to the dimensionless integer charge of the ion and ionization potential (measured in eV) of the target species, respectively. For comets, the solar wind typically interacts with volatile coma particles. Some of the most common target species outgassed by Solar System comets are water and its constituents (O, OH, H) (Cochran et al., 2015; Biver & Bockelee-Morvan, 2016; Bockelee-Morvan & Biver, 2017). All of these species have similar ionization energies \(\sim 13\) eV (Wegmann et al., 1998). CX cross-sections are \(\sim 10^{-15}\) to \(10^{-14}\) cm\({}^{2}\) for heavy solar wind ions such as Oviii, Ovii, Cvii, Cvi, Nvii, Neix, Six, and Fexii. That is, charge exchange occurs when these ions come within \(\sim 1\) nm of the neutral targets. The energy difference between the captured electron's initial (excited) state and ground state (\(E_{\mathrm{excit}}\)) can reach several hundred eV. Therefore, these transitions provide significant contributions to EUV and X-ray radiation. CX emission from cometary comae has rich observational and theoretical backing, as reviewed by Bodewits et al. (2012) and Dennerl et al. (2012). Our main concern is charge exchange between the solar wind and the outgassed species within an ISO's coma. Assuming that a given solar wind ion undergoes charge exchange once during its passage through the neutral medium, the X-ray power density \(P_{\mathrm{sj}}\) for transition (j) at a given position \(\mathbf{r}\) is: \[P_{\mathrm{sj}}(\mathbf{r})=\Phi_{\mathrm{sq}}(\mathbf{r})\,\sigma_{\mathrm{sq }}\,b_{\mathrm{sqj}}\,n_{\mathrm{n}}(\mathbf{r})\,\Delta E_{\mathrm{sqj}}\,,\] (A3) (Cravens, 1997, 2000; Cravens et al., 2009) which has units of eV cm\({}^{-3}\) s\({}^{-1}\). Here \(\Phi_{\mathrm{sq}}\) denotes the solar wind flux of a given ion, \(\sigma_{\mathrm{sq}}\) is the CX cross-section (Equation A2), \(b_{\mathrm{sqj}}\) is the spectral cascading probability for transition (j) which releases a photon of energy \(\Delta E_{\mathrm{sqj}}\), and \(n_{\mathrm{n}}\) is the number density of neutral species. 
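Equation A2 is easy to evaluate numerically. As a sanity check of the quoted \(\sim 10^{-15}\) to \(10^{-14}\) cm\({}^{2}\) range, the sketch below (Python) computes the cross-section for a bare oxygen ion on a water-group target; the capture level \(n\) uses a commonly quoted approximate form of Equation 2.6 of Janev & Winter (1985), which should be treated as an assumption of this illustration rather than the exact expression.

```python
import numpy as np

# Illustrative evaluation of the CX cross-section of Equation A2.
def n_capture(q, ip_ev):
    # Approximate most-probable capture level (after Janev & Winter 1985);
    # an assumed closed form, adequate for an order-of-magnitude check.
    return q * np.sqrt(13.6 / ip_ev) / np.sqrt(1.0 + (q - 1.0) / np.sqrt(2.0 * q))

def sigma_sq(q, ip_ev):
    """Cross-section [cm^2] for ion charge q on a target with I_p in eV."""
    n = round(n_capture(q, ip_ev))
    return ((q - 1.0) / (q**2 / (2.0 * n**2) - ip_ev / 27.2))**2 * 0.88e-16

# Bare oxygen (q = 8) on an H2O target (I_p ~ 12.6 eV): a few 1e-15 cm^2,
# consistent with the range quoted in the text.
print(sigma_sq(8, 12.6))
```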
The solar wind flux may be written as \(\Phi_{\mathrm{sq}}=f_{\mathrm{sq}}n_{\mathrm{SW}}u_{\mathrm{SW}}\), where \(n_{\mathrm{SW}}\) and \(u_{\mathrm{SW}}\) correspond to solar wind proton number density and velocity, respectively. The solar wind fraction of a given species/charge is \(f_{\mathrm{sq}}\). These quantities are functions of position in space. For example, the unshocked solar wind has a typical speed of \(\sim 400\) km s\({}^{-1}\) and density of \(\sim 0.4\) cm\({}^{-3}\) at 5 au (Cravens et al., 2003). The density \(n_{\mathrm{SW}}\) falls off as the square of distance from the host star (Cravens, 2000). While the solar wind flux varies over time, the fraction of heavy ions to protons in the solar wind remains fairly consistent at \(f\approx 10^{-3}\). Intensity is obtained by integrating Equation A3 over a given path length (i.e., performing a line integral). ## Appendix B Charge Exchange with Outgassed Comae Our prescription for CX emission is calibrated to observations of C/1996 B2 (Hyakutake) and early models that described its X-ray luminosity. Wegmann et al. (1998) calculated the total X-ray emissivity by summing over all solar wind ions in Equation A3. They adopted parameter values of \(\sigma_{\mathrm{sq}}=3\times 10^{-15}\) cm\({}^{2}\), \(f_{\mathrm{O}}=0.0005\), and 1100 eV emitted per oxygen ion in the solar wind, based on summing the CX excited state energies of all ions and weighting them according to their relative abundance fraction. Further, they assumed an 'effectivity' of 0.4, which replaces the \(b_{\mathrm{sqj}}\) terms. The power density is approximately \[P_{\mathrm{X}}(\mathbf{r}^{\prime})=4\times 10^{-20}n_{\mathrm{SW}}n_{\mathrm{n}}( \mathbf{r}^{\prime})\,\mathrm{erg}\,\mathrm{cm}^{3}\,\mathrm{s}^{-1}\,,\] (B4) where the density of neutrals at distance \(\mathbf{r}^{\prime}\) from the comet's nucleus follows \[n_{\rm n}({\bf r}^{\prime})=\,\left(\,\frac{Q_{\rm gas}}{4\pi v\left|{\bf r}^{ \prime}\right|^{2}}\right)e^{-|{\bf r}^{\prime}|/\lambda}\,,\] (B5) for a total neutral production rate \(Q_{\rm gas}\), outflow velocity \(v\), and photodestruction length scale \(\lambda\). Wegmann et al. (1998) adopted \(Q_{\rm gas}=1.5\times 10^{29}\) s\({}^{-1}\) and \(n_{\rm SW}=7\) cm\({}^{-3}\). By neglecting photodestruction, which Cravens (1997) deemed negligible within \(5\times 10^{5}\) km, and adopting a nominal \(v=1\) km s\({}^{-1}\), integration over a sphere of beam radius \(135,000\) km yields \(L_{\rm X}\sim 6\times 10^{15}\) erg s\({}^{-1}\), comparable to the measured luminosity and other estimates in the literature. In what follows, we determine expectations for CX emission from the comae of other minor bodies. ### Luminosity from the Optically Thick Coma CX produces a surface luminosity, \(4\pi I\) (Cravens et al., 2003), given by the equation, \[4\pi I=n_{\rm SW}\,u_{\rm SW}f\,N\,\Delta E\,.\] (B6) Each ion contributes, on average, \(N\) photons of typical energy \(\Delta E\), and the solar wind minor ion fraction is given by \(f\). The total luminosity is obtained by integrating Equation B6 over the effective surface area. Our baseline model assumes that this area is the region where the coma is optically thick to CX. That is, it satisfies the criterion: \[\int_{S}\left(\,\sigma_{\rm sq}n_{\rm n}({\bf r}^{\prime})\,\right)ds>1\] (B7) for a linear projected path \(S\) of the solar wind through the coma, where \({\bf r}^{\prime}\) is the distance from the cometary nucleus as in Equation B5.
As long as the coma neutral column is large enough that the solar wind is fully depleted of all highly stripped minor ions, the individual transition probabilities are not important. For simplicity, we also fix the cross-section at \(\sigma_{\rm sq}=3\times 10^{-15}\) cm\({}^{2}\). This value is consistent with expectations from the ionization potentials of H\({}_{2}\) (\(I_{p,s}\approx 15.4\) eV) and N\({}_{2}\) (\(I_{p,s}\approx 15.6\) eV). Considering these compositions and those more typical of comets, we find \(2\times 10^{-15}\) cm\({}^{2}\leq\sigma_{\rm sq}\leq 13\times 10^{-15}\) cm\({}^{2}\) for ion charges \(+5\leq q\leq+7\). We assume that the coma is approximately spherical and follows a neutral density profile given by Equation B5. We define the impact parameter, \(b\), as the minimum distance between the path \(S\) and the nucleus. Also, let \(R\) be the distance between the Sun and the nucleus, and assume that \(R\gg b\). We parameterize the path \(S\) as a function of \({\bf r}^{\prime}\). We define the dimensionless parameter \(x\equiv|{\bf r}^{\prime}|/b\) and a coefficient \(K\equiv\sigma_{\rm sq}Q_{\rm gas}/4\pi v\). By substituting Equation B5 into Equation B7 with these new parameters, the optically thick transition occurs where \[\frac{2K}{b}\int_{1}^{\infty}\frac{1}{x\sqrt{x^{2}-1}}e^{-bx/\lambda}dx=1\,,\] (B8) or, equivalently, where \[\frac{2K}{b}\int_{0}^{\pi/2}e^{-b\,{\rm sec}\theta/\lambda}d\theta=1\,.\] (B9) Equation B9 is equivalent to a function \(b=b(\sigma_{\rm sq},v,\lambda,Q_{\rm gas})\) which may be evaluated numerically. It returns a distance within which the coma is optically thick to CX. For physically plausible photodestruction scales of \(\lambda=10^{8}-10^{11}\) cm (Combi et al., 2004), a reasonable approximation is that \(b\propto Q_{\rm gas}\) in the regime \(Q_{\rm gas}<10^{28}\) s\({}^{-1}\). Figure 6 shows the validity of this approximation for different assumed \(\lambda\). As \(\lambda\rightarrow\infty\), the function approaches a linear relationship. Taking C/1996 B2 (Hyakutake) as a nominal example (hence the above assumed \(\lambda\) and \(Q_{\rm gas}\) values), we find \(b\approx 10,000\) km, which is in reasonable agreement with the 30,000 km penetration depth estimated by Wegmann et al. (1998). In order to estimate the total X-ray luminosity, we assume \(n_{\rm SW}=7\) cm\({}^{-3}\) at 1 au, \(u_{\rm SW}=400\) km s\({}^{-1}\), \(f=0.001\), \(N=1\), and \(\Delta E=550\) eV. The above equations predict \(L_{\rm X}=8.2\times 10^{14}\) erg s\({}^{-1}\) from the optically thick region alone. This estimate is about a factor of 5 below the total luminosity measured by _ROSAT_. Using a refined hydrodynamical model, Wegmann et al. (2004) also estimated \(L_{\rm X}\) for C/1996 B2 (Hyakutake) during its close approach to Earth. Their measurements were centered around \(10^{16}\) erg s\({}^{-1}\), which is higher than the initial estimate from Lisse et al. (1996). The discrepancy between these literature estimates and our own is probably due to significant emission that took place in optically thin regions of the coma, currently unaccounted for in our model for the optically thick zone. Therefore, it is useful to calibrate our model and correct for this component as follows. ### Extensions to the Analytic Model Holding other parameters constant, \(b\) scales linearly with \(Q_{\rm gas}\) in the regime \(Q_{\rm gas}<10^{30}\) s\({}^{-1}\) for \(\lambda\sim 5\times 10^{10}\) cm.
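Equation B9 can be solved with a one-dimensional root finder. A minimal sketch (Python with NumPy/SciPy) using the Hyakutake-like parameters adopted above is given below; the simple grid quadrature is our own shortcut (the integrand vanishes smoothly at \(\theta=\pi/2\)), not the quadrature used in our analysis.

```python
import numpy as np
from scipy.optimize import brentq

# Solve Equation B9 for the optically thick radius b.
sigma, Q, v, lam = 3e-15, 1.5e29, 1e5, 5e10   # cm^2, s^-1, cm/s, cm
K = sigma * Q / (4.0 * np.pi * v)
theta = np.linspace(0.0, np.pi / 2.0, 20001)[:-1]   # drop the endpoint

def residual(b):
    return (2.0 * K / b) * np.trapz(np.exp(-b / (lam * np.cos(theta))), theta) - 1.0

b = brentq(residual, 1e6, 1e12)         # bracket spans the physical range
print(b / 1e5, "km")                    # ~1e4 km, as quoted above
print(np.pi * b**2 / Q**2, "s^2 cm^2")  # ~1.5e-40: the coefficient Gamma below
```

Re-running this for a grid of \(Q_{\rm gas}\) values reproduces the near-linear \(b\)-\(Q_{\rm gas}\) relation of Figure 6.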
The CX interaction area in this regime can be approximated by \(\pi b^{2}\approx\Gamma Q_{\rm gas}^{2}\), with \(\Gamma=1.5\times 10^{-40}\) s\({}^{2}\) cm\({}^{2}\). A similar conclusion was reached by Wegmann et al. (2004), who developed a hydrodynamical model of X-ray emission morphology and applied it to observations of comets. They derived a relationship \(L_{\rm X}=CHQ_{\rm gas}^{2}\) where \(H=u_{\rm SW}n_{\rm SW}fN\Delta E\) is the heavy ion solar wind flux. By fitting their estimates of \(HQ_{\rm gas}^{2}\) to observed X-ray luminosities of comets, they determined a constant of proportionality \(C=10^{-38}\) s\({}^{2}\) cm\({}^{2}\). Let \(\mathcal{A}\) define an effective, optically thick surface area that would yield a given \(L_{\rm X}\). Then the relationship found by Wegmann et al. (2004) is equivalent to \(\mathcal{A}=CQ_{\rm gas}^{2}\). We adopt a correction factor \(\kappa\equiv\sqrt{C/\Gamma}\approx 8.1\) that defines a new effective radius, \[\tilde{b}\equiv\kappa b\,.\] (B10) In other words, \(\kappa\) is a factor that increases the radius of the CX interaction area from \(b\) to \(\tilde{b}\), thereby approximately accounting for emission from regions beyond the optically thick zone. Prior observations of CX within cometary comae (e.g. Lisse et al., 1999; Lisse et al., 2004) have produced detections of significant X-ray emission extending as little as \(\sim 5\times 10^{4}\) km (for 2P/Encke in 2003), and as far as an apparent limit at \(\sim 10^{6}\) km (for C/1996 B2 (Hyakutake), C/1990 K1 (Levy), and C/1995 O1 (Hale-Bopp)). We adopt \(Q_{\rm gas}\propto a^{2}R^{-2}\), where \(a\) is the body's radius, and \(R\) is the instantaneous heliocentric distance. The \(R^{-2}\) dependency is the same as that of the incident solar radiation. While physically motivated, this scaling simplifies true cometary activity. Following a long-baseline survey of water outgassed by 61 comets, Combi et al. (2019) found generally steeper scaling: \(Q(\rm H_{2}O)\propto R^{x}\), with \(-4.33\leq x\leq-1.82\) depending on taxonomic class. This finding is incorporated into our expectations for future ISO observations in §4. Our model must also account for the fact that more volatile species will sublimate and outgas at higher rates (potential compositions are discussed further in §4). We use the following scaling relation, \[Q_{\rm gas}=1.5\times 10^{29}\times\Big{(}\frac{a}{2.4\,{\rm km}}\Big{)}^{2} \Big{(}\frac{R}{1\ {\rm au}}\Big{)}^{-2}\Big{(}\frac{\mathcal{H}}{9.3\times 10^{-23 }\,{\rm kJ}}\Big{)}^{-1}{\rm s}^{-1}\,,\] (B11) which is calibrated to the nuclear radius of C/1996 B2 (Hyakutake) (Lisse et al., 1999), and the production rate adopted by Wegmann et al. (1998). Figure 6: Relationship between outgassing rate \(Q_{\rm gas}\) and radius \(b\) within which a coma is optically thick to CX. Since \(b\) lacks a closed form (Equation B9), it is solved for numerically. The function is plotted for several assumptions of the ionization scale \(\lambda\). In all cases \(v=1\) km s\({}^{-1}\). Our main analysis assumes \(\lambda=5\times 10^{10}\) cm (red line). As \(\lambda\) increases, the function approaches a linear relationship between \(Q_{\rm gas}\) and \(b\) (black dashed line). Production rates for OH were measured by Schleicher (1996), and varied from \(\log_{10}Q(\mathrm{OH})=28.89\) (at \(R=0.94\) au) to \(\log_{10}Q(\mathrm{OH})=29.17\) (at \(R=1.08\) au).
In the above, \(\mathcal{H}\) is the total energy input for each coma particle, \[\mathcal{H}\equiv\Delta H/\mathcal{N}_{A}+\gamma k_{B}\,T_{\mathrm{Sub}}\,.\] (B12) The above expression involves the enthalpy of sublimation (\(\Delta H\)), temperature of sublimation (\(T_{\mathrm{Sub}}\)), and adiabatic index of the escaping vapor (\(\gamma\)), all of which are material dependent (Table 1). The quantity \(\mathcal{H}\) is the sum of two components: the energy required to sublimate an ice molecule (the first term) and the energy required to accelerate the molecule to the outflow velocity (the second term). In the case of water, \(\mathcal{H}_{\mathrm{H_{2}O}}=9.3\times 10^{-23}\) kJ (values for other species are listed in Table 1). Chemical properties also determine the outgassing velocity \(v\): \[v\simeq c_{s}=\sqrt{\gamma k_{B}\,T_{\mathrm{Sub}}/\mu m_{H}}\,,\] (B13) for mean molecular weight \(\mu\) and sound speed \(c_{s}\). For typical values \(v=0.3-1\) km s\({}^{-1}\), \(Q<10^{30}\) s\({}^{-1}\), photodestruction timescales \(t_{\mathrm{photo}}<10^{7}\) s, and \(\lambda=vt_{\mathrm{photo}}\), the optically thick radius does not exceed \(10^{6}\) km under our model. It is limited by the exponential decay term in Equation B5. We reiterate that only monatomic and homonuclear diatomic species (having zero dipole moment) will be more amenable to CX X-ray characterization than to infrared fluorescence spectroscopy. The normalization in Equation B11 is to C/1996 B2 (Hyakutake), and the expression should be considered an order-of-magnitude estimate of \(Q_{\mathrm{gas}}\) for other objects. Ideally, one may use independent estimates of \(Q_{\mathrm{gas}}\) obtained either from infrared spectroscopy or a more detailed theoretical model that accounts for the target's composition. For most Solar System comets, \(Q_{\mathrm{gas}}\simeq Q(\mathrm{H_{2}O})+Q(\mathrm{CO_{2}})+Q(\mathrm{CO})\), which is the sum of production rates of the most common volatile components in comets (Bockelee-Morvan and Biver, 2017). Nevertheless, Equation B11 enables baseline predictions for new minor bodies, and is especially useful for population-level predictions when given distributions for \(a\) and perihelion \(r_{\mathrm{peri}}\). Additionally, we assume that the solar wind proton number density is given by, \[n_{\mathrm{SW}}(R)=7\,\mathrm{cm}^{-3}\,\Big{(}\frac{R}{1\,\mathrm{au}}\Big{)} ^{-2},\] (B14) which is maximized at perihelion when \(R=r_{\mathrm{peri}}\). Note that flares can enhance the solar wind ion flux, thus improving the feasibility of CX X-ray observations at larger \(R\). For example, a flare nearly doubled the soft X-ray count rate from C/1999 S4 (LINEAR) (Lisse et al., 2001). Combining Equations B6 & B10, we arrive at a general model for the X-ray luminosity \[L_{\mathrm{X}}=\pi\tilde{b}^{2}n_{\mathrm{SW}}u_{\mathrm{SW}}fN\Delta E,\] (B15) which is a function of \(\{\sigma_{\mathrm{sq}},\lambda,a,r_{\mathrm{peri}},\mu,\Delta H,\gamma,T_{ \mathrm{Sub}}\}\), assuming observations take place at perihelion. Also, a nominal \(\lambda=5\times 10^{10}\) cm is adopted for our analysis. We use this model to predict the luminosity of CX emission from ISOs in §4, and a schematic of the entire CX process with ISOs is shown in Figure 1. It is worth explicitly highlighting our model's dependence on perihelion distance.
We demonstrated an approximately linear relationship between \(b\) and \(Q_{\mathrm{gas}}\), where \(Q_{\mathrm{gas}}\propto r_{\mathrm{peri}}^{-2}\) through its dependence on the solar radiation flux \(F_{\odot}\propto r_{\mathrm{peri}}^{-2}\). The CX surface area follows \(\mathcal{A}\propto b^{2}\), and the incident heavy ion flux follows \(n_{\mathrm{SW}}\propto r_{\mathrm{peri}}^{-2}\). Therefore, \(L_{\mathrm{X}}\propto r_{\mathrm{peri}}^{-6}\): the perihelion of an ISO strongly dictates whether its CX X-ray emission is detectable (similar scaling relationships were explored by Lisse et al., 1999).
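The full chain of this appendix (Equation B9 for \(b\), Equation B10 for \(\tilde{b}\), Equations B14-B15 for \(L_{\rm X}\)) can be assembled into a few lines. The sketch below (Python) is a simplified end-to-end illustration with the nominal parameters of this appendix; taking the measured \(Q({\rm CO})\) for 2I/Borisov and approximating its geocentric distance as 2 au (an assumption of the sketch), it recovers the corresponding Table 2 entries to within roughly 10%.

```python
import numpy as np
from scipy.optimize import brentq

# End-to-end sketch: production rate -> optically thick radius b (Eq. B9)
# -> effective radius (Eq. B10) -> luminosity (Eqs. B14-B15) -> flux.
AU, ERG_PER_EV = 1.496e13, 1.602e-12
theta = np.linspace(0.0, np.pi / 2.0, 20001)[:-1]

def b_thick(Q, sigma=3e-15, v=1e5, lam=5e10):
    K = sigma * Q / (4.0 * np.pi * v)
    f = lambda b: (2.0 * K / b) * np.trapz(
        np.exp(-b / (lam * np.cos(theta))), theta) - 1.0
    return brentq(f, 1e4, 1e14)

def L_x(Q, R_au, kappa=8.1, f_heavy=1e-3, u_sw=4e7, N=1.0, dE_ev=550.0):
    n_sw = 7.0 * R_au**-2                 # Equation B14
    b_eff = kappa * b_thick(Q)            # Equation B10
    return np.pi * b_eff**2 * n_sw * u_sw * f_heavy * N * dE_ev * ERG_PER_EV

L = L_x(1.07e27, 2.07)                    # 2I/Borisov, measured Q(CO)
print(L)                                  # ~7e11 erg/s (Table 2: 7.5e11)
print(L / (4.0 * np.pi * (2.0 * AU)**2))  # ~6e-17 erg/s/cm^2 at ~2 au
```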
2309.10944
Proof of the Verjovsky Conjecture
In this paper we present a proof of the Verjovsky conjecture: Every codimension-one Anosov flow on a manifold of dimension greater than three is topologically equivalent to the suspension of a hyperbolic toral automorphism. In fact, the conjecture is derived from a possibly more general result, which says that for every codimension-one volume-preserving Anosov flow on a manifold of dimension greater than three, a suitable time change guarantees that the stable and unstable sub-bundles are jointly integrable.
Khadim War
2023-09-19T21:54:57Z
http://arxiv.org/abs/2309.10944v1
# Proof of the Verjovsky Conjecture ###### Abstract In this paper we present a proof of the Verjovsky conjecture: Every codimension-one Anosov flow on a manifold of dimension greater than three is topologically equivalent to the suspension of a hyperbolic toral automorphism. In fact, the conjecture is derived from a possibly more general result, which says that for every \(\mathcal{C}^{4}\) codimension-one volume-preserving Anosov flow on a manifold of dimension greater than three, a suitable time change guarantees that the stable and unstable sub-bundles are jointly integrable. 1 Footnote 1: With pleasure, we thank Andrei Agrachev, Thierry Barbot and Stefano Luzzatto for being the first to listen to the proof. We thank Keith Burns, Boris Hasselblatt, Rafael Potrie, Mark Pollicott, Raul Ures, Marcelo Viana and Amie Wilkinson for encouraging comments. We also thank Federico Rodriguez-Hertz for first bringing this conjecture to my attention. We thank Oliver Butterley, with whom we discussed many of the techniques used here. ###### Contents * 1 Introduction and results * 2 Anosov flows and reparametrisation * 2.1 Reparametrisation of Anosov flow and Parry's formula * 2.2 Parry's synchronisation * 3 Special synchronisation and Integrability * 3.1 Special synchronisation * 3.2 Integrability and global cross section * 3.3 Proof of Theorem B * 4 Special synchronisation equation * 5 The time change * 5.1 General strategy * 5.2 Time change on sections * 5.2.1 Definition of the perturbation * 5.2.2 Estimates of the perturbation ## 1 Introduction and results Anosov flows are central examples of chaotic dynamical systems. Let \(M\) be a compact connected Riemannian manifold. A flow \(F:=\{f^{t},t\in\mathbb{R}\}:M\to M\) is called _Anosov_ if it is _uniformly hyperbolic_ in the sense that the tangent bundle splits into three invariant sub-bundles \(\mathbb{E}^{c}\), \(\mathbb{E}^{s}\) and \(\mathbb{E}^{u}\), where \(\mathbb{E}^{c}\) is one-dimensional and contains the flow direction, and where the vectors in \(\mathbb{E}^{s}\) (resp. \(\mathbb{E}^{u}\)) are exponentially contracted (resp. expanded). The sub-bundles \(\mathbb{E}^{s}\) and \(\mathbb{E}^{u}\) are referred to as the stable and unstable bundles. The three main examples of Anosov flows are: (1) Suspensions over Anosov diffeomorphisms; (2) Geodesic flows on manifolds of negative curvature; (3) Small perturbations of such geodesic flows (Anosov flows are structurally stable, but typically the perturbed flow will fail to be a geodesic flow). Two flows are said to be _topologically equivalent_ if there exists a homeomorphism which maps orbits to orbits homeomorphically and preserves the orientation of the orbits. It is natural to ask for a classification of Anosov flows up to topological equivalence. An Anosov flow is said to be _codimension-one_ if the stable or unstable bundle is one-dimensional. A flow admits a global cross section if there exists a closed codimension-one submanifold which intersects every orbit transversally; consequently the ambient manifold is topologically a suspension manifold. The geodesic flow on a negatively curved compact surface is an example of a three-dimensional Anosov flow with no global cross-section (if it had a global cross-section it would contradict a result of Plante [14, Theorem 4.8]). Consequently it cannot be topologically equivalent to the suspension of an Anosov diffeomorphism. However, geodesic flows are never codimension-one in higher dimensions.
In the 1970s Verjovsky made a conjecture concerning this question in higher dimensions. **The Verjovsky Conjecture.**_Any codimension-one Anosov flow on a closed manifold of dimension greater than three is topologically equivalent to the suspension flow of a hyperbolic toral automorphism._ If true, this conjecture would give a complete classification of codimension-one Anosov flows, up to topological equivalence, in higher dimensions. Plante [13] proved the Verjovsky Conjecture when the underlying manifold has solvable fundamental group. Ghys [8] proved this conjecture when \(\mathbb{E}^{su}\) is \(\mathcal{C}^{1}\) or when the codimension-one sub-bundle is \(\mathcal{C}^{2}\) and the flow is volume preserving (i.e., \(\dim\mathbb{E}^{u}=1\) and \(\mathbb{E}^{cs}\) is \(\mathcal{C}^{2}\), or with stable and unstable swapped). These results were improved by Simic [16], who proved the conjecture under the additional assumption that \(\mathbb{E}^{su}\) is Lipschitz, and then in [17] for the case where the codimension-one sub-bundle is \(\mathcal{C}^{1+\alpha}\) for all \(\alpha<1\). The complete resolution of the conjecture was announced by Asaoka [2] but a gap was found in a result of Simic [18] that had been used in the proof [2, Erratum]. In this article, using many of the ideas developed by Simic, we complete this work. **Theorem A**.: _The Verjovsky Conjecture is true._ In order to prove the above result we first prove a result related to integrability of the \(\mathbb{E}^{su}\) sub-bundle. A sub-bundle is said to be _integrable_ if there exists a foliation tangent to it. The integrability (or non-integrability) of sub-bundles is important from a dynamical systems point of view in many situations. The sub-bundles \(\mathbb{E}^{s}\) and \(\mathbb{E}^{u}\) are both integrable but in general the sub-bundle \(\mathbb{E}^{su}\) is not _integrable_. In particular the set of Anosov flows where the stable and unstable bundles are not jointly integrable is \(\mathcal{C}^{1}\)-open and \(\mathcal{C}^{k}\)-dense (for all \(k\in\mathbb{N}\)) in the set of all Anosov flows [7] (see references within concerning the prior work of Brin). Nevertheless we are able to prove integrability of \(\mathbb{E}^{su}\) for certain exceptional Anosov flows. **Theorem B**.: _Let \(F:=\{f^{t},t\in\mathbb{R}\}:M\to M\) be a \(\mathcal{C}^{2}\) codimension-one volume-preserving Anosov flow with \(\dim(M)>3\). If \(F\) is specially synchronisable then it is topologically equivalent to an Anosov flow whose stable and unstable bundles are jointly integrable._ One of the main difficulties in integrating these bundles is the lack of regularity: typically the sub-bundles of Anosov flows are merely Hölder. In our setting \(\mathbb{E}^{s}\) and \(\mathbb{E}^{cu}\) are \(\mathcal{C}^{1}\), but it cannot be hoped in general that \(\mathbb{E}^{u}\) is better than Hölder. The ideas of the proof of the above theorem are inspired by techniques developed by Luzzatto, Tureli and the author [11] for integrating bundles that are just continuous. Section 2 is devoted to recalling general facts about Anosov flows and their reparametrisations. In this section, we will recall a bunching condition that guarantees a certain regularity of the stable bundle, which is used to define Parry's synchronisation. This is the parametrisation for which the stable Jacobian is constant.
We believe that this passage to Parry's synchronisation is not necessary for the proof of the conjecture, but it simplifies the calculations for constructing the appropriate time change for integrability. In Section 3.1, we give the definition of special synchronisation for codimension-one Anosov flows. In Section 3.2, we discuss the integrability of continuous bundles and the existence of a global cross section. In Section 3.3 we prove Theorem B. Section 5 is devoted to proving that every codimension-one Anosov flow on a manifold of dimension at least four is specially synchronisable. **Remark 1**.: That the Verjovsky conjecture is true implies that codimension-one Anosov flows on higher dimensional manifolds only exist on manifolds with solvable fundamental group. Consequently Plante's result [13] covered all cases, but we do not know how to directly show this and take advantage of the previous work. Combining Theorem B with various known results allows us to prove Theorem A. First we recall the following two results. **Theorem** (Verjovsky [19, Theorem 1.1]).: _Every codimension-one Anosov flow on a closed manifold of dimension greater than three is topologically transitive._ Footnote 4: As shown by Franks & Williams [6], in the case where both the stable and unstable bundles are at least two-dimensional there are examples of non-transitive Anosov flows. In the three-dimensional case the question of transitivity of Anosov flows remains open. **Theorem** (Asaoka [2, Main Theorem]).: _Every topologically transitive codimension-one Anosov flow is topologically equivalent to a \(\mathcal{C}^{\infty}\) volume-preserving Anosov flow._ Combining the two theorems means that it suffices to consider only \(\mathcal{C}^{\infty}\) volume-preserving Anosov flows. When talking about codimension-one Anosov flows, without loss of generality we suppose \(\dim\mathbb{E}^{s}=1\). We apply the process of synchronisation, as used by Parry [12], which involves a time change of the flow. Full details are contained in Section 2.2 and the pertinent details are described in Proposition 2. In particular, by Proposition 2: _Suppose that a \(\mathcal{C}^{2}\) volume-preserving Anosov flow with \(\dim\mathbb{E}^{s}=1\) and \(\dim\mathbb{E}^{u}\geq 2\) is given. Then Parry's synchronized flow is topologically conjugate to the original flow and satisfies the assumptions of Theorem B._ Consequently, using Theorem B, we conclude that every codimension-one Anosov flow on a manifold of dimension greater than three is topologically equivalent to a codimension-one Anosov flow with integrable \(\mathbb{E}^{su}\). Plante proved the following, using integrability to demonstrate the existence of a global cross-section together with the work of Franks [5] and Newhouse [15] concerning the classification of Anosov diffeomorphisms. **Theorem** (Plante [14, Theorem 3.7]).: _Every codimension-one Anosov flow with integrable \(\mathbb{E}^{su}\) is topologically conjugate to a suspension of a hyperbolic toral automorphism._ Since we have shown that it suffices to consider flows with integrable \(\mathbb{E}^{su}\), the proof of Theorem A follows from the above result. ## 2 Anosov flows and reparametrisation ### Reparametrisation of Anosov flow and Parry's formula In this section we review classical results about the reparametrisation of Anosov flows. Let \(F:=\{f^{t},t\in\mathbb{R}\}:M\to M\) be an Anosov flow generated by a \(C^{1}\) vector field \(Z\). Let \(\mathbb{E}^{s}\oplus\mathbb{E}^{u}\) be the corresponding hyperbolic splitting, i.e.
there exist \(\lambda_{s}\in(0,1),\lambda_{u}>1\) and \(C>0\) such that for every \(t>0\) we have \[\|Df^{t}|_{\mathbb{E}^{s}}\|\leq C\lambda_{s}^{t}\quad\text{ and }\quad\|Df^{-t}|_{ \mathbb{E}^{u}}\|\leq C\lambda_{u}^{-t} \tag{1}\] where \(\|.\|\) is the norm given by the Riemannian structure. The definition of Anosov flow implies the existence of an invariant 1-form \(\eta\) on \(TM\), i.e.5 Footnote 5: This one-form can be explicitly defined by \(\ker(\eta)=\mathbb{E}^{s}\oplus\mathbb{E}^{u}\) and \(\eta(Z)=1\). The invariance of \(\ker(\eta)\) together with \(\eta(Z)=1\) implies that \((f^{t})^{*}\eta=\eta\ \forall t\in\mathbb{R}\). \[(f^{t})^{*}\eta=\eta\quad\text{ and }\quad\eta(Z)\equiv 1\quad\forall t\in \mathbb{R}. \tag{2}\] The codimension-one condition allows us to define a 1-form \(\omega\) such that \[\ker(\omega)=\mathbb{E}^{cu}:=\mathbb{E}^{u}\oplus<Z>.\] In the next section we use a specific vector field \(X\) that generates the stable bundle and impose the condition \[\omega(X)\equiv 1,\] so that \(\omega\) is well defined. It is standard that if \(\alpha_{0}\) is a 1-form such that \(\alpha_{0}(Z)\equiv 1\) then the 1-form \[\alpha=\alpha_{0}-\alpha_{0}(X)\omega\] satisfies the following: there exist \(C_{1}>0,\zeta>0\) such that for all \(t>0\) \[\|(f^{-t})^{*}\alpha-\eta\|\leq C_{1}e^{-\zeta t}. \tag{3}\] In other words, the family of 1-forms \(\{(f^{-t})^{*}\alpha,t>0\}\) defines a dynamical approximation of \(\eta\). We emphasize that in all the situations where this approximation will be used, the vector field \(X\) and the 1-forms \(\omega\) and \(\alpha_{0}\) are \(C^{1}\), which implies that \(\{\alpha^{(t)}:=(f^{-t})^{*}\alpha,t>0\}\) is a \(C^{1}\) approximation of \(\eta\). Let \(\psi:M\rightarrow(0,\infty)\) be a \(C^{1}\) function. It is standard that if a flow generated by a vector field \(Z\) leaves invariant a smooth volume form \(m\) then the flow generated by \(Z/\psi\) leaves invariant the volume form \(\psi m\). In the case of Anosov flows, Anosov [1] proved that if \(F=\{f^{t},t\in\mathbb{R}\}\) is an Anosov flow generated by \(Z\) then the flow \(F_{\psi}:=\{f^{t}_{\psi},t\in\mathbb{R}\}\) generated by \(Z/\psi\) is also an Anosov flow. Furthermore, Parry [12, §6] gives an explicit formulation of the associated invariant bundles. More precisely, if \(\mathbb{E}^{s}_{\psi}\oplus\mathbb{E}^{u}_{\psi}\) is the corresponding invariant splitting of \(F_{\psi}\), we have \[\mathbb{E}^{\sigma}_{\psi}:=\{Y+\xi^{\sigma}(Y)Z:Y\in\mathbb{E}^{\sigma}\}, \sigma=s,u\] where \[\xi^{s}(Y)=\frac{1}{\psi}\int_{0}^{\infty}d(\psi\circ f^{t})(Y)dt\quad\text{ and }\quad\xi^{u}(Y)=\frac{1}{\psi}\int_{0}^{\infty}d(\psi\circ f^{-t})(Y)dt. \tag{4}\] One of the main ingredients in proving the conjecture is the following bunching condition: If \(F\) is a volume-preserving \(C^{2}\) Anosov flow with \(\dim(\mathbb{E}^{s})=1\) and \(\dim(\mathbb{E}^{u})\geq 2\) then there exist \(\zeta,C>0\) such that: \[\|Df^{t}|_{\mathbb{E}^{s}}\|\cdot\|Df^{t}|_{\mathbb{E}^{u}}\|\leq Ce^{-\zeta t },\quad\text{ for all }\quad t>0. \tag{5}\] This bunching condition was initially observed by Plante in [14] and was recently used by the author and Butterley [4] to construct open sets of exponentially mixing Anosov flows. ### Parry's synchronisation Suppose that \(\widetilde{F}=\{\widetilde{f}^{t}:t\in\mathbb{R}\}:M\to M\) is a \(\mathcal{C}^{2}\) volume-preserving Anosov flow with a one-dimensional stable bundle and an unstable bundle of dimension at least two.
The \(\mathcal{C}^{r}\) section theorem of Hirsch, Pugh and Shub [10] implies that the stable bundle \(\tilde{\mathbb{E}}^{s}\) is \(\mathcal{C}^{1}\) because of the bunching between \(\tilde{\mathbb{E}}^{s}\) and \(\tilde{\mathbb{E}}^{cu}\). The \(\mathcal{C}^{r}\) section theorem also means that the centre stable bundle \(\tilde{\mathbb{E}}^{cs}\) is \(\mathcal{C}^{1}\) (since this sub-bundle is codimension-one). We may assume, by a smooth change of metric [3, Appendix A] if required, that the flow is immediately contracting (resp. expanding) in the sense that there exists \(\gamma>0\) such that \(\|D\tilde{f}^{t}|\tilde{\mathbb{E}}^{s}_{q}\|\leq e^{-\gamma t}\) (resp. \(\|D\tilde{f}^{-t}|\tilde{\mathbb{E}}^{u}_{q}\|\leq e^{-\gamma t}\)) for all \(t\geq 0\), \(q\in M\). Synchronisation, exactly as below, was used by Parry [12] in order to modify an Anosov flow in such a way that the SRB measure coincides with the measure of maximal entropy. It was also used by Simic [17] to prove the Verjovsky conjecture under additional regularity assumptions. Footnote 6: Simic [17] also has an argument like this, but we want to ensure better smoothness in our setting. We guarantee immediate expansion / contraction but do not make the stable and unstable bundles orthogonal and hence maintain the regularity of the metric. We denote by \(\operatorname{Jac}(D\tilde{f}^{t}|\tilde{\mathbb{E}}^{s})\) the stable Jacobian. Let \[u(q):=\lim_{t\to 0}\tfrac{1}{t}\log\operatorname{Jac}(D\tilde{f}^{t}| \tilde{\mathbb{E}}^{s})(q)\] and observe that \(u\) is \(\mathcal{C}^{1}\) and negative. We define the new vector field \[Z=\frac{1}{u}\tilde{Z}\] and consider \(F=\{f^{t}:t\in\mathbb{R}\}:M\to M\), the flow associated to \(Z\). **Proposition 2**.: _Suppose that \(\widetilde{F}=\{\tilde{f}^{t}:t\in\mathbb{R}\}:M\to M\) is a \(\mathcal{C}^{2}\) volume-preserving Anosov flow with \(\dim\tilde{\mathbb{E}}^{s}=1\) and \(\dim\tilde{\mathbb{E}}^{u}\geq 2\). Let \(F=\{f^{t}:t\in\mathbb{R}\}:M\to M\) denote the synchronised flow defined as above. Then_ 1. \(F\) _is a_ \(\mathcal{C}^{1}\) _Anosov flow;_ 2. \(F\) _is topologically equivalent to_ \(\widetilde{F}\)_;_ 3. _There exists a non-zero_ \(\mathcal{C}^{1}\) _vector field_ \(X\) _such that,_ \[Df^{t}X=e^{-t}X,\quad\text{for all }t\in\mathbb{R}.\] The first two claims of the proposition are simple and are as described by Parry [12]. For the convenience of the reader, the remainder of this section is devoted to collecting together the necessary details for the proof of the above. The existence of the vector field \(X\) with the appropriate properties was proven by Simic [17, §4]. The required regularity of \(X\) was claimed in the unpublished manuscript of Simic [18, Lemma 3.3 of ver. 3] but for completeness we include the proof here. **Remark 3**.: The existence of such a vector field \(X\), as given in Proposition 2, is a strong property which tells us a lot about the global behaviour of the flow. Such a vector field, if it exists, is unique up to multiplication by a constant. The synchronised flow is a \(\mathcal{C}^{1}\) Anosov flow because \(1/u\) is \(\mathcal{C}^{1}\). Since the new flow is obtained by modifying only the speed of the flow, the new flow has exactly the same orbits as the original, i.e., it is topologically equivalent to the original flow. The central stable and central unstable bundles for the synchronised flow are identical to the bundles prior to synchronisation.
However, there is no reason to expect the new stable bundle \(\mathbb{E}^{s}\) to coincide with \(\tilde{\mathbb{E}}^{s}\) (and similarly for the unstable bundle). In the remainder of this section we describe the proof of the final claim of Proposition 2. Without loss of generality we suppose that \(\tilde{\mathbb{E}}^{s}\) is orientable (if not, we pass to a double cover of \(M\)). Fix \(\tilde{X}\) a unit vector field tangent to \(\tilde{\mathbb{E}}^{s}\). Let \(\omega\) be the one-form defined by requiring \[\ker\omega=\tilde{\mathbb{E}}^{cu},\quad\omega(\tilde{X})=1.\] Since the two sub-bundles used to define \(\omega\) are both \(\mathcal{C}^{1}\) we know that \(\omega\) is \(\mathcal{C}^{1}\). By Frobenius, since \(\tilde{\mathbb{E}}^{cu}\) is integrable, there exists a continuous 1-form \(\eta\) such that \[d\omega=\eta\wedge\omega.\] **Lemma 4**.: _Let \(\eta\), \(\tilde{Z}\) and \(u\) be defined as above. Then \(\eta(\tilde{Z})=u\)._ Proof.: Since \(\omega(\tilde{X})=1\) and \(\tilde{Z}\in\ker\omega\), \[\eta(\tilde{Z}) =\eta(\tilde{Z})\omega(\tilde{X})-\eta(\tilde{X})\omega(\tilde{Z})\] \[=(\eta\wedge\omega)(\tilde{Z},\tilde{X})\] \[=d\omega(\tilde{Z},\tilde{X}).\] Cartan's formula in this setting reads \(\mathcal{L}_{\tilde{Z}}\omega=i_{\tilde{Z}}d\omega+d(i_{\tilde{Z}}\omega)\). Since \(\tilde{Z}\in\ker\omega\) this means that \(d\omega(\tilde{Z},\tilde{X})=(\mathcal{L}_{\tilde{Z}}\omega)(\tilde{X})\). Consequently \(\eta(\tilde{Z})=(\mathcal{L}_{\tilde{Z}}\omega)(\tilde{X})\). Observe that \(\left(\tilde{f}^{t}\right)^{*}\omega=\operatorname{Jac}(D\tilde{f}^{t}| \tilde{\mathbb{E}}^{s})\;\omega\). Since \(\mathcal{L}_{\tilde{Z}}\omega=\left.\frac{d}{dt}\right|_{t=0}\left(\tilde{f}^ {t}\right)^{*}\omega\) it follows that \(\eta(\tilde{Z})=(\mathcal{L}_{\tilde{Z}}\omega)(\tilde{X})=u\). Let \(X\) be defined as the vector field tangent to \(\mathbb{E}^{s}\) such that \(\omega(X)=1\) (subsequently we will obtain an exact formula for \(X\)). **Lemma 5**.: _Let \(F\) be the synchronised flow and let \(X\) be the vector field defined above. Then \(Df^{t}X=e^{-t}X\) for all \(t\)._ Proof.: Observe that \(\eta(Z)=\eta(\frac{1}{u}\tilde{Z})=1\) by Lemma 4. Since \(\mathbb{E}^{cu}=\tilde{\mathbb{E}}^{cu}\) we know that \(Z\in\ker\omega\) and by choice \(\omega(X)=1\). Therefore, similar to above, \[\eta(Z) =\eta(Z)\omega(X)-\eta(X)\omega(Z)\] \[=(\eta\wedge\omega)(Z,X)\] \[=d\omega(Z,X)\] \[=(\mathcal{L}_{Z}\omega)(X).\] That \((\mathcal{L}_{Z}\omega)(X)=1\) implies that, as required, \(Df^{t}X=e^{-t}X\) for all \(t\). To finish the proof of Proposition 2 it would suffice to show that \(\mathbb{E}^{s}\) is a \(\mathcal{C}^{1}\) sub-bundle. Unfortunately this doesn't follow from the \(\mathcal{C}^{r}\) section theorem [10] because the synchronised flow is merely \(\mathcal{C}^{1}\) and so we need to argue directly and use the connection to the original flow. **Remark 6**.: In this proof we are not using the full strength of the bunching (5) which we have in this setting. If required, using the formulae given below, it can be shown that \(\mathbb{E}^{s}\) is a \(\mathcal{C}^{1+\alpha}\) sub-bundle for some \(\alpha>0\). We don't pursue this direction because it is not required for our present purposes. **Lemma 7**.: _Let \(X\) be as defined above.
Then \(X\) is a \(\mathcal{C}^{1}\) vector field._

Proof.: We have that [12, §6] \[X=\tilde{X}+\xi(\tilde{X})\tilde{Z}\quad\text{ where }\quad\xi(\tilde{X})=\frac{1}{u}\int_{0}^{\infty}\tilde{X}(u\circ\tilde{f}^{t})\ dt.\] The above integral is well defined since \(\|D\tilde{f}^{t}|\tilde{\mathbb{E}}^{s}\|\leq e^{-\gamma t}\), and the above identity is verified by showing that the bundle defined in this way is indeed invariant under the action of the synchronised flow \(\phi_{Z}^{t}\) [12, §6]. We merely need to show that \(x\mapsto\xi_{x}(\tilde{X})\) is \(\mathcal{C}^{1}\). As already noted, \(\operatorname{Jac}(D\tilde{f}^{t}|\tilde{\mathbb{E}}^{s})\) is \(\mathcal{C}^{1}\). Recall that we are working in a metric where \(\tilde{\mathbb{E}}^{cu}\) and \(\tilde{\mathbb{E}}^{s}\) are orthogonal but which agrees with the smooth original metric within the sub-bundles. This means that \(u\) is as smooth as \(D\phi_{\tilde{Z}}^{t}\) along the unstable bundle and so \(\tilde{X}(u)\circ\tilde{f}^{t}\) is \(\mathcal{C}^{1}\). To complete the proof of the regularity we take advantage of the bunching (5) between \(\tilde{\mathbb{E}}^{s}\) and \(\tilde{\mathbb{E}}^{cu}\). First observe that, by the chain rule, \[\tilde{X}(u\circ\tilde{f}^{t})(q)=\operatorname{Jac}(D\tilde{f}^{t}|\tilde{\mathbb{E}}^{s})(q)\cdot(\tilde{X}u)\circ\tilde{f}^{t}(q).\] For convenience let \(\lambda_{t}(q):=\operatorname{Jac}(D\tilde{f}^{t}|\tilde{\mathbb{E}}^{s})(q)\). Differentiating we obtain a sum of two terms, \[D\lambda_{t}(q)(\tilde{X}u)(\tilde{f}^{t}q)+\lambda_{t}(q)(D(\tilde{X}u))(\tilde{f}^{t}q)D\tilde{f}^{t}(q). \tag{6}\] We will estimate each of these two terms. Suppose first that \(t\in\mathbb{N}\) and note that \(\lambda_{t}(q)=\prod_{s=0}^{t-1}\lambda(\tilde{f}^{s}q)\) where, for convenience, we write \(\lambda=\lambda_{1}\). Consequently \[D\lambda_{t}(q)=\sum_{s=0}^{t-1}\frac{D\lambda}{\lambda}(\tilde{f}^{s}q)D\tilde{f}^{s}(q)\lambda_{t}(q).\] Bunching means that there exist \(\zeta>0\), \(C>0\) such that, for all \(s\geq 0\), \(q\in M\), \(\|D\tilde{f}^{s}(q)\|\,\lambda_{s}(q)\leq Ce^{-\zeta s}\). We also observe that \(\sup_{q\in M}\frac{\|D\lambda\|}{\lambda}(q)<\infty\) and \(\frac{\lambda_{t}(q)}{\lambda_{s}(q)}=\lambda_{t-s}(\tilde{f}^{s}q)\leq e^{-\gamma(t-s)}\) for all \(0\leq s\leq t\). Consequently (increasing \(C>0\) as required) \[\|D\lambda_{t}(q)\| \leq\left(\sup_{y\in M}\frac{\|D\lambda\|}{\lambda}(y)\right)\sum_{s=0}^{t-1}\|D\tilde{f}^{s}(q)\|\,\lambda_{t}(q)\] \[\leq C\sum_{s=0}^{t-1}e^{-\zeta s}\frac{\lambda_{t}(q)}{\lambda_{s}(q)}\leq Ce^{-\gamma t}\sum_{s=0}^{t-1}e^{s(\gamma-\zeta)}\leq Ce^{-\min\{\gamma,\zeta\}t}.\] This estimate, exponentially small with increasing \(t\), holds for all \(t\geq 0\) by increasing \(C>0\). For the second term of (6) we use the estimate \[\lambda_{t}(q)\left\|D\tilde{f}^{t}(q)\right\|\leq Ce^{-\zeta t}\] from the bunching property. Both terms of (6) are therefore exponentially small, uniformly in \(q\), so the integral defining \(\xi(\tilde{X})\) may be differentiated under the integral sign and \(\xi(\tilde{X})\) is \(\mathcal{C}^{1}\).

The idea of the proof of Theorem B is to introduce a further time change of the synchronised Anosov flow in order to guarantee the joint integrability of the stable and unstable bundles. To this end we will need to show the following

**Proposition 8**.: _Let \(F=\{f^{t}\}\) be a codimension-one Anosov flow that is synchronised as in Proposition 2.
If \(\psi:M\to(0,\infty)\) is a \(C^{2}\) function then the vector field \(Z_{\psi}=Z/\psi\) generates an Anosov flow whose stable bundle is \(C^{1}\)._

**Remark 9**.: _Observe that Proposition 8 would follow from the \(\mathcal{C}^{r}\) section theorem if the flow were \(C^{2}\); however, the synchronised flow is only \(C^{1+\theta}\)._

Proof.: The fact that the flow generated by \(Z_{\psi}\) is Anosov follows directly from (1). To prove that \(\mathbb{E}^{s}_{\psi}\) is \(C^{1}\) we will use its formula. By (4), \(\mathbb{E}^{s}_{\psi}\) is generated by the vector field \[X_{\psi}=X+h_{\psi}Z\quad\text{ where }\quad h_{\psi}=\frac{1}{\psi}\int_{0}^{\infty}e^{-t}X(\psi)\circ f^{t}dt.\] Therefore, to prove that \(\mathbb{E}^{s}_{\psi}\) is \(C^{1}\), it is enough to prove that \(h_{\psi}\) is \(C^{1}\). It is easy to see that for any vector field \(V\in TM\) we have \[\|V(h_{\psi})\|\leq\int_{0}^{\infty}e^{-t}\|Df^{t}\|\|\psi\|_{C^{2}}dt.\] Thus, since the synchronised flow satisfies the bunching condition, the integral converges and therefore \(h_{\psi}\) is \(C^{1}\).

## 3 Special synchronisation and Integrability

### Special synchronisation

The aim of this section is to prove Theorem B under the special synchronisation condition. Throughout this section, we suppose that we have a \(C^{1}\) codimension-one volume preserving Anosov flow \(F=\{f^{t},t\in\mathbb{R}\}:M\to M\). We also suppose that the stable bundle, which is one dimensional, is spanned by a \(C^{1}\) vector field \(X\).

Admissible disk:An admissible disk \(\mathcal{D}\) is a codimension-one open \(C^{1}\) submanifold that is transverse to the flow \(F\), tangent to \(X\) and contains an unstable manifold of \(F\). More precisely, if \(\mathcal{L}_{p}\) is a local unstable manifold through \(p\in M\) and \(\varepsilon>0\), a typical admissible disk through \(p\) is of the form \[\mathcal{D}_{p}:=\bigcup_{|s|<\varepsilon}e^{sX}\mathcal{L}_{p}\] where \(e^{sX}\) denotes the time \(s\) map of the flow generated by the vector field \(X\).

**Remark 10**.: We notice that if the vector field \(X\) is \(C^{1}\) then \(\mathcal{D}_{p}\) defines a \(C^{1}\) submanifold. It has the properties that \(X\) is tangent to \(\mathcal{D}_{p}\) and the local unstable manifold \(\mathcal{L}_{p}\) is contained in \(\mathcal{D}_{p}\). However this disk is tangent to \(\mathbb{E}^{s}\oplus\mathbb{E}^{u}\) only when the stable and unstable bundles are jointly integrable.

Admissible section:An admissible section for \(F\) is a finite collection of admissible disks \(\mathcal{S}:=\{\mathcal{D}_{1},\cdots,\mathcal{D}_{\ell}\}\) for some \(\ell\in\mathbb{N}\) such that:

1. for every \(q\in M\), there exist \(t_{q}^{\pm}\geq 0\) such that \(f^{t_{q}^{+}}(q)\in\mathcal{S}\) and \(f^{-t_{q}^{-}}(q)\in\mathcal{S}\),
2. \(\overline{\mathcal{D}}_{i}\bigcap\overline{\mathcal{D}}_{j}=\emptyset\) for \(i\neq j\), where \(\overline{\mathcal{D}}_{i}\) denotes the closure of \(\mathcal{D}_{i}\).

Admissible flow box:Given an admissible disk \(\mathcal{D}\) and \(\tau>0\), an admissible flow box of length \(\tau\) is defined by \[\mathcal{U}^{(\tau)}:=\bigcup_{|t|<\tau}f^{t}\mathcal{D}.\]

The space of functions:Given a continuous vector field \(Y\) on \(M\) and a \(C^{1}\) function \(\psi\), we write \(Y(\psi)=d\psi(Y)\) where \(d\psi\) is the 1-form given by the exterior derivative of the 0-form \(\psi\). Let \(\mathfrak{D}\) be the space of functions \(\psi\) such that:

1. \(\psi,X(\psi)\in C^{1}\),
2. \(1<\psi(q)<2\), for every \(q\in M\).
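As a simple sanity check, which we add for orientation (with \(h_{\psi}\) as in the proof of Proposition 8), constant functions belong to \(\mathfrak{D}\): both conditions hold trivially, and for the base point \(\psi_{0}\equiv 3/2\) used later in the proof of Proposition 24,
\[X(\psi_{0})=0,\qquad h_{\psi_{0}}=\frac{2}{3}\int_{0}^{\infty}e^{-t}X(\psi_{0})\circ f^{t}\,dt=0,\qquad X_{\psi_{0}}=X+h_{\psi_{0}}Z=X,\]
so a constant time change leaves the stable direction unchanged, as expected.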
If \(\psi\in\mathfrak{D}\), we define \[\|\psi\|_{\mathfrak{D}}=\max\{\|\psi\|_{C^{0}},\|g_{\psi}\|_{C^{0}},\sup_{V\in\mathbb{E}^{cu},\|V\|=1}\|V(\psi)\|_{C^{0}},\sup_{V\in\mathbb{E}^{u},\|V\|=1}\|V(g_{\psi})\|_{C^{0}}\}, \tag{7}\] where \(g_{\psi}:=\psi\,h_{\psi}=\int_{0}^{\infty}d(\psi\circ f^{t})(X)\,dt\) (see Section 4).

**Remark 11**.: We notice that \(\mathfrak{D}\) is not a linear space, let alone a Banach space. The motivation for the definition of \(\|.\|_{\mathfrak{D}}\) is to allow the flexibility that the derivative in the stable direction is not controlled; however, all the quantities involved in the definition of the stable and unstable bundles of a flow reparametrised by a function in \(\mathfrak{D}\) are controlled by \(\|.\|_{\mathfrak{D}}\). Precisely, the function \(\xi\) that is defined in (4) is controlled by the norm \(\|.\|_{\mathfrak{D}}\). In particular, this space helps to define certain flows whose vector fields are not necessarily \(C^{1}\) but which are Anosov in the sense of (1), as we see in Proposition 12.

**Proposition 12**.: _If \(\{\psi_{n},n>1\}\subset\mathfrak{D}\) is such that \(\|\psi_{n}-\psi_{n+1}\|_{\mathfrak{D}}\leq\theta^{n}\) for some \(\theta\in(0,1)\) then \(\{\psi_{n},n>1\}\) converges in the \(C^{0}\) topology to a function \(\psi^{*}\) such that the vector field \(Z_{\psi^{*}}=Z/\psi^{*}\) generates a flow that is Anosov as in (1)._

Proof.: For \(n\geq 1\), let \(F_{n}:=\{f_{n}^{t}:t\in\mathbb{R}\}\) be the flow generated by the vector field \(Z_{n}=Z/\psi_{n}\) and let \(X^{(n)}\) be the corresponding vector field that spans the stable bundle of \(F_{n}\). Since \(\|\psi_{n}-\psi_{n+1}\|_{\mathfrak{D}}\leq\theta^{n}\) for some \(\theta\in(0,1)\), in particular \(\psi_{n}\) has a limit \(\psi^{*}\) in the \(C^{0}\) topology that is differentiable along the weak unstable bundle. Let \(t\in\mathbb{R}\). We want to prove that \(Df_{n}^{t}\), the time-\(t\) differential of the flow generated by the vector field \(Z_{n}:=Z/\psi_{n}\), converges uniformly, in order to conclude that the limit flow is \(C^{1}\). Since the derivatives of \(\psi_{n}\) along the centre unstable bundle converge uniformly, \(Df_{n}^{t}|_{\mathbb{E}^{cu}}\) converges uniformly. Therefore, to prove that \(Df_{n}^{t}\) converges uniformly, it is enough to prove that \(Df_{n}^{t}|_{\mathbb{E}^{s}}\) converges uniformly.

**Claim 1**.: _We have the following_ \[Df_{n}^{t}X^{(n)}=\exp\left(\int_{0}^{t}\frac{u}{\psi_{n}}\circ f_{n}^{s}ds\right)X^{(n)}\] _where \(u:=\frac{d}{dt}|_{t=0}\operatorname{Jac}(Df^{t}|\mathbb{E}^{s})\)._

Proof.: By (4) we have \[X^{(n)}=X+g_{\psi_{n}}Z_{n}\quad\text{ where }\quad g_{\psi_{n}}=\int_{0}^{\infty}d(\psi_{n}\circ f^{t})(X)dt.\] Using linearity of the Lie bracket we have \[\left[X^{(n)},Z_{n}\right] =\left[X,\frac{Z}{\psi_{n}}\right]+\left[g_{\psi_{n}}\frac{Z}{\psi_{n}},\frac{Z}{\psi_{n}}\right]\] \[=-\frac{X(\psi_{n})}{\psi_{n}^{2}}Z+\frac{1}{\psi_{n}}\left[X,Z\right]-\frac{Z(g_{\psi_{n}})}{\psi_{n}^{2}}Z.\] Since \(Df^{t}X=\operatorname{Jac}(Df^{t}|\mathbb{E}^{s})X\) we have \([X,Z]=uX\); similarly, it is easy to see that \(Z(g_{\psi_{n}})=g_{\psi_{n}}+uX(\psi_{n})\). Substituting this into the above gives \[\left[X^{(n)},Z_{n}\right]=\frac{u}{\psi_{n}}X^{(n)};\] therefore, using that \(\left[X^{(n)},Z_{n}\right]=\frac{d}{dt}|_{t=0}Df_{n}^{t}X^{(n)}\), we get the desired equality. 

Since \(X^{(n)}\) converges uniformly to a continuous vector field, \(Df_{n}^{t}X^{(n)}\) converges uniformly. Thus \(F^{*}\) defines a \(C^{1}\) flow that is Anosov. 

The following Lemma gives regularity of the stable foliation.
**Lemma 13**.: _Let \(\{\psi_{n},n>1\}\subset\mathfrak{D}\) be such that \(\|\psi_{n}-\psi_{n+1}\|_{\mathfrak{D}}\leq\theta^{n}\) for some \(\theta\in(0,1)\) and let \(X^{(n)}\) denote the vector field that spans the stable bundle of \(Z_{n}=Z/\psi_{n}\). Then for every \(s\in\mathbb{R}\), the sequence of differentials of the flows generated by the \(X^{(n)}\), \(\{De^{sX^{(n)}}\}\), converges uniformly. In particular \(X^{*}\) generates a \(C^{1}\) flow._

Proof.: To prove the lemma, it is enough to choose a spanning set of continuous vector fields and evaluate the differentials on that set. It is easy to see that \(De^{sX^{(n)}}X^{(n)}=X^{(n)}\), and since for \(V\in\mathbb{E}^{u}\) the functions \(V(g_{\psi_{n}})\) and \(V(\psi_{n})\) converge uniformly to continuous functions, if \(V\in\mathbb{E}^{u}\) is a continuous vector field then \(De^{sX^{(n)}}V\) converges to a continuous vector field. In order to conclude the proof, we only need to evaluate \(De^{sX^{(n)}}Z_{n}\). As in the proof of Claim 1, we have \([X^{(n)},Z_{n}]=\frac{u}{\psi_{n}}X^{(n)}\); therefore we have \[De^{sX^{(n)}}Z_{n}=Z_{n}+\int_{0}^{s}\frac{u}{\psi_{n}}\circ e^{\tau X^{(n)}}d\tau\,X^{(n)}.\] Hence \(De^{sX^{(n)}}Z_{n}\) converges uniformly. Thus \(De^{sX^{(n)}}\) converges uniformly and \(X^{*}\) generates a \(C^{1}\) flow. 

**Definition 14**.: [Special synchronisation] The flow \(F\) is said to be _specially synchronisable_ if:

1. there exists a \(C^{1}\) differential 1-form \(\alpha\) such that \(X\in\ker(\alpha),\alpha(Z)\equiv 1\);
2. there exist a sequence of functions \(\{\psi_{n},n\geq 1\}\subset\mathfrak{D}\) and \(\theta\in(0,1)\) such that \(\|\psi_{n}-\psi_{n+1}\|_{\mathfrak{D}}\leq\theta^{n}\);
3. there exist an admissible section \(\mathcal{S}\) for the flow \(F\), a sequence of \(T_{n}\to\infty\) and \(\tau_{n}\to 0\) with: \[\|Df^{T_{n}}|_{\mathbb{E}^{s}}\|\leq\tau_{n}^{2}\tag{8}\] and \[\|i_{X_{\psi_{n}}}\circ d\alpha_{f^{T_{n}}\mathcal{S}^{(\tau_{n})}}^{(n)}\|\leq\theta^{n}\quad\text{where}\quad\alpha^{(n)}=\psi_{n}(\alpha-h_{\psi_{n}}\omega).\tag{9}\]

**Remark 15**.: We observe that the first item in the above definition is trivial, as we will see in the next sections. The most important part of the definition is given by (9). In Section 3.3, we will prove Theorem B, which states that special synchronisability implies joint integrability for the limit Anosov flow.

### Integrability and global cross section

The problem of integrability of \(C^{1}\) sub-bundles is resolved by the well known Frobenius Theorem, which states that a smooth 1-form \(\eta\) is integrable if and only if it satisfies the involutivity condition \[\eta\wedge d\eta=0 \tag{10}\] where \(d\eta\) is the exterior derivative of \(\eta\). In the case of a continuous \(1\)-form, the problem of integrability is more subtle, in the sense that the exterior derivative \(d\eta\) might not exist. In the context of smooth dynamical systems, invariant \(1\)-forms are typically just Hölder continuous; nevertheless there are dynamical methods to integrate them. For instance, the well known stable manifold theorem solves the problem of integrability of certain Hölder continuous \(1\)-forms. In [11], we give a continuous version of the Frobenius Theorem that gives an alternative proof of the stable manifold Theorem. The ideas of integrability in [11] are the main inspiration of this work.
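For later use we spell out why (10) characterises integrability of the kernel (a standard computation, included for the reader's convenience). For vector fields \(X,Y\in\ker\eta\) and any vector field \(V\),
\[(\eta\wedge d\eta)(V,X,Y)=\eta(V)\,d\eta(X,Y)-\eta(X)\,d\eta(V,Y)+\eta(Y)\,d\eta(V,X)=\eta(V)\,d\eta(X,Y),\]
so \(\eta\wedge d\eta\equiv 0\) exactly when \(d\eta\) vanishes on \(\ker\eta\); combined with \(d\eta(X,Y)=X(\eta(Y))-Y(\eta(X))-\eta([X,Y])=-\eta([X,Y])\), this is precisely the involutivity of the distribution \(\ker\eta\).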
On the other hand, we notice that the existence of an exterior derivative is a weaker condition than smoothness of the \(1\)-form: indeed, if \(\psi\) is a \(C^{1}\) function then the \(1\)-form \(d\psi\) is merely continuous, yet it has an exterior derivative that is identically zero. Along the same lines, Hartman [9] defines the notion of the exterior derivative of a continuous \(1\)-form via Stokes' Theorem.

**Definition 16**.: _A continuous \(1\)-form \(\eta\) has an exterior derivative if there exists a continuous \(2\)-form \(\beta\) such that_ \[\int_{J}\eta=\int_{D}\beta\] _for every Jordan curve \(J\) that bounds a disk \(D\). In this case we write \(d\eta=\beta\)._

With this definition, Hartman [9] gives a statement similar to the Frobenius Theorem: let \(\eta\) be a continuous \(1\)-form that has a continuous exterior derivative. If \(\eta\) is involutive then it is integrable ([9, Theorem 3.1]). Weaker forms of the Frobenius Theorem have been proved by the author in [11], which in particular give an alternative proof of the well known stable manifold theorem for hyperbolic systems. However, for the purpose of this paper, we will apply the above version of Hartman. The case of integrability of \(\mathbb{E}^{s}\oplus\mathbb{E}^{u}\) is very special; Plante [14] proves the following

**Proposition 17**.: _Let \(F=\{f^{t},t\in\mathbb{R}\}:M\to M\) be an Anosov flow such that for every \(q\in M\), there exists a \(C^{1}\) local submanifold tangent to \(\mathbb{E}^{s}\oplus\mathbb{E}^{u}\). Then the corresponding invariant \(1\)-form \(\eta\) defined in (2) has an exterior derivative that is zero, i.e._ \[d\eta=0.\]

**Definition 18**.: _A flow \(F=\{f^{t},t\in\mathbb{R}\}\) has a global cross section if there exists a \(C^{1}\) closed codimension-one submanifold \(N\) such that_ \[\forall q\in M,\exists t_{q}\in\mathbb{R}:f^{t_{q}}(q)\in N.\]

The following is also proved by Plante.

**Theorem 19** (Plante [14, Theorem 3.7]).: _Let \(F=\{f^{t},t\in\mathbb{R}\}:M\to M\) be a codimension-one Anosov flow such that for every \(q\in M\), there exists a \(C^{1}\) local submanifold tangent to \(\mathbb{E}^{s}\oplus\mathbb{E}^{u}\). Then \(F\) admits a global cross section._

### Proof of Theorem B

Let \(\{\psi_{n},n>1\}\subset\mathfrak{D}\) be given by the definition of special synchronisability and let \(\psi^{*}\) be the \(C^{0}\) limit of the sequence \(\{\psi_{n},n>1\}\). By Proposition 12, the vector field \(Z^{*}=Z/\psi^{*}\) defines an Anosov flow \(F^{*}\); let \(\eta^{*}\) be the corresponding invariant 1-form given in (2). The proof of Theorem B consists of proving that \(\ker(\eta^{*})\) is integrable. For notational purposes the flow generated by \(Z_{n}:=Z/\psi_{n}\) is denoted by \(F_{n}:=\{f^{t}_{n},t\in\mathbb{R}\}\). Let \(\phi^{\pm}:M\to\mathbb{R}\) be the functions defined by \[\phi^{\pm}(q):=\min\{t>0:f^{\pm t}(q)\in\mathcal{S}\}.\] These correspond to the first return times, in forward and backward time, of \(q\) to the section \(\mathcal{S}\) under the flow \(F\). To simplify the notation, for \(q\in M\), we write \(\mathcal{D}^{\pm}(q)\) to mean the piece of section to which \(q\) returns for the first time in forward and backward time, i.e.
\[f^{\pm\phi^{\pm}(q)}(q)\in\mathcal{D}^{\pm}(q).\] We recall that since the pieces of section are \(C^{1}\), the \(\phi^{\pm}\) define piecewise \(C^{1}\) functions, and for \(n>1\) we write \[\phi^{\pm}_{n}:=\phi^{\pm}\circ f^{-T_{n}}.\] Notice that the functions \(\phi^{\pm}_{n}\) define the first return times in forward and backward time with respect to the section \(f^{T_{n}}\mathcal{S}\). Moreover, since the local sections are tangent to the stable bundle of \(F\), which is spanned by the vector field \(X\), we have \[X(\phi^{\pm}_{n})_{q}=0 \tag{11}\] where \(q\) is a point of differentiability of \(\phi^{\pm}_{n}\). Similarly, by the definition of Anosov flow, \(Df^{-T_{n}}\) does not expand vectors in \(\mathbb{E}^{cu}\); there exists a constant \(C_{1}>0\) that only depends on \(\mathcal{S}\) such that \[|Y(\phi^{\pm}_{n})_{q}|\leq C_{1}\|Y\| \tag{12}\] where \(q\) is a point of differentiability of \(\phi^{\pm}_{n}\) and \(Y\in\mathbb{E}^{cu}\). We notice that the set of non-differentiability points of \(\phi^{\pm}_{n}\) is given by \(f^{T_{n}}\partial\mathcal{S}\), the iterates of the boundaries of the section under \(f^{T_{n}}\). For \(n>1\), let \(b^{\pm}_{n}:M\to\mathbb{R}\) be the functions defined as follows: if \(q\in M\) we consider \(f^{T_{n}}\mathcal{D}^{+}(q)\) and \(f^{T_{n}}\mathcal{D}^{-}(q)\), the pieces of the section \(f^{T_{n}}\mathcal{S}\) to which the orbit of \(q\) returns for the first time in forward and backward time respectively. We define \[W^{s,\pm}_{loc,n}(q):=\{q^{\prime}\in W^{s}_{n}(q):f^{T_{n}}\mathcal{D}^{\pm}(q)=f^{T_{n}}\mathcal{D}^{\pm}(q^{\prime})\}\] where \(W^{s}_{n}(q)\) is the local stable manifold through \(q\) under the flow \(F_{n}\). This consists of points on the stable manifold of \(q\) with respect to \(F_{n}\) that share the same first return section. \(b^{\pm}_{n}(q)\) is the length of the piece of stable manifold \(W^{s,\pm}_{loc,n}(q)\). We define \(\overline{\phi}^{\pm}_{n}:M\to\mathbb{R}\) by \[\overline{\phi}^{\pm}_{n}(q)=\frac{1}{b^{\pm}_{n}}\int_{0}^{b^{\pm}_{n}}\phi^{\pm}_{n}\circ\gamma(t)dt\] where \(\gamma\) is a unit speed parametrisation of the piece of stable manifold \(W^{s,\pm}_{loc,n}(q)\). We observe that, from its definition, \(\overline{\phi_{n}^{\pm}}\) is piecewise constant along stable manifolds of \(F_{n}\) and defines a piecewise \(C^{1}\) function.

**Lemma 20**.: _For every \(q\in M\) and \(n\) large enough we have_ \[f_{n}^{\overline{\phi_{n}^{\pm}}(q)}(q)\in\mathcal{U}^{(\tau_{n})}.\]

Proof.: For \(q^{\prime}\in W^{s,\pm}_{loc,n}(q)\) we have \[\left|\overline{\phi_{n}^{\pm}}(q)-\phi_{n}^{\pm}(q^{\prime})\right|=\left|\frac{1}{b_{n}^{\pm}}\int_{W^{s,\pm}_{loc,n}(q)}\phi_{n}^{\pm}\circ\gamma(t)-\phi_{n}^{\pm}(q^{\prime})dt\right|\leq\|X^{(n)}(\phi_{n}^{\pm})\|b_{n}^{\pm},\] where the last inequality uses the Mean Value Theorem. Using (11), (12) and the formula for \(X^{(n)}\), we have \(\|X^{(n)}(\phi_{n}^{\pm})\|\leq C\) where \(C>0\) is a constant estimated by \(C\leq\sup_{n}\|\psi_{n}\|_{\mathfrak{D}}<\infty\). Since the width of every piece of section \(f^{T_{n}}\mathcal{D}_{i}\) is bounded by \(\|Df^{T_{n}}|_{\mathbb{E}^{s}}\|\), using the formula for \(X^{(n)}\) we have \(b_{n}^{\pm}\leq C\|Df^{T_{n}}|_{\mathbb{E}^{s}}\|\).
Therefore, using (8) in the definition of special synchronisability, we have \[\left|\overline{\phi_{n}^{\pm}}(q)-\phi_{n}^{\pm}(q^{\prime})\right|\leq C\tau_{n}^{2}.\] Since the constant \(C\) does not depend on \(n\), for \(n\) large enough we have \(\left|\overline{\phi_{n}^{\pm}}(q)-\phi_{n}^{\pm}(q^{\prime})\right|<\tau_{n}\), which gives the lemma. 

From the definition of special synchronisability, we have \(\|i_{X^{(n)}}\circ d\alpha^{(n)}_{f^{T_{n}}\mathcal{U}^{(\tau_{n})}}\|\leq\theta^{n}\); therefore we can choose a sequence \(t_{n}\to\infty\) such that \[\|Df_{n}^{-t_{n}}\|\cdot\|i_{X^{(n)}}\circ d\alpha^{(n)}_{f^{T_{n}}\mathcal{U}^{(\tau_{n})}}\|\leq\theta^{n/2}. \tag{13}\] We let \(\eta^{(n)}\) be the 1-form defined by \[\eta^{(n)}:=\varphi_{n}^{+}\cdot(f_{n}^{-t_{n}-\overline{\phi}_{n}^{+}})^{*}\alpha^{(n)}+\varphi_{n}^{-}\cdot(f_{n}^{-t_{n}-\overline{\phi}_{n}^{-}})^{*}\alpha^{(n)},\] where \[\varphi_{n}^{\pm}:=\frac{\overline{\phi_{n}^{\pm}}}{\overline{\phi_{n}^{+}}+\overline{\phi_{n}^{-}}}.\] We recall that since the functions \(\overline{\phi}^{\pm}\) and \(\overline{\phi_{n}^{\pm}}\) are piecewise differentiable, the 1-form \(\eta^{(n)}\) is also piecewise differentiable.

Proof of Theorem B.: First of all we recall that, from the definition of special synchronisation, \(\psi_{n}\to\psi^{*}\) in \(\mathfrak{D}\). In particular, by the definition of \(\|\psi_{n}-\psi^{*}\|_{\mathfrak{D}}\), if \(\{Y_{1},\cdots,Y_{m}\}\) span \(\mathbb{E}_{q}^{u}\) for some \(q\), then \[Y_{i}^{(n)}:=Y_{i}+\frac{1}{\psi_{n}}\int_{0}^{\infty}d(\psi_{n}\circ f^{-t})(Y_{i})dt\,Z,\quad i=1,\cdots,m,\] span the unstable bundle of \(Z_{n}\) and converge to a spanning set of the unstable bundle of \(Z_{*}\). Similarly, \(X^{(n)}\) converges to \(X^{*}\), which spans the stable bundle of \(Z_{*}\). Since \(X^{(n)}\in\ker(\eta^{(n)})\) and \(\eta^{(n)}(Z_{n})\equiv 1\), using the Cartan formula we have \[d\eta^{(n)}(X^{(n)},Z_{n})_{q}\equiv 0 \tag{14}\] where \(q\in M\) is a point where \(\eta^{(n)}\) is differentiable. It is easy to see that \[d\eta^{(n)}= d\varphi_{n}^{+}\wedge(f_{n}^{-t_{n}-\overline{\phi}_{n}^{+}})^{*}\alpha^{(n)}+\varphi_{n}^{+}\left(d\overline{\phi}_{n}^{+}\wedge(f_{n}^{-t_{n}-\overline{\phi}_{n}^{+}})^{*}\mathfrak{L}_{Z_{n}}\alpha^{(n)}+\overline{\phi}_{n}^{+}(f_{n}^{-t_{n}-\overline{\phi}_{n}^{+}})^{*}d\alpha^{(n)}\right)\] \[+d\varphi_{n}^{-}\wedge(f_{n}^{-t_{n}-\overline{\phi}_{n}^{-}})^{*}\alpha^{(n)}+\varphi_{n}^{-}\left(d\overline{\phi}_{n}^{-}\wedge(f_{n}^{-t_{n}-\overline{\phi}_{n}^{-}})^{*}\mathfrak{L}_{Z_{n}}\alpha^{(n)}+\overline{\phi}_{n}^{-}(f_{n}^{-t_{n}-\overline{\phi}_{n}^{-}})^{*}d\alpha^{(n)}\right). \tag{15}\] We will estimate each term separately. For the first and fourth terms we have \[d\varphi_{n}^{\pm}\wedge(f_{n}^{-t_{n}-\overline{\phi}_{n}^{\pm}})^{*}\alpha^{(n)}(X^{(n)},Y_{i}^{(n)})=X^{(n)}(\varphi_{n}^{\pm})\cdot(f_{n}^{-t_{n}-\overline{\phi}_{n}^{\pm}})^{*}\alpha^{(n)}(Y_{i}^{(n)})=0 \tag{16}\] where the last equality uses \(X^{(n)}(\varphi_{n}^{\pm})=0\). For the second and fifth terms we have \[d\overline{\phi}_{n}^{\pm}\wedge(f_{n}^{-t_{n}-\overline{\phi}_{n}^{\pm}})^{*}\mathfrak{L}_{Z_{n}}\alpha^{(n)}(X^{(n)},Y_{i}^{(n)})=X^{(n)}(\overline{\phi}_{n}^{\pm})(f_{n}^{-t_{n}-\overline{\phi}_{n}^{\pm}})^{*}\mathfrak{L}_{Z_{n}}\alpha^{(n)}(Y_{i}^{(n)})=0 \tag{17}\] where the last equality uses that, from its definition, we have \(X^{(n)}(\overline{\phi}_{n}^{\pm})=0\).
For the third and sixth terms, we use the definition of special synchronisation and Lemma 20 to get \[\left|(f_{n}^{-t_{n}-\overline{\phi}_{n}^{\pm}})^{*}d\alpha^{(n)}(X^{(n)},Y_{i}^{(n)})\right|\leq C\|Df_{n}^{-t_{n}}\|\cdot\|i_{X^{(n)}}\circ d\alpha_{f^{T_{n}}\mathcal{U}^{(\tau_{n})}}^{(n)}\|. \tag{18}\] Substituting (16), (17) and (18) into (15) and using (14) and (13) we have \[\|i_{X^{(n)}}\circ d\eta^{(n)}\|\leq C\theta^{n/2}. \tag{19}\] The rest of the proof consists of constructing, through every point, a submanifold that is tangent to \(\ker(\eta^{*})\). To this end, we take an unstable manifold \(\mathcal{L}_{*}\) of \(Z_{*}\) that is parametrised by a \(C^{1}\) map \(\phi:(-\varepsilon,\varepsilon)^{m}\to M\) for some \(\varepsilon>0\). We consider the following object \(\mathcal{W}_{n}:(-\varepsilon,\varepsilon)^{m+1}\to M\): \[\mathcal{W}_{n}(s,u_{1},\cdots,u_{m}):=e^{sX^{(n)}}\circ\phi(u_{1},\cdots,u_{m}).\]

**Claim 2**.: _For \(s\in(-\varepsilon,\varepsilon)\), if \(\gamma_{s,i}:(-\varepsilon,\varepsilon)\to M\) is the curve defined by_ \[\gamma_{s,i}(u):=e^{sX^{(n)}}\circ\phi(u_{1},\cdots,u_{i-1},u,u_{i+1},\cdots,u_{m})\] _then_ \[\lim_{n\to\infty}\int_{\gamma_{s,i}}\eta^{*}=0.\]

Proof.: Let \(\Gamma=\gamma_{0,i}\cup\gamma_{s,i}\cup\beta_{1}\cup\beta_{2}\) be the closed curve such that \[\beta_{1}(s)=e^{sX^{(n)}}\circ\phi(-\varepsilon,\cdots,-\varepsilon)\quad\text{ and }\quad\beta_{2}(s)=e^{sX^{(n)}}\circ\phi(\varepsilon,\cdots,\varepsilon).\] \(\Gamma\) bounds a disk that is defined by \[D(s,u)=e^{sX^{(n)}}\circ\gamma_{0,i}(u).\] We recall that, from the definition of \(\eta^{(n)}\), the set of points where \(\eta^{(n)}\) fails to be differentiable is given by the iterates of the boundaries of the section \(f^{T_{n}}\mathcal{S}\); therefore, since \(D\) is transversal to the flow, we can partition the disk \(D\) into a finite number of disks \(D_{1},\cdots,D_{\ell}\) such that \(\eta^{(n)}\) is differentiable in \(int(D_{i})\) and \(int(D_{i})\cap int(D_{j})=\emptyset\) for \(i\neq j\). Using Stokes' Theorem we have \[\int_{\Gamma}\eta^{(n)}=\sum_{i=1}^{\ell}\int_{\Gamma_{i}}\eta^{(n)}=\sum_{i=1}^{\ell}\int_{D_{i}}d\eta^{(n)},\] where \(\Gamma_{i}:=\partial D_{i}\). Using (19) and the fact that each \(D_{i}\) is tangent to \(X^{(n)}\) we have \[\left|\int_{\Gamma}\eta^{(n)}\right|\leq\sum_{i=1}^{\ell}|D_{i}|\|i_{X^{(n)}}\circ d\eta^{(n)}\|\leq C|D|\theta^{n/2}. \tag{20}\] We have the following \[\int_{\gamma_{s,i}}\eta^{*}=\int_{\Gamma}\eta^{(n)}+\int_{\Gamma}(\eta^{*}-\eta^{(n)})+\int_{\beta_{1}}\eta^{*}-\int_{\beta_{2}}\eta^{*}. \tag{21}\] Since \(X^{(n)}\to X_{*}\in\ker(\eta^{*})\) we have \[\lim_{n\to\infty}\int_{\beta_{1}}\eta^{*}=\lim_{n\to\infty}\int_{\beta_{2}}\eta^{*}=0. \tag{22}\] From (3) we have \(\eta^{(n)}\to\eta^{*}\), which implies \[\lim_{n\to\infty}\int_{\Gamma}(\eta^{*}-\eta^{(n)})=0. \tag{23}\] Substituting (20), (22) and (23) into (21) we get the result. 

We observe that for every \(q=\mathcal{W}_{n}(s,u_{1},\cdots,u_{m})\) we have \[T_{q}\mathcal{W}_{n}=span\{X^{(n)},De^{sX^{(n)}}\tfrac{d\gamma_{0,1}}{du},\cdots,De^{sX^{(n)}}\tfrac{d\gamma_{0,m}}{du}\}.\] Therefore, using the above claim and the fact that \(De^{sX^{(n)}}\) converges (Lemma 13), we have \[T_{q}\mathcal{W}_{n}\to\ker(\eta^{*})\quad\text{ as }\quad n\to\infty.\] Thus \(\mathcal{W}_{n}\) converges to a submanifold that is tangent to \(\ker(\eta^{*})\). This implies that \(\ker(\eta^{*})\) is integrable and, since \(F^{*}\) and \(F\) are topologically equivalent, we get Theorem B.
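Before turning to the synchronisation equation, we record the elementary identity behind (14) above, which reappears in Remark 22 below (a standard computation, added for convenience): for a \(1\)-form \(\eta\) with \(\eta(X)\equiv 0\) and \(\eta(Z)\equiv 1\),
\[d\eta(X,Z)=X(\eta(Z))-Z(\eta(X))-\eta([X,Z])=-\eta([X,Z]),\]
so \(d\eta(X,Z)=0\) as soon as \([X,Z]\) is proportional to \(X\), which holds here because \(X^{(n)}\) spans the stable bundle of the flow generated by \(Z_{n}\).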
## 4 Special synchronisation equation

The purpose of this section is to rewrite the quantities involved in (9) in terms of the time change \(\psi\) and the \(1\)-form in the first item of Definition 14. Throughout this section, we suppose that we have a codimension-one Anosov flow \(F=\{f^{t},t\in\mathbb{R}\}\) generated by a \(C^{1}\) vector field \(Z\) whose stable bundle is generated by a \(C^{1}\) vector field \(X\). We recall that we have a globally defined \(1\)-form \(\eta\) with the properties that \[(f^{t})^{*}\eta=\eta\quad\text{ and }\quad\eta(Z)\equiv 1\quad\forall t\in\mathbb{R}. \tag{24}\] Since the stable bundle is one dimensional and generated by a vector field \(X\), the weak unstable bundle \(\mathbb{E}^{cu}\) is defined by a \(1\)-form \(\omega\) with the properties that \[\ker\omega=\mathbb{E}^{cu},\quad\text{ and }\quad\omega(X)\equiv 1. \tag{25}\] We also recall that the weak unstable bundle, being codimension-one, is \(C^{1}\); since \(X\) is also a \(C^{1}\) vector field, the \(1\)-form \(\omega\) is therefore \(C^{1}\). Given a \(C^{2}\) positive function \(\psi:M\to(0,\infty)\), we define the function \(h_{\psi}:M\to\mathbb{R}\) as \[h_{\psi}:=\frac{1}{\psi}\int_{0}^{\infty}d(\psi\circ f^{t})(X)dt.\] By (4), the stable bundle of the flow \(F_{\psi}\) is given by \[X_{\psi}=X+h_{\psi}Z.\]

**Lemma 21**.: _There exist a continuous line bundle \(L\subset\mathbb{E}^{u}\) and a \(C^{1}\) differential \(1\)-form \(\alpha_{0}\) with \(\alpha_{0}(Z)\equiv 1\) such that \(\|\alpha_{0}|_{L}\|>\frac{1}{2}\)._

Proof.: Let \(L\subset\mathbb{E}^{u}\) be a \(C^{\theta}\) line bundle; this can be obtained by first considering a vector field \(\widetilde{Z}\) in \(\mathbb{E}^{cu}\) that approximates \(Z\) but is nowhere equal to it, and then projecting this vector field onto \(\mathbb{E}^{u}\). Let \(p\in M\); in a neighborhood of \(p\), we can find \(\alpha_{p}\) solving the following equations \[\mathfrak{L}_{Z}\alpha_{p}\equiv 0,\quad\alpha_{p}(Z)(p)=1,\quad\alpha_{p}(v^{u})(p)=1\quad\text{ and }\quad\alpha_{p}(\widetilde{Z})(p)=0\] where \(v^{u}\in L(p)\) is defined by \(Z(p)=\widetilde{Z}(p)+v^{u}\). Since \(Z\) is \(C^{1}\), \(\alpha_{p}\) is also \(C^{1}\). This implies that there exists \(\delta_{p}>0\) such that \(\alpha_{p}(v^{u})>\frac{1}{2}\) for \(v^{u}\in L(q)\) with \(Z(q)=\widetilde{Z}(q)+v^{u}\) and \(q\in B(p,\delta_{p})\). This defines an open cover \(\{B_{1},\cdots,B_{n}\}\) such that in each \(B_{i}\) there exists a \(C^{1}\) differential \(1\)-form \(\alpha_{i}\) with \(\|\alpha_{i}|_{L}\|>\frac{1}{2}\). We take a partition of unity \(\{\rho_{1},\cdots,\rho_{n}\}\) subordinate to the cover and define \[\alpha_{0}=\sum_{i=1}^{n}\rho_{i}\alpha_{i}.\] It is easy to see that \(\|\alpha_{0}|_{L}\|>\frac{1}{2}\), which implies the result. 

Let \(\alpha_{0}\) be given by the above Lemma, and define \(\alpha\) by \[\alpha:=\alpha_{0}-\alpha_{0}(X)\omega.\] Since \(X\) and \(\omega\) are \(C^{1}\), \(\alpha\) defines a \(C^{1}\) form on \(TM\). It is easy to see that \[\alpha(Z)\equiv 1\quad\text{ and }\quad X\in\ker(\alpha). \tag{26}\] Given a \(C^{2}\) positive function \(\psi\), we let \(\alpha^{\psi}\) be the 1-form defined by \[\alpha^{\psi}:=\psi(\alpha-h_{\psi}\omega).\] Then it is easy to see that \(X_{\psi}\in\ker(\alpha^{\psi})\) and \(\alpha^{\psi}(Z_{\psi})\equiv 1\); indeed, by (25) and (26), \(\alpha^{\psi}(X_{\psi})=\psi\big(\alpha(X)+h_{\psi}\alpha(Z)-h_{\psi}\omega(X)-h_{\psi}^{2}\omega(Z)\big)=\psi(h_{\psi}-h_{\psi})=0\) and \(\alpha^{\psi}(Z_{\psi})=(\alpha-h_{\psi}\omega)(Z)=1\). The main step towards proving the Verjovsky conjecture is to solve the following equation \[\mathfrak{L}_{X_{\psi}}\alpha^{\psi}=i_{X_{\psi}}\circ d\alpha^{\psi}\equiv 0.
\tag{27}\]

**Remark 22**.: We first observe that since \(X_{\psi}\in\ker(\alpha^{\psi})\) we have \(\mathfrak{L}_{X_{\psi}}\alpha^{\psi}(X_{\psi})\equiv 0.\) In addition, since the stable bundle of the Anosov flow generated by \(Z_{\psi}=Z/\psi\) is given by \(X_{\psi}\), we have \([X_{\psi},Z_{\psi}]=a_{\psi}X_{\psi}\) for some function \(a_{\psi}\), which, using Cartan's formula and the fact that \(\alpha^{\psi}(Z_{\psi})\equiv 1\), implies \(\mathfrak{L}_{X_{\psi}}\alpha^{\psi}(Z_{\psi})=0\). Therefore, to construct \(\psi\) such that \(\mathfrak{L}_{X_{\psi}}\alpha^{\psi}\equiv 0\), we can focus on annihilating \(\mathfrak{L}_{X_{\psi}}\alpha^{\psi}|_{\mathbb{E}^{u}}\). By the definition of the Lie derivative, writing \(\beta^{\psi}:=\alpha-h_{\psi}\omega\) gives \[\mathfrak{L}_{X_{\psi}}\alpha^{\psi}=X_{\psi}(\psi)\cdot\beta^{\psi}+\psi\mathfrak{L}_{X_{\psi}}\beta^{\psi}.\] Using Cartan's formula and the fact that \(X_{\psi}\in\ker(\beta^{\psi})\) we have \(\mathfrak{L}_{X_{\psi}}\beta^{\psi}=i_{X_{\psi}}\circ d\beta^{\psi}\). From the definition of \(\beta^{\psi}\), we have \(d\beta^{\psi}=d\alpha-dh_{\psi}\wedge\omega-h_{\psi}d\omega\). Therefore we have, for \(v^{u}\in\mathbb{E}^{u}\), \[\mathfrak{L}_{X_{\psi}}\beta^{\psi}(v^{u})=d\beta^{\psi}(X_{\psi},v^{u})=dh_{\psi}(v^{u})+h_{\psi}\cdot\left(d\alpha(Z,v^{u})-d\omega(X,v^{u})\right)+d\alpha(X,v^{u}).\] Thus, solving (27) is equivalent to finding a positive \(C^{2}\) function \(\psi\) such that \[\left(\frac{X_{\psi}(\psi)}{\psi}\alpha+dh_{\psi}+h_{\psi}\cdot\left(i_{Z}\circ d\alpha-i_{X}\circ d\omega\right)+i_{X}\circ d\alpha\right)|_{\mathbb{E}^{u}}\equiv 0. \tag{28}\] For notational purposes we write \[\Theta(\psi):=X_{\psi}(\psi)\alpha+\psi\cdot dh_{\psi}+\psi\cdot h_{\psi}\cdot\left(i_{Z}\circ d\alpha-i_{X}\circ d\omega\right)+\psi i_{X}\circ d\alpha.\] Observe that (27) is equivalent to finding a \(C^{2}\) positive function with \[\Theta(\psi)|_{\mathbb{E}^{u}}\equiv 0. \tag{29}\] To simplify the notation, we write \(g_{\psi}=\psi\cdot h_{\psi}=\int_{0}^{\infty}d(\psi\circ f^{t})(X)dt\) to have \[\Theta(\psi)=X(\psi)\alpha+dg_{\psi}+g_{\psi}\cdot(-\frac{d\psi}{\psi}+\frac{Z(\psi)}{\psi}\alpha+i_{Z}\circ d\alpha-i_{X}\circ d\omega)+\psi i_{X}\circ d\alpha.\] We will also need the following notation \[\overline{\Theta}(\psi):=dg_{\psi}+g_{\psi}\cdot(-\frac{d\psi}{\psi}+\frac{Z(\psi)}{\psi}\alpha+i_{Z}\circ d\alpha-i_{X}\circ d\omega)+\psi i_{X}\circ d\alpha. \tag{30}\]

**Remark 23**.: We emphasize that, for the purpose of proving the Verjovsky Conjecture, we will not solve (27) in this strong formulation. Instead, we will construct a sequence of functions that nearly solves the equation near pieces of sections, as required in the definition of special synchronisability.

## 5 The time change

This section is devoted to proving the following

**Proposition 24**.: _Let \(M\) be a smooth manifold of dimension at least four and \(F=\{f^{t}:t\in\mathbb{R}\}:M\to M\) be a \(C^{1}\) codimension-one volume preserving Anosov flow such that the stable bundle is generated by a \(C^{1}\) vector field \(X\) with the property \(Df^{t}X=e^{-t}X\) for every \(t\in\mathbb{R}\). Then \(F\) is specially synchronisable._

The proof of Proposition 24 is given in Section 5.2, via a more general and technical result, Proposition 25. For the rest of this section, we suppose that we have an Anosov flow that satisfies the condition of Proposition 24, i.e., one that has Parry's synchronisation.
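Under this standing assumption, the formula for \(h_{\psi}\) used in the proof of Proposition 8 agrees with the definition given in Section 4; the verification is a one-line computation which we include for convenience. Using \(Df^{t}X=e^{-t}X\),
\[g_{\psi}(q)=\int_{0}^{\infty}d(\psi\circ f^{t})(X)(q)\,dt=\int_{0}^{\infty}d\psi_{f^{t}q}\big(Df^{t}X(q)\big)\,dt=\int_{0}^{\infty}e^{-t}X(\psi)(f^{t}q)\,dt,\]
and dividing by \(\psi\) recovers \(h_{\psi}\).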
### General strategy

We first remark that Equation (27) is not a pointwise equation, as the term \(g_{\psi}\) contains information along the full forward orbit. This makes the equation difficult to solve locally. (27) can be written as \[X(\psi)\alpha+dg_{\psi}+g_{\psi}\cdot(-\frac{d\psi}{\psi}+\frac{Z(\psi)}{\psi}\alpha+i_{Z}\circ d\alpha-i_{X}\circ d\omega)+\psi i_{X}\circ d\alpha\equiv 0. \tag{31}\] The special synchronisability condition suggests that it is enough to solve the above equation on certain pieces of sections of the flow. As we mentioned previously, the term \[dg_{\psi}+g_{\psi}\cdot(-\frac{d\psi}{\psi}+\frac{Z(\psi)}{\psi}\alpha+i_{Z}\circ d\alpha-i_{X}\circ d\omega) \tag{32}\] makes it difficult to solve the problem locally. In other words, solving the equation \[X(\psi)\alpha+\psi i_{X}\circ d\alpha\equiv 0 \tag{33}\] on a piece of section of the flow is much easier, as a solution can be produced by the classical method of characteristics for partial differential equations. The only difficulty is that it is an overdetermined equation, since we need it to be satisfied on every vector. The way we overcome this difficulty is to observe that, since \(X(\psi)\alpha+\psi i_{X}\circ d\alpha\) is a 1-form, it has a kernel; therefore there is essentially only one direction that we need to control. Roughly speaking, if we try to solve (33) inductively by approximation, say by a sequence \(\psi_{n}\) approximating the solution, then at step \(n+1\) we only need to consider the equation applied to a vector that is off the kernel of \(X(\psi_{n})\alpha+\psi_{n}i_{X}\circ d\alpha\): this is the main motivation of Lemma 26. In order to approximate a solution of (31) with the method described above, we observe that, since we only need the solution near the pieces of sections, given a function \(\psi\) we can make a small perturbation \(\widetilde{\psi}\) near the section such that \(\widetilde{\psi}\) is closer to a solution of (31), while the quantities in (32) remain controlled because the perturbation is done in a very short tubular neighborhood of the section. This is done in Proposition 25. In Lemma 29, we give an estimate of the terms in (32) which suggests that (31) and (33) are almost equivalent when we only need an approximate solution on pieces of sections. Once more, we emphasize that we are not going to solve (31); the idea is to use an approximation that nearly solves it and is good enough to define the 1-form that gives integrability of the stable and unstable bundles of the limit flow.

### Time change on sections

In this section, we suppose that we are given an admissible section \(\mathcal{S}=\{\mathcal{D}_{i}\}_{i=1}^{N}\), and for notational purposes, for \(\tau>0\) we write \[\mathcal{S}^{(\tau)}=\{\mathcal{U}_{i}^{(\tau)},i=1,\cdots,N\}\quad\text{ where }\quad\mathcal{U}_{i}^{(\tau)}:=\bigcup_{|t|<\tau}f^{t}\mathcal{D}_{i}.\]

**Proposition 25**.: _Given a function \(\psi\in\mathfrak{D}\), there exists a constant \(C_{2}>0\) that only depends on \(M,\mathcal{S},F,\psi,\alpha\) such that for every \(\varepsilon>0\), there exist \(T_{\varepsilon},\tau>0\) such that for \(T>T_{\varepsilon}\) there exists \(\widetilde{\psi}_{T}\in\mathfrak{D}\) such that_ \[e^{-T}\leq\tau^{2}\quad\text{ and }\quad\|\psi-\widetilde{\psi}_{T}\|_{\mathfrak{D}},\|\Theta(\widetilde{\psi}_{T})|_{f^{T}\mathcal{S}^{(\tau/2)}}\|\leq C_{2}\varepsilon.
\tag{34}\]

Proof of Proposition 24.: The first requirement in the definition of special synchronisation is satisfied by the 1-form \(\alpha\) constructed in (26). Let \(\theta\in(0,1)\). We define \(\psi_{0}\equiv 3/2\) and let \(\psi_{1}\) be the function given by Proposition 25 for \(\psi=\psi_{0}\) and \(\varepsilon=C_{2}^{-1}(\psi_{0})\theta\). Then (8) is trivially satisfied. Equation (9) follows from the observation in Section 4 that \(i_{X_{\psi}}\circ d\alpha^{\psi}=\Theta(\psi).\) Inductively, given \(\psi_{n-1}\), we construct \(\psi_{n}\) from Proposition 25 with \(\varepsilon=C_{2}^{-1}(\psi_{n-1})\theta^{n}\). As in the first step, the sequence of functions \(\{\psi_{n},n\geq 1\}\) satisfies the definition of special synchronisation. 

The rest of this section is devoted to the proof of Proposition 25. In Section 5.2.1, we give the definition of \(T_{\varepsilon}\) and \(\widetilde{\psi}_{T}\). In Section 5.2.2, we prove Proposition 25.

#### 5.2.1 Definition of the perturbation

Since \(\Theta(\psi)\) is a 1-form, to control its norm it is enough to control it in a certain direction that is off the kernel. The motivation of the following Lemma is to define the direction that we need to care about in order to estimate \(\|\Theta(\psi)\|\).

**Lemma 26**.: _There exists \(\delta>0\) such that for every \(\psi\in\mathfrak{D}\), there exists a continuous vector field \(V\in\mathbb{E}^{u}\) with \(\alpha(V)=1\) and \(\|V\|\leq\delta^{-1}\) such that_ \[\measuredangle(V(q),\ker(\overline{\Theta}(\psi))_{q})\geq\delta\quad\text{ whenever }\quad\mathbb{E}^{u}_{q}\not\subset\ker(\overline{\Theta}(\psi))_{q}.\]

Proof.: First of all, we recall that from Lemma 21, there exists a line bundle \(L\subset\mathbb{E}^{u}\) such that \(\|\alpha|_{L}\|>\frac{1}{2}\). Then there exists \(\delta>0\) such that \(\|\alpha|_{C_{\delta,L}}\|>\frac{1}{2}\), where \(C_{\delta,L}\) is a cone in the unstable bundle of angle \(2\delta\) around \(L\). We let \[N=\{q\in M:\mathbb{E}^{u}_{q}\not\subset\ker(\overline{\Theta}(\psi))_{q}\}.\] On \(N\) we can choose a vector field \(\widetilde{V}\in\mathbb{E}^{u}\) such that \(\measuredangle(\widetilde{V},\ker(\overline{\Theta}(\psi)))>\delta\). We define \(V_{1}\) as the projection of \(\widetilde{V}\) onto the cone \(C_{\delta,L}\). This gives the required vector field on \(N\). We then extend \(V_{1}\) to a vector field on the closure of \(N\); after normalising, this vector field \(V_{1}\) satisfies \(\alpha(V_{1})=1\) and \(\|V_{1}\|\leq\delta^{-1}\). It is then easily extended to a continuous vector field on \(M\), since on \(M\setminus N\) we can take any extension. 

**Corollary 1**.: _For every \(q\in M\), there exists a set of vectors \(\{v_{1},\cdots,v_{m}\}\) that spans \(\mathbb{E}^{u}_{q}\) such that_ \[\alpha(v_{i})=1,\quad\|v_{i}\|\leq\delta^{-1},\quad\overline{\Theta}(\psi)(V-v_{i})=0\quad\text{ and }\quad\measuredangle(v_{i},v_{j})>\delta/2\quad\text{ for }i\neq j.\]

Proof.: If \(\mathbb{E}^{u}_{q}\subset\ker(\overline{\Theta}(\psi))_{q}\) then any set of vectors in \(C_{\delta,L}\) with \(\alpha(v_{i})=1\) whose pairwise angles are bigger than \(\delta/2\) satisfies the result.
If \(\mathbb{E}^{u}_{q}\not\subset\ker(\overline{\Theta}(\psi))_{q}\) then, since \(\measuredangle(V,\ker(\overline{\Theta}(\psi)))>\delta/2\), we can choose \(m-1\) vectors \(v_{2},\cdots,v_{m}\in C_{\delta,L}\) such that \(\overline{\Theta}(\psi)(V-v_{i})=0\), with the properties that \(\measuredangle(v_{i},v_{j}),\measuredangle(V,v_{i})>\delta\). We then choose \(v_{1}=V(q)\) to get the required family of vectors. 

Definition of \(T_{\varepsilon}\):Since the vector field \(V\) is continuous and \(\psi\in\mathfrak{D}\), the function \(\Theta(\psi)(V)\) is continuous. By the classical method of mollifying non-smooth functions, we can choose \(\iota>0\) small enough such that if \(\Phi\) is the \(\iota\)-mollification of the function \(-\Theta(\psi)(V)\) then we have \[\|\Phi+\Theta(\psi)(V)\|<\varepsilon. \tag{35}\] Moreover, there exists a constant \(C_{3}>0\) that only depends on \(M\) such that \[\|\Phi\|_{C^{r}}\leq C_{3}\iota^{-r}. \tag{36}\] We recall that, from the definition of admissible section, the closures of the local sections are pairwise disjoint; hence there exists \(\sigma>0\) such that the \(\sigma\)-neighborhoods of the pieces of sections are pairwise disjoint, i.e., \[\{q\in M:d(q,\mathcal{D}_{i})<\sigma\}\cap\{q\in M:d(q,\mathcal{D}_{j})<\sigma\}=\emptyset\quad\text{ for }\quad i\neq j. \tag{37}\] For every \(i=1,\cdots,N\), we define a \(\sigma\)-thickening of the local section \(\mathcal{D}_{i}\) as a local section \(\widetilde{\mathcal{D}}_{i}\) with the following properties:

1. \(\mathcal{D}_{i}\subset\widetilde{\mathcal{D}}_{i}\),
2. \(\widetilde{\mathcal{D}}_{i}\) is an admissible disk,
3. \(d(\partial\widetilde{\mathcal{D}}_{i},\partial\mathcal{D}_{i})\geq\sigma\).

Let \(\lambda_{u}>1\) be defined such that \(\|Df^{-t}|_{\mathbb{E}^{u}}\|\leq\lambda_{u}^{-t}\) for all \(t>0\). We also recall that, using the fact that \(F\) preserves a volume form and \(\dim(\mathbb{E}^{s})=1<\dim(\mathbb{E}^{u})\), we have the following bunching condition \[e^{-t}\|Df^{t}|_{\mathbb{E}^{u}}\|\leq e^{-\zeta t} \tag{38}\] for some \(\zeta>0\) and \(t\) large enough. Let \(\tau>0\) and \(T_{\varepsilon}\) be such that \[e^{-T_{\varepsilon}}<\lambda_{u}^{-T_{\varepsilon}}<e^{-\zeta T_{\varepsilon}}<\tau^{2}<\min\{\iota^{4},\sigma^{4}\}. \tag{39}\] For each \(i\), we define a flow box around \(\mathcal{D}_{i}\) of height \(\tau\) by: \[\widetilde{\mathcal{U}}_{i}^{(\tau)}:=\bigcup_{|t|<\tau}f^{t}\widetilde{\mathcal{D}}_{i}.\] Since \(\tau<\sigma\), by (37) we have \[\widetilde{\mathcal{U}}_{i}^{(\tau)}\cap\widetilde{\mathcal{U}}_{j}^{(\tau)}=\emptyset.\] We consider the following ordinary differential equation \[X(\overline{\rho}_{h,i})=e^{-T}\Phi\circ f^{T}\text{ on }\widetilde{\mathcal{U}}_{i}^{(\tau/2)}\quad\text{ and }\quad\overline{\rho}_{h,i}(\widetilde{\mathcal{L}}_{i})\equiv 0 \tag{40}\] or, equivalently, \[X(\overline{\rho}_{h,i}\circ f^{-T})=\Phi\text{ on }f^{T}\widetilde{\mathcal{U}}_{i}^{(\tau/2)}\quad\text{ and }\quad\overline{\rho}_{h,i}(\widetilde{\mathcal{L}}_{i})\equiv 0.
\tag{41}\] where \[\widetilde{\mathcal{L}}_{i}:=\bigcup_{|t|<\tau/2}f^{t}\mathcal{L}_{i}.\] Equation (40) is a first order linear PDE that has a unique solution given by the classical method of characteristics: \[\overline{\rho}_{h,i}\circ e^{sX}(q)=e^{-T}\int_{0}^{s}\Phi\circ f^{T}\circ e^{rX}(q)dr\quad\text{ for }\quad q\in\widetilde{\mathcal{L}}_{i}\text{ and }s\in[-1,1],\] or, equivalently, \[\overline{\rho}_{h,i}\circ f^{-T}\circ e^{se^{-T}X}(f^{T}q)=e^{-T}\int_{0}^{s}\Phi\circ e^{re^{-T}X}(f^{T}q)dr\quad\text{ for }\quad q\in\widetilde{\mathcal{L}}_{i}\text{ and }s\in[-1,1]. \tag{42}\] The submanifold \(\mathcal{L}_{i}\) being transversal to the vector field \(X\) gives a definition of a function \(s:\widetilde{\mathcal{U}}_{i}^{(\tau/2)}\to\mathbb{R}\) by \[e^{-s_{q}X}(q)\in\widetilde{\mathcal{L}}_{i}.\] With this function, we can rewrite the solution of Equation (40) as \[\overline{\rho}_{h,i}(q)=e^{-T}\int_{0}^{s_{q}}\Phi\circ e^{(r-s_{q})e^{-T}X}(f^{T}q)dr. \tag{43}\] Since \(\widetilde{\mathcal{L}}_{i}\) is \(C^{1}\), there exists a constant \(C_{4}>0\) that only depends on \(X\) and \(\mathcal{L}_{i}\) such that \[\|s_{(.)}\|_{C^{1}}\leq C_{4}. \tag{44}\] Let \(\beta_{i}:\widetilde{\mathcal{U}}_{i}^{(\tau/2)}\to[0,1]\) be the standard bump function such that \[\beta_{i}\equiv 1\quad\text{ in }\quad\mathcal{D}_{i}\quad\text{ and }\quad\beta_{i}\equiv 0\quad\text{ near }\quad\partial\widetilde{\mathcal{D}}_{i}.\] Moreover, there is a constant \(C_{5}>0\) that only depends on \(M\) such that \[\|\beta_{i}\|_{C^{r}}\leq C_{5}\sigma^{-r}\quad\text{ for every }\quad r>0. \tag{45}\] We define \(\rho_{h,i}:\mathcal{U}_{i}^{(\tau/2)}\to\mathbb{R}\) by \[\rho_{h,i}:=\beta_{i}\cdot\overline{\rho}_{h,i}.\]

**Remark 27**.: _The purpose of the bump function is to define a function that is supported in \(\widetilde{\mathcal{D}}_{i}\) and satisfies_ \[X(\rho_{h,i})=e^{-T}\Phi\circ f^{T}\quad\text{ in }\quad\mathcal{U}_{i}^{(\tau/2)}. \tag{46}\]

Let \(\rho_{v}:\mathbb{R}\to[0,1]\) be the standard bump function that satisfies \[\text{supp}(\rho_{v})\subset(-\tau,\tau),\quad\rho_{v}\equiv 1\quad\text{ in }\quad(-\tau/2,\tau/2)\quad\text{ and }\quad\|\rho_{v}\|_{C^{r}}\leq C_{5}\tau^{-r}. \tag{47}\] For every \(i\), we define a function \(\rho_{v,i}:\widetilde{\mathcal{U}}_{i}^{(\tau)}\to[0,1]\) by \[\rho_{v,i}(q)=\rho_{v}(t_{q})\] where \(t_{q}\) is defined by \(f^{-t_{q}}q\in\widetilde{\mathcal{D}}_{i}\). We observe that the function \(t_{(.)}:\widetilde{\mathcal{U}}_{i}^{(\tau)}\to(-\tau,\tau)\) is \(C^{1}\) and there exists a constant \(C_{6}>0\) that only depends on \(\widetilde{\mathcal{D}}_{i}\) and \(Z\) such that \[\|t_{(.)}\|_{C^{1}}\leq C_{6}. \tag{48}\] We define the function \(\rho:M\to\mathbb{R}\) by \[\rho:=\sum_{i}\rho_{i}\quad\text{ where }\quad\rho_{i}:=\rho_{h,i}\rho_{v,i}.\] We define \(\widetilde{\psi}_{T}:M\to\mathbb{R}\) by \[\widetilde{\psi}_{T}=\psi+\rho\circ f^{-T}. \tag{49}\] To simplify the notation, in all estimates in the following we write \(\widetilde{\psi}\) to mean \(\widetilde{\psi}_{T}\).

#### 5.2.2 Estimates of the perturbation

This section is devoted to estimating the function \(\rho\) defined in the previous section.
**Lemma 28**.: _There exists a constant \(C_{7}>0\) such that for every \(i=1,\cdots,N\) we have the following:_ \[\|X(\rho_{i}\circ f^{-T})\|_{C^{0}}\leq C_{7}(\|\Theta(\psi)(V)\|_{C^{0}}+\varepsilon),\quad\|Z(\rho_{i}\circ f^{-T})\|_{C^{0}}\leq C_{7}\tau^{-1}e^{-T}(\|\Theta(\psi)(V)\|_{C^{0}}+\varepsilon). \tag{50}\] _If \(V\in\mathbb{E}^{u}\) is a unit vector then we have_ \[\|V(\rho_{i}\circ f^{-T})\|_{C^{0}}\leq C_{7}\tau^{-1}\lambda_{u}^{-T}, \tag{51}\] \[\|VX(\rho_{i}\circ f^{-T})\|_{C^{0}}\leq C_{7}\iota^{-1}. \tag{52}\]

Proof.: For every \(i\), by the definition of \(\rho_{i}\) we have (writing \(\pi_{i}\) for the projection onto \(\widetilde{\mathcal{D}}_{i}\) along the flow) \[X(\rho_{i}\circ f^{-T})(f^{T}q) =e^{T}(X(\rho_{v,i})\cdot\rho_{h,i}+\rho_{v,i}\cdot X(\rho_{h,i}))(q)\] \[=e^{T}(X(t_{q})\cdot\rho_{v}^{\prime}(t_{q})\rho_{h,i}(\pi_{i}q)+\rho_{v}(t_{q})\cdot X(\rho_{h,i})(\pi_{i}q)).\] Since \(X\) is tangent to \(\widetilde{\mathcal{D}}_{i}\) we have \(X(t_{q})=0\); then, using that \(\rho_{h,i}=\beta_{i}\cdot\overline{\rho}_{h,i}\), we have \[\|X(\rho_{i}\circ f^{-T})\|_{C^{0}}\leq e^{T}(\|\beta_{i}\|_{C^{1}}\cdot\|\overline{\rho}_{h,i}\|_{C^{0}}+\|\beta_{i}\|_{C^{0}}\cdot\|X(\overline{\rho}_{h,i})\|_{C^{0}}).\] Using (45), (40) and (43), we have \[\|X(\rho_{i}\circ f^{-T})\|_{C^{0}}\leq 2C_{5}\sigma^{-1}\|\Phi\|\leq 2C_{5}\sigma^{-1}(\|\Theta(\psi)(V)\|_{C^{0}}+\varepsilon), \tag{53}\] where the last inequality uses (35). To estimate \(\|Z(\rho_{i}\circ f^{-T})\|_{C^{0}}\), since \(Z(\rho_{h,i})=0\) we can write \[\|Z(\rho_{i}\circ f^{-T})\|\leq\|\rho_{v,i}\|_{C^{1}}\|\rho_{h,i}\|_{C^{0}}\leq C_{5}\tau^{-1}e^{-T}(\|\Theta(\psi)(V)\|_{C^{0}}+\varepsilon), \tag{54}\] where the last inequality uses (47) and (43). Equations (53) and (54) give the estimate (50) for any \(C_{7}>\max\{C_{5},2C_{5}\sigma^{-1}\}\). Let \(V\in\mathbb{E}_{f^{T}q}^{u}\) be a unit vector for some \(q\in\mathcal{U}_{i}^{(\tau)}\); then \(V\in T_{f^{T}q}f^{T}\mathcal{U}_{i}^{(\tau)}\). In particular there exists a unit vector \(V_{0}\in\mathbb{E}_{q}^{u}\) such that \(V=Df^{T}V_{0}/\|Df^{T}V_{0}\|\), and we set \(V_{T}=Df^{-T}V=V_{0}/\|Df^{T}V_{0}\|\), to obtain \[V(\rho_{i}\circ f^{-T})(f^{T}q) =(V_{T}(\rho_{v,i})\cdot\rho_{h,i}+\rho_{v,i}\cdot V_{T}(\rho_{h,i}))(q)\] \[=V_{T}(t_{q})\cdot\rho_{v}^{\prime}(t_{q})\rho_{h,i}(\pi_{i}q)+\rho_{v}(t_{q})\cdot D\pi_{i}V_{T}(\rho_{h,i})(\pi_{i}q).\] Since \(V\in\mathbb{E}^{u}\), we have \(\|V_{T}\|\leq\lambda_{u}^{-T}\) and, using (48), \(\|V_{T}(t_{q})\|\leq C_{6}\lambda_{u}^{-T}\). To estimate \(D\pi_{i}V_{T}(\rho_{h,i})\), we use its definition to get \[\|D\pi_{i}V_{T}(\rho_{h,i})\|\leq\lambda_{u}^{-T}(\|D\pi_{i}V_{0}(\beta_{i})\|_{C^{0}}\|\overline{\rho}_{h,i}\|_{C^{0}}+\|\beta_{i}\|_{C^{0}}\|D\pi_{i}V_{0}(\overline{\rho}_{h,i})\|).\] From the definition of \(\overline{\rho}_{h,i}\) in (43), if \(Y\) is a unit vector field tangent to \(\mathcal{D}_{i}\), writing \(Y_{T}=Df^{T}Y\), we have \(\|Y(\overline{\rho}_{h,i})\|\leq e^{-T}\|s_{(.)}\|_{C^{1}}\|\Phi\|_{C^{0}}+e^{-T}\|Y_{T}\|\|\Phi\|_{C^{1}}\leq 1\), where the last inequality uses the bunching condition in (38) and the relationship between the constants in (39).
Substituting into the above gives \[\|V(\rho_{i}\circ f^{-T})\|_{C^{0}}\leq\lambda_{u}^{-T}. \tag{55}\] Differentiating the first displayed line in this proof and using \(X(t_{q})=0\) gives \[VX(\rho_{i}\circ f^{-T})(f^{T}q)=e^{T}(V_{T}(t_{q})\rho_{v}^{\prime}(t_{q})\cdot X(\rho_{h,i})(\pi_{i}q)+\rho_{v}(t_{q})\cdot D\pi_{i}V_{T}X(\rho_{h,i})(\pi_{i}q)).\] Using (40) we have \(e^{T}\|X(\rho_{h,i})\|_{C^{0}}\leq\|\Phi\|_{C^{0}}\); by the previous argument we have \(|V_{T}(t_{q})\rho_{v}^{\prime}(t_{q})|\leq C_{6}\lambda_{u}^{-T}\tau^{-1}\) and \(\|D\pi_{i}V_{T}X(\rho_{h,i})\|_{C^{0}}\leq\|\Phi\|_{C^{1}}\). Substituting this into the previous displayed line and using the relationship between the constants in (39), we have the desired estimate. 

**Lemma 29**.: _There exists a constant \(C_{8}>0\) such that if \(V\in\mathbb{E}^{u}\) is a unit norm vector field we have_ \[\|g_{\rho\circ f^{-T}}\|_{C^{0}},\|Vg_{\rho\circ f^{-T}}\|_{C^{0}}\leq C_{8}\sqrt{\tau}. \tag{56}\]

Proof.: We recall that all the quantities we are going to estimate need information along the full forward orbit. To carry out the estimate, for \(q\in M\) we consider the splitting of the forward orbit of \(q\) given by \[0\leq t_{1}<t_{1}+2\tau<t_{2}<t_{2}+2\tau<\ldots<t_{n}<t_{n}+2\tau<\ldots\] such that \[f^{t_{i}}(q)\in\bigcup_{j}\widetilde{\mathcal{U}}_{j}^{(\tau)}\quad\text{ and }\quad f^{t}(q)\notin\bigcup_{j}\widetilde{\mathcal{U}}_{j}^{(\tau)}\quad\text{ for }\quad t\in(t_{i}+2\tau,t_{i+1}).\] The times \(t_{i}\) correspond to the return times to some \(\widetilde{\mathcal{U}}_{j}^{(\tau)}\), and the pieces of the orbit of \(q\) with \(t\in(t_{i}+2\tau,t_{i+1})\) stay outside the flow boxes. We observe that, by (39), we have \[|t_{i+1}-t_{i}-2\tau|>\sigma/2.\] Since \(\rho\circ f^{-T}\) is supported in the images under \(f^{T}\) of the flow boxes \(\widetilde{\mathcal{U}}_{j}^{(\tau)}\), we have \[g_{\rho\circ f^{-T}}(q)=\sum_{i}\int_{t_{i}}^{t_{i}+\tau}e^{-t}X(\rho\circ f^{-T})\circ f^{t}(q)dt=\sum_{i}e^{-t_{i}}\int_{0}^{\tau}e^{-t}X(\rho\circ f^{-T})\circ f^{t_{i}+t}(q)dt.\] By the definition of \(\rho\) we have \[X(\rho\circ f^{-T})=\sum_{i}\rho_{v,i}\circ f^{-T}\,X(\rho_{h,i}\circ f^{-T}).\] Using (50) and (39) we have \[\|g_{\rho\circ f^{-T}}(q)\|\leq 2\tau(C_{7}\sigma^{-1}\|\Theta(\psi)\|_{C^{0}}+\varepsilon)\sum_{i}e^{-t_{i}}\leq 2\tau\frac{C_{7}\sigma^{-1}\|\Theta(\psi)\|_{C^{0}}+\varepsilon}{1-e^{-\sigma}}\leq C_{8}\sqrt{\tau}.\] That gives the estimate of \(\|g_{\rho\circ f^{-T}}\|_{C^{0}}\). For the \(C^{1}\) norm, we take any unit norm vector field \(V\in T_{q}M\) and use the same splitting as above to get \[V(g_{\rho\circ f^{-T}})(q)=\sum_{i}\int_{t_{i}}^{t_{i}+\tau}e^{-t}V_{t}X(\rho\circ f^{-T})\circ f^{t}(q)dt=\sum_{i}e^{-t_{i}}\int_{0}^{\tau}e^{-t}V_{t_{i}+t}X(\rho\circ f^{-T})\circ f^{t_{i}+t}(q)dt,\] where \(V_{t}=Df^{t}V\).
Then, similarly to the previous estimate, using the bunching condition (38) we have \[\|V(g_{\rho\circ f^{-T}})(q)\|\leq 2\tau\iota^{-2}\sum_{i}e^{-\zeta t_{i}}\leq\frac{2\tau\iota^{-2}}{1-e^{-\zeta\sigma}}\leq C_{8}\sqrt{\tau}.\] 

**Corollary 2**.: _There exists a constant \(C_{9}>0\) such that if \(V\in\mathbb{E}^{u}\) is a unit vector field then we have_ \[\left\|\overline{\Theta}(\widetilde{\psi})(V)-\overline{\Theta}(\psi)(V)\right\|\leq C_{9}\sqrt{\tau}. \tag{57}\]

Proof.: We recall that \[\overline{\Theta}(\psi):=dg_{\psi}+g_{\psi}\cdot(-\frac{d\psi}{\psi}+\frac{Z(\psi)}{\psi}\alpha+i_{Z}\circ d\alpha-i_{X}\circ d\omega)+\psi i_{X}\circ d\alpha. \tag{58}\] Then we have \[\begin{split}\overline{\Theta}(\widetilde{\psi})-\overline{\Theta}(\psi)&=(dg_{\widetilde{\psi}}-dg_{\psi})+g_{\psi}(-\frac{d\widetilde{\psi}}{\widetilde{\psi}}+\frac{d\psi}{\psi}+\frac{Z(\widetilde{\psi})}{\widetilde{\psi}}\alpha-\frac{Z(\psi)}{\psi}\alpha)\\&+(g_{\widetilde{\psi}}-g_{\psi})(-\frac{d\psi}{\psi}+\frac{Z(\psi)}{\psi}\alpha+(i_{Z}\circ d\alpha-i_{X}\circ d\omega))+(\widetilde{\psi}-\psi)i_{X}\circ d\alpha.\end{split} \tag{59}\] We are going to estimate each term of the above equality. We first observe that, by (56), the first term is estimated by \[\left\|(dg_{\widetilde{\psi}}-dg_{\psi})(V)\right\|=\|dg_{\rho\circ f^{-T}}(V)\|\leq C_{8}\sqrt{\tau}. \tag{60}\] Similarly, using that \(\psi>1\) and (56), we have \[\left\|(g_{\widetilde{\psi}}-g_{\psi})(-\frac{d\psi}{\psi}+\frac{Z(\psi)}{\psi}\alpha+(i_{Z}\circ d\alpha-i_{X}\circ d\omega))\right\|\leq C_{8}\sqrt{\tau}(2\|\psi\|_{\mathfrak{D}}+\|\alpha\|_{C^{1}}+\|\omega\|_{C^{1}}). \tag{61}\] Using (43) we have \[\|(\widetilde{\psi}-\psi)i_{X}\circ d\alpha\|\leq e^{-T}\|\Phi\|\|\alpha\|_{C^{1}}. \tag{62}\] For the remaining term, we write \[\frac{d\widetilde{\psi}}{\widetilde{\psi}}-\frac{d\psi}{\psi}=\frac{\psi d\widetilde{\psi}-\widetilde{\psi}d\psi}{\psi\widetilde{\psi}}=\frac{\psi d(\widetilde{\psi}-\psi)+(\psi-\widetilde{\psi})d\psi}{\psi\widetilde{\psi}}\] and \[\frac{Z(\widetilde{\psi})}{\widetilde{\psi}}-\frac{Z(\psi)}{\psi}=\frac{\psi Z(\widetilde{\psi})-\widetilde{\psi}Z(\psi)}{\psi\widetilde{\psi}}=\frac{\psi Z(\widetilde{\psi}-\psi)+(\psi-\widetilde{\psi})Z(\psi)}{\psi\widetilde{\psi}}.\] Using (50), (51) and the relationship between the constants in (39) we have \[\left\|\frac{d\widetilde{\psi}}{\widetilde{\psi}}-\frac{d\psi}{\psi}\right\|\leq 2\|V(\rho\circ f^{-T})\|_{C^{0}}+\|\rho\circ f^{-T}\|_{C^{0}}\|\psi\|_{\mathfrak{D}}\leq\tau \tag{63}\] and \[\left\|\frac{Z(\widetilde{\psi})}{\widetilde{\psi}}-\frac{Z(\psi)}{\psi}\right\|\leq 2\|Z(\rho\circ f^{-T})\|_{C^{0}}+\|\rho\circ f^{-T}\|_{C^{0}}\|\psi\|_{\mathfrak{D}}\leq\tau. \tag{64}\] Substituting (60), (61), (62), (63) and (64) into (59) gives the desired estimate. 

Proof of Proposition 25.: The estimate of \(\|\psi-\widetilde{\psi}\|_{\mathfrak{D}}\) follows directly from (50), (51), (56) and the relationship between the constants in (39). To estimate \(\|\Theta(\widetilde{\psi})\|\), we will estimate \(|\Theta(\widetilde{\psi})(v_{i})|\) where \(\{v_{1},\cdots,v_{m}\}\) is the basis given by Corollary 1.
Using that \(X(\rho\circ f^{-T})(f^{T}q)=\Phi(f^{T}q)\) for \(q\in\mathcal{S}^{(\tau/2)}\) and the fact that \(\alpha(v_{i})=1\), we have \[\Theta(\widetilde{\psi})(v_{i}) =X(\psi)+\Phi+\overline{\Theta}(\widetilde{\psi})(v_{i})\] \[=\Theta(\psi)(V)+\Phi+\overline{\Theta}(\widetilde{\psi})(v_{i})-\overline{\Theta}(\psi)(V)\] \[=\Theta(\psi)(V)+\Phi+\overline{\Theta}(\widetilde{\psi})(v_{i})-\overline{\Theta}(\psi)(v_{i})+\overline{\Theta}(\psi)(v_{i}-V).\] Then, using (35), (57) and Corollary 1, we have \[\left|\Theta(\widetilde{\psi})(v_{i})\right|\leq 2C_{9}\varepsilon. \tag{65}\] Using Corollary 1 once again we have \[\|\Theta(\widetilde{\psi})\|\leq 2C_{9}\delta^{-1}\varepsilon\max_{i}\|v_{i}\|\leq 2C_{9}\delta^{-2}\varepsilon,\] which gives the estimate of the proposition.
2309.05391
Career Path Recommendations for Long-term Income Maximization: A Reinforcement Learning Approach
This study explores the potential of reinforcement learning algorithms to enhance career planning processes. Leveraging data from Randstad The Netherlands, the study simulates the Dutch job market and develops strategies to optimize employees' long-term income. By formulating career planning as a Markov Decision Process (MDP) and utilizing machine learning algorithms such as Sarsa, Q-Learning, and A2C, we learn optimal policies that recommend career paths with high-income occupations and industries. The results demonstrate significant improvements in employees' income trajectories, with RL models, particularly Q-Learning and Sarsa, achieving an average increase of 5% compared to observed career paths. The study acknowledges limitations, including narrow job filtering, simplifications in the environment formulation, and assumptions regarding employment continuity and zero application costs. Future research can explore additional objectives beyond income optimization and address these limitations to further enhance career planning processes.
Spyros Avlonitis, Dor Lavi, Masoud Mansoury, David Graus
2023-09-11T11:42:28Z
http://arxiv.org/abs/2309.05391v1
# Career Path Recommendations for Long-term Income Maximization: A Reinforcement Learning Approach ###### Abstract This study explores the potential of reinforcement learning algorithms to enhance career planning processes. Leveraging data from Randstad The Netherlands, the study simulates the Dutch job market and develops strategies to optimize employees' long-term income. By formulating career planning as a Markov Decision Process (MDP) and utilizing machine learning algorithms such as Sarsa, Q-Learning, and A2C, we learn optimal policies that recommend career paths with high-income occupations and industries. The results demonstrate significant improvements in employees' income trajectories, with RL models, particularly Q-Learning and Sarsa, achieving an average increase of 5% compared to observed career paths. The study acknowledges limitations, including narrow job filtering, simplifications in the environment formulation, and assumptions regarding employment continuity and zero application costs. Future research can explore additional objectives beyond income optimization and address these limitations to further enhance career planning processes. Keywords: Career Path Recommendation, Machine Learning, Career Planning, Income Optimization, Employee Development, Markov Decision Process (MDP), Reinforcement Learning (RL) ### Reinforcement Learning Reinforcement Learning (RL), as characterized by Sutton and Barto [1], is a decision-making paradigm adept at handling tasks with potentially delayed outcomes, such as career planning. Unlike other learning strategies, RL operates on trial-and-error, aiming to optimize a specific metric without any direct instruction. It involves an agent navigating an environment to maximize a cumulative reward over time. The RL system incorporates six primary elements: the **Agent** that interacts with the environment based on its policy, the **Environment** providing feedback, the **State** and **Action** representing the environment and the choices available, the **Reward** as numerical feedback, and the **Policy** directing the agent's actions. #### 2.1.1 Markov Decision Processes The application of RL to career planning necessitates formulating the problem as a Markov Decision Process (MDP), as suggested by Puterman [2]. This enables us to leverage established RL research and precise theoretical results. MDPs formalize sequential decision-making where actions influence not only immediate rewards but also future states, and by extension, future rewards. The inherent Markov property in an MDP posits that the transition probabilities to a new state depend solely on the current state and action. ### Recommender Systems in Human Resources Historically, research into workforce mobility and career development has utilized traditional data sources such as surveys and censuses, as noted by Topel and Ward [3] and Long and Ferrie [4]. However, the rise of Online Professional Networks (OPN) has allowed for the employment of data-driven machine learning methods. The focus has increasingly shifted towards modelling career paths to predict mobility and aid in career development. This has proved valuable for both employers and employees, facilitating strategic decision-making in hiring and career progression. Several studies have taken varied approaches to this issue. For instance, Paparrizos et al. [5] employed a naive Bayes model to predict job transitions, while Wang et al.
[6] used a proportional hazards model to estimate when employees might decide to change jobs. Further, Liu et al. [7] explored career path prediction using social network data, while Li et al. [8] introduced the NEMO model for predicting future company and job titles using Long Short-Term Memory (LSTM) networks. The advent of more complex models has also been witnessed. Meng et al. [9] used a hierarchical neural network with an embedded attention mechanism, and Xu et al. [10] performed a talent flow analysis for predicting the increments in a dynamic job transition network. Other models, like the one proposed by Liu and Tan [11], utilized logistic regression to predict career choices, while Al-Dossari et al. [12] proposed a recommendation system for IT graduates based on skill similarity. A separate line of research rejects the notion that frequently observed paths are necessarily the most beneficial. Lou et al. [13] recommended the shortest career path using a Markov Chain model, whereas Oentaryo et al. [14] focused on achieving the best payoff trade-off in career path planning. Shahbazi et al. [15] optimized towards the career development of employees rather than productivity. Other approaches have included the use of skill graphs for transition pathway recommendations as demonstrated by Gugnani et al. [16] and Dawson et al. [17], and the use of reinforcement learning for dynamic career path recommendations as presented by Kokkodis and Ipeirotis [18]. Most recently, Guo et al. [19] proposed a reinforcement learning variant for optimizing career paths. The research presented in this paper is similar to previous work such as that of Oentaryo et al. [14], Kokkodis and Ipeirotis [18], and Guo et al. [19]. Unlike Kokkodis and Ipeirotis [18], which studied online freelancers and projects, this paper focuses on long-term employment relationships. In contrast to the work of Oentaryo et al. [14] and Guo et al. [19], which do not incorporate monetary rewards, the focus here is to chart the optimal path for the highest long-term income. Also, where Guo et al. [19] posits any transition between jobs as possible, this study takes a more realistic approach and models transitions as a stochastic process learned from the data. Oentaryo et al. [14] also model transitions as a stochastic process but assume it to be memoryless, making a person's next job dependent only on their current job. In contrast, this study introduces two settings: a _naive setting_ that makes the same assumption, and a _standard setting_ that leverages employees' past experiences to predict their next career move. Figure 1: The agent-environment interaction in a Markov decision process. ## 3 The Proposed Career Path Recommendation Model We consider the problem of recommending a sequence of jobs -- a career path -- to the candidates that, if followed, would maximize their earnings during their foreseeable future. Formally, given \(C=\{c_{1},...,c_{n}\}\) as \(n\) candidates and \(J=\{j_{1},...,j_{m}\}\) as \(m\) jobs, we define \(R_{c}\) as the recommended career path generated for candidate \(c\). We denote \(W_{c}=\{w_{c,1},...,w_{c,k}\}\) as the work experience of candidate \(c\), \(k\) different jobs that \(c\) worked in the past with \(w_{c,1}\) being her first job and \(w_{c,k}\) being her last (current) job. Each work experience contains information about the period (start date and end date) and the area (or role) that the candidate worked on in that job. The work area represents a high-level categorization for the jobs. 
In our experiments, we define it as a combination of an occupation and an industry. Examples are a Data Science role in Insurance or a Data Science role in Banking. Refer to Section 4.1 for more details. We also denote \(\mathit{App}_{c}=\{app_{c,j_{1}},...,app_{c,j_{l}}\}\) as \(l\) jobs that candidate \(c\) applied for in the past in which their outcomes are either hired or rejected. Finally, we denote \(V\) as all the vacancies posted on the market. Given these three input data (work experience \(W\), job applications \(\mathit{App}\), and vacancies \(V\)), our career path recommendation model comprises four distinct modules, as depicted in Figure 2. The first three modules--**Plausible Jobs**, **Transitions**, and **Rewards**--simulate the job market environment. The fourth module employs Reinforcement Learning (**RL**) to learn an optimal strategy for navigating these environments. **Plausible Jobs Module:** An employee's state at any given time is characterized by their current job, which is defined as a combination of occupation and industry, along with their work history. This concept forms the basis of the state space defined by this module, which comprises the set of available jobs and industries the agent can occupy. However, due to the constraints imposed by the dataset and to ensure a computationally feasible environment, our experiments are restricted to the 142 most prevalent jobs present in our dataset. **Transition Module:** Job applications do not always have deterministic outcomes; similarly, the actions taken by the agent should not always have deterministic outcomes. When the agent applies for a job and succeeds in being hired, it transitions from the current state \(\mathbf{s}\) to a new state \(\mathbf{s}^{\prime}\). If unsuccessful, it remains in the current state \(\mathbf{s}\). This transition occurs with probability \(P(\mathbf{s}^{\prime}|\mathbf{a},\mathbf{s})\). A Random Forest binary classifier is trained on the Job Application data and is used to predict the aforementioned probabilities. In other words, this module computes the transition probability between different jobs within the environment. We consider the following approaches for computing the transition probabilities: * **Last Job State Representation:** We assume that state \(\mathbf{s}^{\prime}\) only contains information about the last job of a person, implying that the probability of being hired depends solely on their latest job. * **Full History State Representation:** Conversely, in the alternative approach, we assume that the state contains information about a person's entire work history. This second approach is closer to reality, but it also greatly increases the size and complexity of the state space, which could make learning more challenging and could potentially suffer from a lack of data. Figure 2: The architecture of the proposed career path recommendation system. **Reward Module:** After each transition, this module is used to compute the reward earned from that transition. We define the reward in the form of the estimated salary that the individual earns after the transition. We use a Random Forest regressor trained on \(V\) (i.e. all vacancies in the market) to predict the salary corresponding to each job. Given that each \(v\in V\) consists of a textual job description and annual salary information, for our experiments we perform this prediction on a yearly basis for each pair of job role and industry.
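To make the simulation modules concrete, the following is a minimal sketch of how the Transitions and Rewards modules could be wired together with scikit-learn. The one-hot feature encoding, toy training rows, function names and hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

N_JOBS = 142  # the 142 plausible (occupation, industry) pairs

def encode(state_job, action_job):
    """One-hot features for (current job, applied-for job); the authors'
    real features would be richer (e.g. full history, durations)."""
    x = np.zeros(2 * N_JOBS)
    x[state_job] = 1.0
    x[N_JOBS + action_job] = 1.0
    return x

# Transitions module: P(hired | state, action), trained on past applications
# (toy rows below; label 1 = hired, 0 = rejected).
X_app = np.array([encode(0, 1), encode(1, 2), encode(2, 0), encode(0, 2)])
y_app = np.array([1, 0, 1, 0])
transition_clf = RandomForestClassifier(n_estimators=100).fit(X_app, y_app)

# Rewards module: expected annual salary per job, trained on vacancy postings.
X_vac = np.eye(N_JOBS)[[0, 1, 2]]              # one-hot job identifiers
y_vac = np.array([38_000.0, 42_000.0, 55_000.0])
salary_reg = RandomForestRegressor(n_estimators=100).fit(X_vac, y_vac)

def step(state_job, action_job, rng):
    """One environment transition: the agent applies for action_job."""
    p_hired = transition_clf.predict_proba(encode(state_job, action_job)[None, :])[0, 1]
    next_job = action_job if rng.random() < p_hired else state_job  # stay if rejected
    reward = salary_reg.predict(np.eye(N_JOBS)[[next_job]])[0]      # salary as reward
    return next_job, reward

next_job, reward = step(0, 1, np.random.default_rng(0))
```

An RL agent can then interact with `step` exactly as in the agent-environment loop of Figure 1.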
**Reinforcement Learning Module:** Lastly, the RL module uses RL algorithms to learn policies that can yield optimal rewards. After training, these models can be used to recommend high-income-yielding career paths to employees. We experiment with and compare multiple algorithms during the training of the RL module. The details of the algorithms are described in Section 4.3. ## 4 Methodology ### Datasets We conducted our experiments on anonymized data provided by Randstad as follows: **Work Experience Dataset:** This tabular dataset consists of work experience items that employees may submit to Randstad either online or through consultants, or are directly taken from the administration of job placements made through Randstad. Relevant attributes for this research include: 1) Employee ID, 2) Job start and end dates, 3) ISCO code1 (occupation identifier), and 4) SBI code2 (industry identifier). Footnote 1: ISCO Wikipedia page: [https://en.wikipedia.org/wiki/International_Standard_Classification_of_Occupations](https://en.wikipedia.org/wiki/International_Standard_Classification_of_Occupations) Almost all the work experience items (99.99%) pertain to Randstad placements, as these are jobs employees secured through Randstad. Most of the previous experiences (before using Randstad's services) are missing essential attributes. **Vacancies Dataset:** This dataset includes salary ranges for about six million vacancies, including their ISCO and SBI codes, posted on various Dutch websites. We use this dataset to estimate expected salaries for each occupation. **Job Applications Dataset:** This dataset contains information on job applications made by candidates to Randstad's vacancies, with the outcome of each application (hired or rejected) also available. ### Data Preprocessing During preprocessing, we filtered out employees with missing data, jobs with durations less than a week, and employees with more than fifty work experience items from the work experience dataset, yielding 200K employees with 400K work experience items. In line with Randstad's business model, most placements are short-term or temporary jobs common in staffing, resulting in a mean job duration of 161 days and a median duration of 95 days. The average annual salary in the vacancies dataset is approximately 42K euros, with a median salary of 38K euros. ### Reinforcement Learning Algorithms Our experiments employ various Reinforcement Learning (RL) algorithms, which are primarily categorized into tabular methods and approximate RL methods. #### 4.3.1 Tabular Methods Tabular methods are a class of RL algorithms that work well with a discrete, small state-action space. They maintain a table of values, with each entry in the table representing the value of each possible state-action pair. **State-action-reward-state-action (Sarsa):** Introduced by Rummery et al. [20], Sarsa is an on-policy, tabular, temporal difference (TD) method. TD learning, which is a hybrid of Monte Carlo and dynamic-programming ideas, can learn directly from raw experience without a model of the environment's dynamics. Like dynamic programming, TD methods update estimates based in part on other learned estimates, without waiting for a final outcome (they bootstrap). The _Sarsa_ algorithm aims to learn an action-value function \(q_{\pi}(s,a)\), providing the expected reward starting from state \(s\), taking action \(a\), and following the policy \(\pi\). **Q-Learning:** Introduced by Watkins and Dayan [21], Q-Learning is another tabular TD method.
However, Q-Learning is an off-policy method, where the learned action-value function, Q, directly approximates \(q_{*}\), the optimal action-value function, regardless of the policy followed (behavior policy). #### 4.3.2 Approximate RL Methods While tabular methods perform well in environments with a small number of state-action pairs, they face challenges when the state-action space becomes large or continuous. They are not able to efficiently store the value of every possible state-action pair, nor can they generalize the value of unvisited state-action pairs effectively. This is where approximate RL methods come in to help with the Full History State Representation. These methods use function approximation, typically employing neural networks, to estimate the value of state-action pairs, allowing them to handle environments with larger or more complex state spaces more effectively. **Deep Q-Learning (DQN):** DQN is an off-policy approach introduced by Mnih et al. [22]. DQN is the first successful deep-learning model to learn control policies directly from high-dimensional sensory input using reinforcement learning. It utilizes a convolutional neural network trained with a variant of Q-learning, taking raw pixels as input and estimating future rewards through a value function. **Advantage Actor-Critic (A2C):** A2C is an approximate solution RL method that utilizes deep reinforcement learning for function approximation. Unlike DQN, A2C is an on-policy method. Introduced by Mnih et al. [23], A2C is an actor-critic method, where the policy function is represented independently of the value function. The "critic" model estimates the value function and the "actor" learns the target policy. Both the Critic and Actor functions are parameterized with neural networks. As explained by Mnih et al. [23], the main advantage of A2C over DQN is its faster training speed. ### Baselines Besides the above RL algorithms, we also perform experiments using two naive action selection approaches as baselines: **Greedy Most Common Transition:** In this approach, the agent always applies for the job with the highest transition probability, that is, the job it is most likely to be hired for. In the case of multiple jobs with the same ranking, a random selection is made. **Greedy Highest Expected Reward:** In this strategy, the agent applies for the job with the maximum expected reward, defined as the product of the transition probability and the immediate salary after the transition. In reality, this signifies the job with the highest likelihood of both being attained and yielding the highest immediate income. As before, in the case of multiple top-ranking jobs, a random selection is made. ### Evaluation Metrics We assess the effectiveness of our methods based on the income difference between _observed career paths (factuals)_ and _recommended career paths (counterfactuals)_. **Observed Career Paths:** Using the Work Experience dataset, we generate a list of observed career paths and their corresponding income. Given that workers can hold multiple jobs simultaneously or have periods of unemployment, the dataset requires processing to align with the requirements of our simplified environment. Our models assume people only have one job at a time and there is no unemployment. Therefore, in cases of simultaneous employment, we estimate each job's monthly salary and assume the worker earned the mean salary. For periods of unemployment, we consider the salary from the worker's last job to be ongoing.
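The averaging and carry-forward rules just described reduce to a few lines of code. The sketch below assumes a simple record format (job, monthly salary, start/end month) invented for illustration; it is not the authors' actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    job: str
    monthly_salary: float
    start: int   # month index, inclusive
    end: int     # month index, exclusive

def factual_income(experiences, horizon):
    """Accumulated income over `horizon` months under the paper's assumptions:
    simultaneous jobs contribute their mean monthly salary, and gaps carry
    forward the salary of the last held job."""
    total, last_salary = 0.0, 0.0
    for m in range(horizon):
        active = [e.monthly_salary for e in experiences if e.start <= m < e.end]
        if active:                      # one or more concurrent jobs
            last_salary = sum(active) / len(active)
        total += last_salary            # unemployment: last salary ongoing
    return total

# Toy example: two overlapping placements followed by a six-month gap.
career = [Experience("nurse", 2800, 0, 12), Experience("care aide", 2400, 6, 12)]
print(factual_income(career, horizon=18))   # 6*2800 + 12*2600 = 48000
```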
**Counterfactual Career Paths:** After training each RL method, we sample observed career paths to generate their counterfactuals. These are the paths each model recommends, starting from the observed path's initial job, and lasting the same duration. **Reported Metrics:** For each model under consideration, we report two primary quantities - the _Mean Factual_ and _Mean Counterfactual_ accumulated rewards. These metrics represent the mean income accumulated by employees in reality versus the projected income they would have earned in a counterfactual scenario, respectively. For an employee \(e\), their factual income, denoted as \(FI\), over their career of \(M\) months is calculated as \[FI(e)=\sum_{m=1}^{M}I(J_{e},m) \tag{1}\] where \(I(J_{e},m)\) is a function that returns the salary that the employee \(e\) had earned by performing job \(J\) during the month \(m\). Similarly, the counterfactual income is calculated as \[CFI(e)=\sum_{m=1}^{M}I(J^{\prime}_{e},m) \tag{2}\] where \(J^{\prime}\) is the job that employee \(e\) would have performed if she had followed the recommendations of our system. Finally, the mean of these quantities is calculated over a sample of 20,000 observed (factual) and generated (counterfactual) career paths. Following this, we present the _Change_ %, illustrating the percentage change between the factual and counterfactual means. To determine the statistical significance of the observed difference, we calculate a _p-value_ using a two-sided permutation test with an alpha level of 0.05. Furthermore, we detail the proportion of employees experiencing an income rise in the counterfactual world, referred to as _Gainers_, along with the average magnitude of their income change. Similarly, we present data for those experiencing a decline, termed as _Losers_, including the mean change in their income. ### Experimental Results This subsection presents a detailed analysis of the experiment results. We juxtapose our factual and counterfactual career paths in terms of the mean income they generate. Additionally, we assess the effectiveness of our models by examining the percentage of gainers and losers as well as the magnitude of their respective income changes. Table 1 presents the results for the Last Job State Representation, where the job seekers' state depends only on the last held job. From this table, we can observe that the baselines do not perform significantly differently than the factual career paths, with differences under 1% (at -0.7% and -0.94% for the Most Common and Highest Expected Reward baselines, respectively). However, the Q-Learning and Sarsa models perform well, with a notable percentage of income gainers (27.53% and 32.84%, respectively) and a reasonable mean gain percentage of 13.81% and 11.5%, respectively. Table 2 exhibits the outcomes for the Full History State Representation, where the state of a job seeker contains their full work history. The Highest Expected Reward baseline model stands out with a significant mean income change (79.18%) and a large percentage of gainers (96.01%). As we will discuss later, this is caused by Transitions module biases. Deep Q-Learning and A2C also show promising results but fail to outperform the baselines. ## 5 Results and Discussion In this section, we delve into the outcomes derived from our experimental setup featuring two unique versions of the transitions module - the Last Job State Representation and the Full History State Representation. We start our discussion with findings from the two baseline methods outlined in Section 4.6.
Subsequently, we elaborate on the results achieved through our efforts to learn an efficient policy. ### Implications of the Findings **Last Job State Representation:** In the Last Job State Representation, our RL approaches learned policies that resulted in career paths with higher incomes than the observed career paths. In both the last job state representation and full history state representation, the learned policies improved the mean accumulated income by around \(5\%\). While not a drastic increase, this change is significant over longer time scales, such as an individual's career. Notably, these improvements surpassed those of the baseline models. However, there was also a significant share, approximately 12%, of agents for which the recommended paths performed worse than the observed ones. **Full History State Representation:** However, the Full History State Representation demonstrated a different pattern. While the DQN and A2C models also found policies improving counterfactual incomes, the baselines showed significantly larger improvements, particularly the _Highest Expected Reward_ baseline. \begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline Model & Mean \(FI\) (€) & Mean \(CFI\) (€) & Change \% & p-value & Gainers \% & Mean Gain \% & Losers \% & Mean Loss \% \\ \hline Baseline: Most Common & 90,283.42 & 89,644.81 & -0.7 & 0.69 & 8.85 & 8.17 & 11.62 & -8.40 \\ \hline Baseline: Highest Exp. Reward & 90,283.42 & 89,434.75 & -0.94 & 0.59 & 8.39 & 7.50 & 12.52 & -8.48 \\ \hline Q-Learning & 90,283.42 & 95,077.13 & **5.3** & 0.01 & 27.53 & 13.81 & 12.56 & -7.63 \\ \hline Sarsa & 90,283.42 & 94,836.08 & **5.04** & 0.01 & 32.84 & 11.50 & 10.95 & -7.46 \\ \hline \hline \end{tabular} \end{table} Table 1: Last Job State Representation: Factual vs Counterfactual career paths. Metrics described in Section 4.5. \begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline Model & Mean \(FI\) (€) & Mean \(CFI\) (€) & Change \% & p-value & Gainers \% & Mean Gain \% & Losers \% & Mean Loss \% \\ \hline Baseline: Most Common & 90,386.16 & 95,871.78 & **6.18** & 0.02 & 71.07 & 17.27 & 25.15 & -13.39 \\ \hline Baseline: Highest Exp. Reward & 90,386.16 & 161,774.45 & **79.18** & 0.00 & 96.01 & 80.37 & 0.04 & -11.44 \\ \hline Deep Q-Learning & 90,283.42 & 94,547.94 & **4.7** & 0.01 & 67.91 & 16.95 & 27.87 & -13.71 \\ \hline A2C & 90,283.42 & 95,616.29 & **5.9** & 0.00 & 70.82 & 17.22 & 25.35 & -13.64 \\ \hline \hline \end{tabular} \end{table} Table 2: Full History State Representation: Factual vs Counterfactual career paths. Metrics described in Section 4.5. This raises questions about the validity of the environment. After careful investigation, we found out that this environment can be easily exploited by the Highest Expected Reward baseline due to the small-but-substantial transition probabilities predicted by the Transitions module. As we can see in Figure 3, regardless of the agent's starting state, it always applies for, and eventually succeeds in being hired for, the highest-paying job in our dataset. By looking deeper into the classifier trained to predict the transition probabilities of the Full History State Representation, we see that regardless of the agents' prior experience there is always a small but significant probability of employment. After analysis of the training data, we believe that this is due to missing data in the prior experiences of employees hired in senior positions.
Therefore, the training dataset is incomplete and depicts a world where someone can be hired, for example, as a Senior Finance professional with no experience in Finance. **Comparison Between Environments:** It is important to note that the results from the Last Job State Representation and Full History State Representation cannot be directly compared. Each model learns a policy to exploit the unique dynamics of the environment it is trained on; therefore, the ground truth differs for each environment. As such, results should be compared within the specific environment they originated from. ### Limitations of the Study **Job Filtering:** The experiments relied on a narrowed field of the 142 most common jobs. The decision to do so was driven by the challenges posed by the vast state space and unreliable transition probability prediction for less common jobs. **The Cost of an Action:** The research assumes no monetary cost for applying to a job, which is typically not the case in reality, where applications cost both time in interviewing or preparing, and perhaps other forms of preparation (e.g., studying). Considering the real-world costs in future studies could bring the environment formulation closer to reality. Figure 3: **Full History State Representation - Greedy Highest Expected Reward Baseline: Starting and final distributions for the 12 most common functions and industries. The data were generated by running 1000 episodes of 40 time steps (10 years) each.** **Continuous Employment:** The assumption that individuals are always working doesn't account for potential breaks in employment. These breaks could result from various factors, including vacations, relocation, or further education, and should be considered in a more realistic formulation of the job market as an MDP. **Rivalrous Market:** Another notable limitation is that the real-world job market is inherently rivalrous -- a job offered to one applicant becomes unavailable to others. Recommending the most highly paying jobs to all users could potentially lead to an overwhelming influx of applications for those positions, resulting in many disappointed users due to the increased competition. Our approach focuses on income optimization, but it's crucial to recognize that a well-balanced approach considering job availability, individual preferences, and market dynamics is necessary to avoid an undue concentration of applications in specific roles. Future research should delve into strategies that account for these challenges while still aiming for income optimization to create a more comprehensive and realistic career planning model. **State Space and the Markov Property:** The models used in this research made different assumptions about the state space and respected the Markov property in different ways. The Last Job State Representation model simplified the state to be a job, assuming an employee's future depends solely on their last job. On the other hand, the Full History State Representation model considered employees' whole work experience as part of the state. The latter approach is closer to reality but can create states with too many dimensions, slowing policy learning. An option suggested in the literature for similar challenges is learning low-dimensional embeddings and reducing the state space size. ## 6 Conclusion In conclusion, this research explored the use of artificial intelligence, specifically reinforcement learning, in the field of career planning.
By harnessing data on employee work experience and job applications, the research aimed to recommend career paths that maximize long-term income for individuals. The findings of this study showed promising results in both the Last Job State Representation and Full History State Representation approaches. The reinforcement learning models, particularly Q-Learning and Sarsa, were able to learn policies that improved the counterfactual incomes of individuals. In the Last Job State Representation, the mean accumulated income increased by around 5%, surpassing the performance of the baseline models. However, it is important to acknowledge the limitations and failures of the Full History State Representation. The baseline models exhibited greater improvements in counterfactual incomes compared to the reinforcement learning models. This discrepancy was due to inaccuracies in the transition probability predictions, which allowed the Highest Expected Reward baseline to exploit the system. These limitations indicate the need for further research to refine and improve the Full History State Representation. Future studies could explore alternative methods for estimating transition probabilities and address the issue of missing prior experience data. By addressing these challenges, future research can work towards creating more robust and accurate environments that better reflect the complexities of real-world career planning.
2303.17916
Granger Causality Detection via Sequential Hypothesis Testing
Most of the metrics used for detecting a causal relationship among multiple time series ignore the effects of practical measurement impairments, such as finite sample effects, undersampling and measurement noise. It has been shown that these effects significantly impair the performance of the underlying causality test. In this paper, we consider the problem of sequentially detecting the causal relationship between two time series while accounting for these measurement impairments. In this context, we first formulate the problem of Granger causality detection as a binary hypothesis test using the norm of the estimates of the vector auto-regressive~(VAR) coefficients of the two time series as the test statistic. Following this, we investigate sequential estimation of these coefficients and formulate a sequential test for detecting the causal relationship between two time series. Finally via detailed simulations, we validate our derived results, and evaluate the performance of the proposed causality detectors.
Rahul Devendra, Ribhu Chopra, Kumar Appaiah
2023-03-31T09:24:08Z
http://arxiv.org/abs/2303.17916v1
# Granger Causality Detection via Sequential Hypothesis Testing ###### Abstract Most of the metrics used for detecting a causal relationship among multiple time series ignore the effects of practical measurement impairments, such as finite sample effects, undersampling and measurement noise. It has been shown that these effects significantly impair the performance of the underlying causality test. In this paper, we consider the problem of sequentially detecting the causal relationship between two time series while accounting for these measurement impairments. In this context, we first formulate the problem of Granger causality detection as a binary hypothesis test using the norm of the estimates of the vector auto-regressive (VAR) coefficients of the two time series as the test statistic. Following this, we investigate sequential estimation of these coefficients and formulate a sequential test for detecting the causal relationship between two time series. Finally, via detailed simulations, we validate our derived results, and evaluate the performance of the proposed causality detectors. Causality Detection, Granger Causality, Sequential Detection, Hypothesis testing ## I Introduction The problem of detecting causal relationships between two or more time series is an important research problem across various scientific disciplines [1, 2], including, but not limited to, neuroscience [3, 4, 5, 6, 7, 8, 9, 10], physics [11, 12], climate science [13], econometrics [14], etc. It has been shown that if the time series of interest are Gaussian, then the underlying causative relationship can be inferred using the Granger Causality Index (GCI) [2, 15]. The GCI is the logarithm of the ratio of the prediction error variances of the caused time series with and without factoring in the effects of the causing time series. The GCI has been used extensively for causality detection across multiple research areas, including those listed above. Most of the present works dealing with causative relationships among time series assume exact knowledge of the second order statistics during the detection process. However, in making this assumption, these works ignore the effects of sampling noise and the finite sample effects due to the limited number of samples being used for estimating these statistics. Consequently, rather than the deterministic second order statistics, as assumed previously, the causality detector has to work with finite sample estimates of these quantities. The authors in [16] have addressed this problem by treating the available estimates of the second order statistics of the time series as random variables, and using these to derive the performance of the GCI in the presence of noise and finite sample effects. These performances have been derived in terms of the probabilities of detection and false alarm. The authors also derive two alternative analytically tractable test statistics in [16] and show that these result in a performance comparable to the GCI. However, these test statistics, while incorporating the finite sample effects, require the use of all the samples for calculating them. Hence, these test statistics are unsuitable in a real time setting where the samples from the time series may arrive sequentially. Therefore, in this paper we propose a novel sequential test for detecting causal relationships among time series. Our key contributions in this direction are listed as follows.
1. We propose to use the estimated vector autoregressive (VAR) coefficients of the two time series as a test statistic for detecting the presence of a causative relationship between the two time series. We then analyze the performance of this test statistic in terms of the probabilities of detection and false alarm as a function of the number of samples of the time series. 2. We then use the recursive least squares (RLS) algorithm to develop a sequential test for causality detection among the two time series. 3. Via detailed simulations on artificially generated and real life data, we analyze the performance of the proposed VAR coefficient based test statistic. Following this, we evaluate the performance of the proposed sequential test and prescribe parameters for its optimal performance. The results developed in this paper can potentially be used to develop a real time test for detecting causal relationships among noisy time series. We next describe the system model considered in this work. ## II System Model We consider the noisy measurements of two discrete time zero mean Gaussian random processes \(u[n]\) and \(v[n]\) having variances \(\sigma_{u}^{2}\) and \(\sigma_{v}^{2}\), respectively. We can jointly express \(u[n]\) and \(v[n]\) in the form of a bi-variate AR-\(K\) model as \[u[n]=\sum_{k=1}^{K}a_{uu,k}^{*}u[n-k]+\sum_{k=1}^{K}a_{uv,k}^{*}v[n-k]+\eta_{u}[n]\] \[v[n]=\sum_{k=1}^{K}a_{vu,k}^{*}u[n-k]+\sum_{k=1}^{K}a_{vv,k}^{*}v[n-k]+\eta_{v}[n], \tag{1}\] with \(a_{ij,k}\), \((i,j)\in\{u,v\}\), being the regression coefficients, \((\cdot)^{*}\) representing complex conjugation, and \(\eta_{u}[n]\) and \(\eta_{v}[n]\) being the innovation components corresponding to \(u[n]\) and \(v[n]\), respectively. We assume that \(u[n]\) and \(v[n]\) are sampled at the Nyquist rate, and the temporally white innovation process vector \(\boldsymbol{\eta}[n]=\left[\eta_{u}[n]\,,\eta_{v}[n]\right]^{T}\) is distributed as \[\boldsymbol{\eta}[n]\sim\mathcal{CN}\left(0,\left[\begin{array}{cc}\sigma_{\eta_{u}}^{2}&0\\ 0&\sigma_{\eta_{v}}^{2}\end{array}\right]\right).\] For simplicity, we assume unidirectional coupling, i.e. \(a_{vu,k}=0\) for \(1\leq k\leq K\). Now letting \(\mathbf{u}_{K}[n]\triangleq[u[n],u[n-1],\ldots,u[n-K+1]]^{T}\), \(\mathbf{v}_{K}[n]\triangleq[v[n],v[n-1],\ldots,v[n-K+1]]^{T}\), \(\mathbf{a}_{ij}\triangleq[a_{ij,1},a_{ij,2},\ldots,a_{ij,K}]^{T}\), \((i,j)\in\{u,v\}\), we can write (1) as \[\left[\begin{array}{c}u[n]\\ v[n]\end{array}\right]=\left[\begin{array}{cc}\mathbf{a}_{uu}^{H}&\mathbf{a}_{uv}^{H}\\ \mathbf{0}_{K}^{H}&\mathbf{a}_{vv}^{H}\end{array}\right]\left[\begin{array}{c}\mathbf{u}_{K}[n-1]\\ \mathbf{v}_{K}[n-1]\end{array}\right]+\left[\begin{array}{c}\eta_{u}[n]\\ \eta_{v}[n]\end{array}\right], \tag{2}\] with \(\mathbf{0}_{K}\) being the \(K\) dimensional all zero vector. Defining \(r_{uu}[\tau]\triangleq E[u[n]u^{*}[n-\tau]]\), \(r_{uv}[\tau]\triangleq E[u[n]v^{*}[n-\tau]]\), \(r_{vu}[\tau]\triangleq E[v[n]u^{*}[n-\tau]]\) and \(r_{vv}[\tau]\triangleq E[v[n]v^{*}[n-\tau]]\), and letting \(\mathbf{r}_{uv,K}[\tau]=[r_{uv}[\tau],\ldots,r_{uv}[\tau-K+1]]^{T}\), etc., we can write \[r_{uu}[\tau]=\mathbf{a}_{uu}^{H}\mathbf{r}_{uu,K}[\tau-1]+\mathbf{a}_{uv}^{H}\mathbf{r}_{vu,K}[\tau-1]+\sigma_{\eta_{u}}^{2}\delta[\tau]. \tag{3}\] We can similarly write \[r_{uv}[\tau]=\mathbf{a}_{uu}^{H}\mathbf{r}_{uv,K}[\tau-1]+\mathbf{a}_{uv}^{H}\mathbf{r}_{vv,K}[\tau-1] \tag{4}\] and \[r_{vv}[\tau]=\mathbf{a}_{vv}^{H}\mathbf{r}_{vv,K}[\tau-1]+\sigma_{\eta_{v}}^{2}\delta[\tau]. \tag{5}\]
The above equations, along with the information that \(r_{uu}[0]=\sigma_{u}^{2}\) and \(r_{vv}[0]=\sigma_{v}^{2}\), can be used to compute \(r_{uu}[\tau]\), \(r_{uv}[\tau]\), and \(r_{vv}[\tau]\) for different values of \(\tau\). Defining \(\mathbf{R}_{uu,K}[\tau]\triangleq E\left[\mathbf{u}_{K}[n]\mathbf{u}_{K}^{H}[n-\tau]\right],\mathbf{R}_{uv,K}[\tau]\triangleq E\left[\mathbf{u}_{K}[n]\mathbf{v}_{K}^{H}[n-\tau]\right],\) and similarly, \(\mathbf{R}_{vu,K}[\tau]\) and \(\mathbf{R}_{vv,K}[\tau]\), it can be shown that [17] \[\left[\begin{array}{c}\mathbf{a}_{uu}\\ \mathbf{a}_{uv}\end{array}\right]=\left[\begin{array}{cc}\mathbf{R}_{uu,K}[0]&\mathbf{R}_{uv,K}[0]\\ \mathbf{R}_{vu,K}[0]&\mathbf{R}_{vv,K}[0]\end{array}\right]^{-1}\left[\begin{array}{c}\mathbf{r}_{uu,K}[1]\\ \mathbf{r}_{uv,K}[1]\end{array}\right]. \tag{6}\] It is evident from (2) that the causal dependence of \(u[n]\) on \(v[n]\) depends on the coefficient vector \(\mathbf{a}_{uv}\); that is, there exists a causal relationship between \(u[n]\) and \(v[n]\) only if \(\mathbf{a}_{uv}\neq\mathbf{0}_{K}\). Therefore, similar to the GCI, the \(\ell_{2}\) norm of \(\mathbf{a}_{uv}\) can be used as a test statistic for the presence of a causal relationship between \(u[n]\) and \(v[n]\), with \(\|\mathbf{a}_{uv}\|=0\) indicating the absence of a causal relationship and \(\|\mathbf{a}_{uv}\|\neq 0\) indicating its presence. ## III Effects of Sampling Impairments Having described the test statistic previously, in this section we will investigate the effects of sampling impairments, viz. additive noise and finite sample effects, and use those to derive the underlying probabilities of detection and false alarm. ### _Additive Noise_ Incorporating the effects of additive noise, we can define \(x[n]\) and \(y[n]\) as the noise corrupted versions of \(u[n]\) and \(v[n]\) respectively, such that \[x[n]=u[n]+\nu_{x}[n],\quad y[n]=v[n]+\nu_{y}[n] \tag{7}\] with \(\nu_{i}[n]\sim\mathcal{CN}(0,\sigma_{\nu_{i}}^{2})\), \(i\in\{x,y\}\). Hence, if a causal relationship exists between \(u[n]\) and \(v[n]\), then a causal relationship also exists between \(x[n]\) and \(y[n]\). We can therefore express \(x[n]\) as \[x[n]=\left[\mathbf{w}_{x}^{H}\,\mathbf{w}_{y}^{H}\right]\left[\begin{array}{c}\mathbf{x}_{K}[n-1]\\ \mathbf{y}_{K}[n-1]\end{array}\right]+\varphi[n], \tag{8}\] with \(\varphi[n]\) representing the zero mean prediction error for \(x[n]\) in terms of \(\mathbf{x}_{K}[n-1]\) and \(\mathbf{y}_{K}[n-1]\), and \(\mathbf{w}_{x}\) and \(\mathbf{w}_{y}\) being the regression weights that minimize the mean squared value of \(\varphi[n]\). It is easy to show that [17] \[\left[\begin{array}{c}\mathbf{w}_{x}\\ \mathbf{w}_{y}\end{array}\right]=\left[\begin{array}{cc}\mathbf{R}_{xx,K}[0]&\mathbf{R}_{xy,K}[0]\\ \mathbf{R}_{yx,K}[0]&\mathbf{R}_{yy,K}[0]\end{array}\right]^{-1}\left[\begin{array}{c}\mathbf{r}_{xx,K}[1]\\ \mathbf{r}_{xy,K}[1]\end{array}\right]. \tag{9}\] We can also use the results in [17] to show that the minimum mean squared value of \(\varphi[n]\) is given by \(\sigma_{\varphi}^{2}=\sigma_{\eta_{u}}^{2}+\sigma_{\nu_{x}}^{2}\). Now, if a causal relationship exists between \(x[n]\) and \(y[n]\), then \(\mathbf{w}_{y}\) will be a nonzero vector, otherwise it will be an all zero vector. Hence, the norm of \(\mathbf{w}_{y}\) can still be used as a test statistic for determining the existence of a causal relationship between two noisy time series. We next determine the effects of using estimated second order statistics on this test statistic.
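Before turning to finite-sample effects, a short illustration of this setup may help. The sketch below generates a real-valued instance of the bi-variate AR model (1) with unidirectional coupling (\(a_{vu,k}=0\)) for \(K=1\), and adds measurement noise as in (7); the coefficient values are placeholders chosen only for illustration (the paper's model is complex-valued).

```python
import numpy as np

def simulate(n_samples, a_uu=0.3, a_uv=0.4, a_vv=0.5,
             sigma_eta=1.0, sigma_nu=0.5, seed=0):
    """Bi-variate AR(1) with unidirectional coupling (a_vu = 0):
    u[n] = a_uu*u[n-1] + a_uv*v[n-1] + eta_u[n],  v[n] = a_vv*v[n-1] + eta_v[n].
    Returns the noisy observations x = u + nu_x and y = v + nu_y, as in (7)."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_samples)
    v = np.zeros(n_samples)
    for n in range(1, n_samples):
        v[n] = a_vv * v[n - 1] + sigma_eta * rng.standard_normal()
        u[n] = a_uu * u[n - 1] + a_uv * v[n - 1] + sigma_eta * rng.standard_normal()
    x = u + sigma_nu * rng.standard_normal(n_samples)
    y = v + sigma_nu * rng.standard_normal(n_samples)
    return x, y

x, y = simulate(1000)   # y Granger-causes x through the coefficient a_uv
```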
### _Finite Sample Effects_ We now consider the case where \(N\) samples each of \(x[n]\) and \(y[n]\) are observed, and are used to calculate the test statistics. We note that in the absence of the availability of exact second order statistics, we cannot use the minimum mean squared error (MMSE) estimator for \(\mathbf{w}_{x}\) and \(\mathbf{w}_{y}\), and will have to use the least squares (LS) estimator [17]. Defining the data matrix \(\mathbf{A}[N]\) as \[\mathbf{A}[N]=\left[\begin{array}{cccc}\mathbf{x}_{K}[N]&\ldots&\mathbf{x}_{K}[K+1]\\ \mathbf{y}_{K}[N]&\ldots&\mathbf{y}_{K}[K+1]\end{array}\right]^{H}=\left[\mathbf{A}_{x}[N]\ \ \mathbf{A}_{y}[N]\right], \tag{10}\] we can write the estimate of the auto-correlation matrix as \[\boldsymbol{\Phi}[N]=\frac{1}{N-K}\mathbf{A}^{H}[N]\mathbf{A}[N]=\frac{1}{N-K}\left[\begin{array}{cc}\mathbf{A}_{x}^{H}[N]\mathbf{A}_{x}[N]&\mathbf{A}_{x}^{H}[N]\mathbf{A}_{y}[N]\\ \mathbf{A}_{y}^{H}[N]\mathbf{A}_{x}[N]&\mathbf{A}_{y}^{H}[N]\mathbf{A}_{y}[N]\end{array}\right]=\left[\begin{array}{cc}\boldsymbol{\Phi}_{xx}[N]&\boldsymbol{\Phi}_{xy}[N]\\ \boldsymbol{\Phi}_{yx}[N]&\boldsymbol{\Phi}_{yy}[N]\end{array}\right] \tag{11}\] and the estimate of the cross-correlation vector as \[\boldsymbol{\psi}[N]=\frac{1}{N-K}\mathbf{A}^{H}[N]\mathbf{x}_{N-K}[N]=\frac{1}{N-K}\left[\begin{array}{c}\mathbf{A}_{x}^{H}[N]\\ \mathbf{A}_{y}^{H}[N]\end{array}\right]\mathbf{x}_{N-K}[N]=\left[\begin{array}{c}\boldsymbol{\psi}_{x}[N]\\ \boldsymbol{\psi}_{y}[N]\end{array}\right]. \tag{12}\] Consequently, we can write the \(N\) sample LS estimate of the weight vector as \[\hat{\mathbf{w}}[N]=\boldsymbol{\Phi}^{-1}[N]\boldsymbol{\psi}[N]=\left[\begin{array}{cc}\boldsymbol{\Phi}_{xx}[N]&\boldsymbol{\Phi}_{xy}[N]\\ \boldsymbol{\Phi}_{yx}[N]&\boldsymbol{\Phi}_{yy}[N]\end{array}\right]^{-1}\left[\begin{array}{c}\boldsymbol{\psi}_{x}[N]\\ \boldsymbol{\psi}_{y}[N]\end{array}\right]. \tag{13}\] Now, we can express \(\hat{\mathbf{w}}_{y}\) as \[\hat{\mathbf{w}}_{y}=\boldsymbol{\Gamma}_{1}[N]\boldsymbol{\psi}_{x}[N]+\boldsymbol{\Gamma}_{2}[N]\boldsymbol{\psi}_{y}[N], \tag{14}\] where \(\boldsymbol{\Gamma}_{1}[N]=\left(\boldsymbol{\Phi}_{xy}[N]-\boldsymbol{\Phi}_{xx}[N]\boldsymbol{\Phi}_{yx}^{-1}[N]\boldsymbol{\Phi}_{yy}[N]\right)^{-1}\) and \(\boldsymbol{\Gamma}_{2}[N]=\left(\boldsymbol{\Phi}_{yy}[N]-\boldsymbol{\Phi}_{yx}[N]\boldsymbol{\Phi}_{xx}^{-1}[N]\boldsymbol{\Phi}_{xy}[N]\right)^{-1}\).
Now, since \(\hat{\mathbf{w}}_{y}\) represents the LS estimate of \(\mathbf{w}_{y}\), it can be expressed as [17] \[\hat{\mathbf{w}}_{y}=\mathbf{w}_{y}+\tilde{\mathbf{w}}_{y}, \tag{15}\] where \(\tilde{\mathbf{w}}_{y}\) is the estimation error vector that can be shown to be distributed as \(\tilde{\mathbf{w}}_{y}\sim\mathcal{CN}\left(\mathbf{0}_{K},\frac{\sigma_{\varphi}^{2}}{N-K}\mathbf{\Sigma}\right)\), with \(\mathbf{0}_{K}\) being the all zero vector of length \(K\) and \[\mathbf{\Sigma}=\left[\bar{\mathbf{\Gamma}}_{1}\ \bar{\mathbf{\Gamma}}_{2}\right]\left[\begin{array}{cc}\mathbf{R}_{xx,K}[0]&\mathbf{R}_{xy,K}[0]\\ \mathbf{R}_{yx,K}[0]&\mathbf{R}_{yy,K}[0]\end{array}\right]\left[\begin{array}{c}\bar{\mathbf{\Gamma}}_{1}^{H}\\ \bar{\mathbf{\Gamma}}_{2}^{H}\end{array}\right], \tag{16}\] with \(\bar{\mathbf{\Gamma}}_{1}=\left(\mathbf{R}_{xy,N}[0]-\mathbf{R}_{xx,N}[0]\mathbf{R}_{yx,N}^{-1}[0]\mathbf{R}_{yy,N}[0]\right)^{-1}\) and \[\bar{\mathbf{\Gamma}}_{2}=\left(\mathbf{R}_{yy,N}[0]-\mathbf{R}_{yx,N}[0]\mathbf{R}_{xx,N}^{-1}[0]\mathbf{R}_{xy,N}[0]\right)^{-1}.\] Since the entries of \(\tilde{\mathbf{w}}_{y}\), and hence \(\hat{\mathbf{w}}_{y}\), are not independent and identically distributed (i.i.d.), it is not possible to determine the statistics of \(\|\hat{\mathbf{w}}_{y}\|\) in closed form. This makes the analysis of \(\|\hat{\mathbf{w}}_{y}\|\) as a test statistic for the causal relationship between \(x[n]\) and \(y[n]\) intractable. Defining \(\mathbf{v}=\sqrt{\frac{N-K}{\sigma_{\varphi}^{2}}}\mathbf{\Sigma}^{-\frac{1}{2}}\hat{\mathbf{w}}_{y}\), we obtain \[\mathbf{v}=\frac{1}{\sigma_{\varphi}}\sqrt{N-K}\mathbf{\Sigma}^{-\frac{1}{2}}\mathbf{w}_{y}+\tilde{\mathbf{v}}, \tag{17}\] with \(\tilde{\mathbf{v}}\sim\mathcal{CN}\left(\mathbf{0}_{K},\mathbf{I}_{K}\right)\), and hence \(\mathbf{v}\sim\mathcal{CN}\left(\sqrt{\frac{N-K}{\sigma_{\varphi}^{2}}}\mathbf{\Sigma}^{-\frac{1}{2}}\mathbf{w}_{y},\mathbf{I}_{K}\right)\). Since the entries of \(\mathbf{v}\) are i.i.d. Gaussian with unit variance, \(\|\mathbf{v}\|^{2}\) becomes a Chi-squared distributed random variable with \(2K\) degrees of freedom and a non-centrality parameter \(\kappa=\frac{N-K}{\sigma_{\varphi}^{2}}\mathbf{w}_{y}^{H}\mathbf{\Sigma}^{-1}\mathbf{w}_{y}\). Now, if a causal relationship does not exist between \(x[n]\) and \(y[n]\), then \(\kappa\) becomes zero, making \(\|\mathbf{v}\|^{2}\) a central Chi-squared rv. Consequently, if we denote the absence of a causal relationship by the null hypothesis, \(\mathcal{H}_{0}\), and its presence by the alternate hypothesis \(\mathcal{H}_{1}\), then we can write the binary hypothesis test for detecting a causal relationship between \(x[n]\) and \(y[n]\) in terms of the test statistic \(T_{N}=\|\mathbf{v}\|^{2}\) as \[T_{N}\sim\left\{\begin{array}{cc}\chi_{2K}^{2}(0)&\mathcal{H}_{0}\\ \chi_{2K}^{2}(\kappa)&\mathcal{H}_{1}\end{array}\right.. \tag{18}\] Therefore, for a detection threshold \(\lambda\), the probability of correctly detecting the causal relationship between \(x[n]\) and \(y[n]\) can be expressed as \[P_{d}=\Pr\{T_{N}>\lambda|\mathcal{H}_{1}\}=Q_{K}(\sqrt{\kappa},\sqrt{\lambda}), \tag{19}\] with \(Q_{K}(\cdot,\cdot)\) being the Marcum Q-function of order \(K\). Similarly, the probability of detecting a causal relationship between \(x[n]\) and \(y[n]\) when there is none becomes \[P_{fa}=\Pr\{T_{N}>\lambda|\mathcal{H}_{0}\}=Q_{K}(0,\sqrt{\lambda}). \tag{20}\]
Since the Marcum \(Q\)-function is an increasing function of its first argument, an increase in the number of samples, which results in an increased value of \(\kappa\), improves the detection performance, which is consistent with intuition. However, this test statistic still processes the samples from the two time series as a block, and is unsuitable for sequential detection of a causal relationship between \(x[n]\) and \(y[n]\). Therefore, in the next section, we first describe an adaptive technique for sequentially updating the test statistic, and then use that to develop a sequential test for causality between \(x[n]\) and \(y[n]\). ## IV The Sequential Test We note that the weight vector \(\hat{\mathbf{w}}\) can be iteratively updated as \(\hat{\mathbf{w}}[n]\) at the \(n\)th instant using the recursive least squares (RLS) algorithm. For this, we first set \(\hat{\mathbf{w}}[0]=\mathbf{0}_{2K}\), and \(\mathbf{P}[0]=\delta\mathbf{I}_{2K}\). Following this, at the \(n\)th instant, we can first calculate the prediction error in \(x[n]\) as \[\tilde{x}[n]=x[n]-[\hat{\mathbf{w}}_{x}^{H}[n-1]\ \hat{\mathbf{w}}_{y}^{H}[n-1]]\left[\begin{array}{c}\mathbf{x}_{K}[n-1]\\ \mathbf{y}_{K}[n-1]\end{array}\right], \tag{21}\] followed by an update of the sample covariance matrix as \[\boldsymbol{\Phi}[n]=\mu\boldsymbol{\Phi}[n-1]+\left[\begin{array}{c}\mathbf{x}_{K}[n-1]\\ \mathbf{y}_{K}[n-1]\end{array}\right]\left[\mathbf{x}_{K}^{H}[n-1]\quad\mathbf{y}_{K}^{H}[n-1]\right], \tag{22}\] with \(0<\mu\leq 1\) being the forgetting factor. We can similarly update the cross correlation vector as \[\boldsymbol{\psi}[n]=\mu\boldsymbol{\psi}[n-1]+\left[\begin{array}{c}\mathbf{x}_{K}[n-1]\\ \mathbf{y}_{K}[n-1]\end{array}\right]x^{*}[n]. \tag{23}\] This is followed by the calculation of the RLS gain as \[\mathbf{g}[n]=\frac{\boldsymbol{\Phi}^{-1}[n-1]\left[\begin{array}{c}\mathbf{x}_{K}[n-1]\\ \mathbf{y}_{K}[n-1]\end{array}\right]}{\mu+\left[\mathbf{x}_{K}^{H}[n-1]\quad\mathbf{y}_{K}^{H}[n-1]\right]\boldsymbol{\Phi}^{-1}[n-1]\left[\begin{array}{c}\mathbf{x}_{K}[n-1]\\ \mathbf{y}_{K}[n-1]\end{array}\right]} \tag{24}\] and the update of \(\boldsymbol{\Phi}^{-1}[n]\) as \[\boldsymbol{\Phi}^{-1}[n]=\mu^{-1}\boldsymbol{\Phi}^{-1}[n-1]-\mu^{-1}\mathbf{g}[n]\left[\mathbf{x}_{K}^{H}[n-1]\quad\mathbf{y}_{K}^{H}[n-1]\right]\boldsymbol{\Phi}^{-1}[n-1], \tag{25}\] finally leading to an updated estimate of the weight vector as \[\hat{\mathbf{w}}[n]=\hat{\mathbf{w}}[n-1]+\mathbf{g}[n]\tilde{x}[n]. \tag{26}\] This updated weight vector can be used to calculate the test statistic at the \(n\)th instant as \(T[n]=\frac{n-K}{\sigma_{\varphi}^{2}}\|\mathbf{\Sigma}^{-\frac{1}{2}}\hat{\mathbf{w}}_{y}[n]\|^{2}\). We then declare \(x[n]\) to be caused by \(y[n]\) if \(T[n]>\lambda_{1}[n]\) and declare \(x[n]\) to be not caused by \(y[n]\) if \(T[n]<\lambda_{0}[n]\). We wait for the next sample if \(T[n]\in(\lambda_{0}[n],\lambda_{1}[n])\). ## V Numerical Results We now present our numerical results for an example VAR-1 process with \(a=0.25\), generating its samples for different additive noise variances according to (7). We then evaluate the probabilities of detection and false alarm by averaging over 10,000 independent realizations of the signals of interest. In Fig. 1, we compare our derived results against the simulated receiver operating characteristics (ROCs) for the example case discussed above for different numbers of samples at an SNR of 0 dB.
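The theoretical curves in such a comparison follow directly from (19) and (20), since \(Q_{K}(\sqrt{\kappa},\sqrt{\lambda})\) is the survival function of a non-central chi-squared variable with \(2K\) degrees of freedom and non-centrality \(\kappa\). A minimal sketch using SciPy, with a placeholder value of \(\kappa\):

```python
import numpy as np
from scipy.stats import chi2, ncx2

K = 1                     # model order
kappa = 8.0               # placeholder: (N-K)/sigma_phi^2 * w_y^H Sigma^-1 w_y
lambdas = np.linspace(0.01, 40.0, 200)   # sweep of detection thresholds

# (20): P_fa = Q_K(0, sqrt(lambda)) = P(chi^2_{2K} > lambda)
p_fa = chi2.sf(lambdas, df=2 * K)
# (19): P_d = Q_K(sqrt(kappa), sqrt(lambda)) = P(chi^2_{2K}(kappa) > lambda)
p_d = ncx2.sf(lambdas, df=2 * K, nc=kappa)

# Sweeping lambda traces out the theoretical ROC as the curve (p_fa, p_d).
```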
It is observed that for a small number of samples, both the simulated and theoretical ROCs are close to the \(P_{FA}=P_{D}\) line. However, the concavity of the ROC increases with an increase in the number of samples, as per intuition. In Fig. 2 we plot the ROC for our proposed test statistic on a real-world dataset with known ground truth, made available in [18], for a data SNR of 20 dB for different window sizes (\(N\)). The figure corresponds to data pair \(69\) of the database, containing temperature measurements from inside and outside a room taken every five minutes. This data set was selected out of the \(108\) available data sets because it represented time series data that fit well into our proposed system model, and the number of samples in these time series was sufficiently large to obtain probabilities of detection and false alarm with a fair amount of accuracy. We note that neither the underlying model parameters of the data nor the variance of the additive noise are known. As a consequence, we fit this data into a VAR-1 model similar to the example considered previously, and assume it to be inherently noiseless. We first process these time series to make them zero mean. As a next step, we artificially inject additive white Gaussian noise to obtain a given signal to noise ratio for the said time series. Following this, the data pairs are divided into windows of length \(N\), and the test statistic is calculated and compared to the threshold. The causal relationship is marked as successfully detected if the test statistic exceeds the threshold, and the detection of the causal relationship is marked as missed otherwise. Simultaneously, we consider the noise injected into the time series separately and use it to generate the test statistic and compare it against the same threshold. In this case the test statistic exceeding the threshold implies a false alarm. The results of these comparisons are averaged over all the sample windows to obtain the probabilities of detection and false alarm. The jitters in the plots are due to the averaging over the relatively small number of samples that were available. In Fig. 3 we repeat the same experiment as in Fig. 2 for a window size \(N=50\) at different data SNRs. ## VI Conclusions In this work, we have derived, evaluated and demonstrated a novel test statistic for detecting causal relationships among time series. We have derived the underlying probabilities of detection and false alarm for the above mentioned test statistic and extended it to support sequential detection of causative relationships among time series. Via extensive numerical simulations we have shown that the derived results match well with simulated as well as real world data. Future work would consider the extension of these results to multiple undersampled time series.
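For completeness, the following sketch strings together the RLS recursion (21)-(26) and the three-way decision rule of Section IV. Real-valued data, constant thresholds, and \(\mathbf{\Sigma}=\mathbf{I}\) are simplifying assumptions made only to keep the illustration short; they are not part of the paper's derivation.

```python
import numpy as np

def sequential_causality_test(x, y, K=1, mu=0.99, delta=100.0,
                              lam0=1.0, lam1=20.0, sigma_phi2=1.0):
    """RLS-based sequential Granger test (real-valued sketch).
    Returns ('H1', n), ('H0', n), or ('undecided', n)."""
    w = np.zeros(2 * K)                 # stacked weights [w_x; w_y]
    P = delta * np.eye(2 * K)           # running estimate of Phi^{-1}
    for n in range(K, len(x)):
        phi = np.concatenate([x[n - K:n][::-1], y[n - K:n][::-1]])  # regressor
        e = x[n] - w @ phi                                  # (21) prediction error
        g = P @ phi / (mu + phi @ P @ phi)                  # (24) RLS gain
        P = (P - np.outer(g, phi @ P)) / mu                 # (25) inverse update
        w = w + g * e                                       # (26) weight update
        T = (n - K) / sigma_phi2 * np.sum(w[K:] ** 2)       # statistic on w_y (Sigma = I)
        if T > lam1:
            return "H1", n              # declare x caused by y
        if T < lam0:
            return "H0", n              # declare no causal link
    return "undecided", len(x)
```

In the paper the thresholds \(\lambda_{0}[n]\) and \(\lambda_{1}[n]\) may vary with \(n\); constants are used here only for brevity.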
2309.07586
Emo-StarGAN: A Semi-Supervised Any-to-Many Non-Parallel Emotion-Preserving Voice Conversion
Speech anonymisation prevents misuse of spoken data by removing any personal identifier while preserving at least linguistic content. However, emotion preservation is crucial for natural human-computer interaction. The well-known voice conversion technique StarGANv2-VC achieves anonymisation but fails to preserve emotion. This work presents an any-to-many semi-supervised StarGANv2-VC variant trained on partially emotion-labelled non-parallel data. We propose emotion-aware losses computed on the emotion embeddings and acoustic features correlated to emotion. Additionally, we use an emotion classifier to provide direct emotion supervision. Objective and subjective evaluations show that the proposed approach significantly improves emotion preservation over the vanilla StarGANv2-VC. This considerable improvement is seen over diverse datasets, emotions, target speakers, and inter-group conversions without compromising intelligibility and anonymisation.
Suhita Ghosh, Arnab Das, Yamini Sinha, Ingo Siegert, Tim Polzehl, Sebastian Stober
2023-09-14T10:40:43Z
http://arxiv.org/abs/2309.07586v1
# Emo-StarGAN: A Semi-Supervised Any-to-Many Non-Parallel Emotion-Preserving Voice Conversion ###### Abstract Speech anonymisation prevents misuse of spoken data by removing any personal identifier while preserving at least linguistic content. However, emotion preservation is crucial for natural human-computer interaction. The well-known voice conversion technique StarGANv2-VC achieves anonymisation but fails to preserve emotion. This work presents an any-to-many semi-supervised StarGANv2-VC variant trained on partially emotion-labelled non-parallel data. We propose emotion-aware losses computed on the emotion embeddings and acoustic features correlated to emotion. Additionally, we use an emotion classifier to provide direct emotion supervision. Objective and subjective evaluations show that the proposed approach significantly improves emotion preservation over the vanilla StarGANv2-VC. This considerable improvement is seen over diverse datasets, emotions, target speakers, and inter-group conversions without compromising intelligibility and anonymisation. Suhita Ghosh\({}^{1}\)+, Arnab Das\({}^{1,3}\)+, Yamini Sinha\({}^{2}\), Ingo Siegert\({}^{2}\), Tim Polzehl\({}^{3}\), Sebastian Stober\({}^{1}\)\({}^{1}\)Artificial Intelligence Lab (AILab), Otto-von-Guericke-University, Magdeburg, Germany \({}^{2}\)Mobile Dialog Systems, Otto-von-Guericke-University, Magdeburg, Germany \({}^{3}\)Speech and Language Technology, German Research Center for Artificial Intelligence (DFKI) {suhita.ghosh,yamini.sinha,ingo.siegert,stober}@ovgu.de {arnab.das,tim.polzehl}@dfki.de Footnote †: These authors contributed equally to this work **Index Terms**: speech anonymisation, voice conversion, StarGAN ## 1 Introduction The increasing use of cloud-based speech devices, such as smart speakers, raises concerns about the protection and confidentiality of the sensitive data being collected and used [1, 2]. In case of data compromise, the spoken data can be exploited to bypass speaker verification systems or impersonate authorised users [3, 4]. This makes it crucial to anonymise the utterance before being shared across systems, such that the speaker cannot be traced. Voice conversion (VC) achieves anonymisation by modifying the utterance of the source speaker to sound like another target speaker while preserving at least linguistic content. In cases where the response of a speech device is driven by the end-user's emotional state, the preservation of emotion also becomes pertinent, e.g., a digital assistant responding with comforting words when the user sounds sad. Many VC approaches using parallel data have been proposed, such as parametric statistical modelling-based [5, 6], non-parametric exemplar-based [7, 8] and deep neural network-based [9]. Parallel data comprise utterances having the same linguistic content from both the source and target speakers, which is arduous and expensive to acquire. Therefore, recent works focus more on non-parallel data, as it is simpler to obtain and better represents real-life situations where any arbitrary speech requires anonymisation. A few non-parallel VC approaches [10, 11] use phonetic posteriorgrams (PPGs) as one of the inputs to the encoder-decoder framework to generate translated acoustic features. These methods tend to produce mispronunciations due to alignment issues [12], resulting in degraded prosody, which provides cues about emotion [13].
The non-parallel variational autoencoder (VAE) approaches [14, 15] typically disentangle the content and speaker embeddings using a reconstruction loss and relevant constraints to remove speaker information. The VAE-based approaches are prone to spectrum smoothing, which leads to a buzzy-sounding voice, dampening the emotion [16]. A plethora of generative adversarial network (GAN) based VC approaches [16, 17] were proposed, which can use non-parallel data due to the cycle-consistency loss [18]. GANs overcome the over-smoothing effect through a discriminator, which teaches the generator to produce natural-sounding conversions. Recently, StarGANv2-VC [19], a non-parallel any-to-many GAN-based VC technique, has been proposed. The method is attractive due to its fast real-time conversion and natural-sounding samples with high intelligibility. However, the model fails to preserve the source speaker's emotion, especially for diverse emotions and acoustic conditions such as highly varying pitch. Thus, we propose the novel "Emo-StarGAN" in this paper, which is an any-to-many semi-supervised _emotion-preserving_ variant of StarGANv2-VC. Two kinds of emotion supervision are proposed: (i) _direct_: through an emotion classifier, which provides feedback to the generator when the emotion ground truth is available; (ii) _indirect_: through losses computed between source and conversions using emotion embeddings or acoustic descriptors correlated with emotion, improving the conversion quality for diverse target speakers. Extensive evaluation is conducted on three datasets, diverse target speakers, emotions, and various group conversions such as accent and gender. Both objective and subjective evaluations show that Emo-StarGAN improves emotion preservation significantly over StarGANv2-VC for all cases, without hurting the naturalness, intelligibility and anonymisation. ## 2 StarGANv2-VC Architecture Our method is based on the StarGANv2-VC architecture, as shown in Figure 1. Figure 1: The proposed framework adapted from StarGANv2-VC [19]. The blue components do not belong to StarGANv2-VC. In voice conversion, the style encoder captures speaker embeddings. The same framework is used for emotion conversion, where the style encoder learns emotion embeddings. The dashed components are not used in the emotion embedding training. A _single_ generator \(G\) is trained to convert a source utterance \(X_{s}\) to the target utterance \(Y_{r}\), conditioned on the speaker style embedding \(h_{sc}\). The speaker style embedding \(h_{sc}\) represents _speaker characteristics_, such as accent. The speaker style-encoder \(SE\) produces the speaker style embedding \(h_{sc}\) using the target speaker's mel-spectrogram \(X_{r}\), which carries the style information, and the target speaker's code \(r\) (one-hot encoding). \(SE\) comprises multiple convolutional layers which are shared by all the speakers, followed by a speaker-specific linear projection layer, which outputs an embedding \(h_{sc}\) for each target speaker. A mapping network \(M\) having the same architecture as \(SE\) is trained along with it, which inputs a random latent vector instead of a reference mel-spectrogram, providing diverse style representations for all speakers. The converted sample produced by the generator \(Y_{r}=G(X_{s},h_{F0},h_{sc})\) captures the style of the target speaker-code \(r\) and has the linguistic content of the source utterance \(X_{s}\).
In order to produce F0-consistent conversions, the generator is fed with the source-pitch embedding \(h_{F0}\) along with the source utterance \(X_{s}\) and the style representation \(h_{sc}\). The pitch embedding \(h_{F0}\) is derived from the convolutional outputs of a pre-trained F0 network [20]. The framework consists of one discriminator \(D\) and one adversarial source speaker classifier \(C_{s}\). \(D\) is the typical adversarial discriminator, which encourages the generator to produce plausible conversions. \(C_{s}\) has the same architecture as \(D\) and is trained to enforce the generator to produce conversions carrying no details about the source speaker. ## 3 Emo-StarGAN Recent VC works [21], including StarGANv2-VC, have primarily focused on generating natural-sounding voices with correct linguistic content, and not much on emotion preservation. The proposed Emo-StarGAN aims to anonymise an utterance by modifying the source speaker's timbre, while preserving the source's linguistic and _emotional_ content \(e_{s}\). ### Direct Emotion Supervision Our framework uses an additional emotion classifier \(C_{e}\) which provides _direct_ emotion supervision for utterances having emotion labels, as shown in Figure 1. \(C_{e}\) encourages the generator to produce _emotion-consistent_ samples, such that the source and target samples have the same emotion. When \(C_{e}\) is trained, the generator weights are fixed, and the emotion classifier is trained to ascertain the emotion of the source utterance through the classification loss \(L_{emod}\). \[L_{emod}\!=\!\mathbb{E}_{X_{s},e_{s}}\big{[}CrossEntropy(C_{e}(X_{s}),e_{s})\big{]} \tag{1}\] In contrast, during the training of the generator, the \(C_{e}\) weights are fixed, and the generator is encouraged to produce samples having the same emotion as the source through the loss \(L_{emog}\). \[L_{emog}\!=\!\mathbb{E}_{X_{s},e_{s},h_{sc}}\big{[}CrossEntropy(C_{e}(G(X_{s},h_{sc})),e_{s})\big{]} \tag{2}\] ### Indirect Emotion Supervision Incorporating explicit emotion supervision for the converted samples is challenging due to the unavailability of emotion labels. Therefore, it becomes pertinent to measure the emotion discrepancy between the source and the converted samples through representations of emotion. To this end, we propose two ways to measure discrepancies of the emotional content: acoustic features correlated to emotion, and deep emotion embeddings. #### 3.2.1 Emotion-aware Acoustic Feature Loss We propose the acoustic feature loss \(L_{af}\), an unsupervised loss computed between the acoustic descriptors of the source and converted samples, as shown in Equation 3, where \(AF\) denotes an acoustic feature. \[L_{af}\!=\!\mathbb{E}_{X_{s},h_{sc}}\big{[}\|AF(X_{s})\!-\!AF(G(X_{s},\!h_{sc}))\|_{1}\big{]} \tag{3}\] The acoustic features are correlated with emotion and are required to be differentiable in order to provide feedback to the network. Based on [22], the acoustic descriptors can be categorised into two groups, spectral and non-spectral. Spectral features add information about higher-level harmonics to that already existing in pitch, which provides pertinent cues for the emotional state [23]. Many works [23, 24] report spectral features to be better discriminators between emotions that have a different degree of polarity (valence) but similar intensity (arousal), such as anger and happiness. The non-spectral features are energy- or voicing-related, which are typically prosodic and arousal-indicative [25].
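To make Equation 3 concrete, the sketch below shows how one such descriptor, the spectral centroid, can be computed in a differentiable way from mel-spectrograms and compared between source and conversion. This is a minimal illustration, not the authors' implementation: the tensor layout (batch, mel bins, frames) is an assumption, the centroid is computed in mel-bin units rather than Hz, and the voiced-segment masking and 50% overlapping windows described below are omitted for brevity.

```python
import torch

def spectral_centroid(mel: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-frame spectral centroid of a mel-spectrogram shaped (batch, n_mels, frames).

    The centroid is the magnitude-weighted mean bin index, so the computation is
    differentiable and gradients can flow back into the generator.
    """
    bins = torch.arange(mel.size(1), device=mel.device, dtype=mel.dtype)
    weights = mel.clamp_min(0.0)  # treat magnitudes as non-negative weights
    return (bins[None, :, None] * weights).sum(dim=1) / (weights.sum(dim=1) + eps)

def acoustic_feature_loss(mel_src: torch.Tensor, mel_conv: torch.Tensor) -> torch.Tensor:
    """L1 discrepancy of one emotion-correlated descriptor (Equation 3), here the centroid."""
    return torch.mean(torch.abs(spectral_centroid(mel_src) - spectral_centroid(mel_conv)))
```

The same pattern applies to the other descriptors: each is a differentiable reduction of the spectrogram (or of the F0 track, for \(\Delta\)F0), compared via an L1 distance between source and converted sample.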
We consider two descriptors from each of the two categories. All descriptors are extracted over voiced segments using 50% overlapping windows, to capture the local transients.
* **Spectral centroid:** Higher spectral centroid values indicate emotions positioned in the upper-right quadrant of the valence-arousal 2D plane, such as _excited_ or _happy_ [26]. Lower values indicate subdued emotions, such as _sad_.
* **Spectral kurtosis:** Spectral kurtosis shows the existence of increased energy concentration within specific frequency ranges. Further, it can detect series of transients [27], which can make it a good indicator of emotions, especially the ones with subtle intonation changes, such as _surprise_.
* **Loudness:** Loudness is an arousal-indicative non-spectral feature, which correlates more strongly with emotion than root-mean-square energy due to the perceptual A-weighting [28]. Louder sounds elicit stronger emotional responses (high arousal), and vice versa.
* **Change in F0 (\(\Delta\)F0):** \(\Delta\)F0 is a prosodic non-spectral feature which captures change in intonation, where a considerable change implies stronger emotions, such as _anger_ or _excited_ [28].

#### 3.2.2 Emotion Embedding Loss Another way of incorporating indirect emotion supervision is through latent emotion representations. The emotion embedding loss \(L_{embed}\) penalises the discrepancy between the latent emotional content of the source and converted samples. \[L_{embed}\!=\!\mathbb{E}_{X_{s},h_{sc}}\big{[}\|Emb(X_{s})\!-\!Emb(G(X_{s},\!h_{sc}))\|_{1}\big{]} \tag{4}\] The emotion embedding is obtained by a two-stage training on categorical emotion-labelled data. At Stage I, the vanilla StarGANv2-VC model is trained for the emotion conversion task rather than voice conversion, as shown in Figure 1. The emotion style-encoder learns \(N\!\times\!64\) embeddings of emotion classes, where \(N\) denotes the number of emotion classes. However, this framework cannot be used in the VC training, as an emotion label (code) is required to generate the emotion embeddings, which is unknown for the converted samples. Therefore, the pre-trained emotion style-encoder from Stage I is fine-tuned for automatic embedding extraction, as shown in Figure 2. At Stage II, the pre-trained emotion style-encoder is extended with fully-connected layers, and a softmax distribution is generated over all emotions. Further, the softmax score is element-wise squared to encourage sparsity. Finally, a dot product between the sparse \(1\times N\) score and the encoder output is performed to produce a \(1\times 64\) dimensional latent emotion representation. This fine-tuned model is used in the VC training to extract the emotion embeddings from both source and converted samples. Figure 2: Automatic emotion embedding extraction. N denotes the number of emotion classes. ### Training Objectives The components in Emo-StarGAN are trained with the proposed emotion-aware losses along with the losses from StarGANv2-VC. The generator is trained with the loss \(L_{G}\) (Equation 5), comprising the proposed emotion classification loss \(L_{emog}\), the unsupervised emotion-aware losses (\(L_{af}\) and \(L_{embed}\)), and the losses from StarGANv2-VC, with \(\lambda\) coefficients weighting the respective terms: \[L_{G}\!=\!L_{\mathrm{StarGANv2}}+\lambda_{emo}L_{emog}+\lambda_{af}L_{af}+\lambda_{embed}L_{embed} \tag{5}\]
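A minimal sketch of how these terms might be combined in the generator update is given below. The \(\lambda\) weights and the gating of the direct term on label availability reflect the semi-supervised setup described above, but the exact weighting scheme is an assumption rather than the paper's code.

```python
from typing import Optional
import torch

def total_generator_loss(stargan_loss: torch.Tensor,
                         af_loss: torch.Tensor,
                         embed_loss: torch.Tensor,
                         emo_cls_loss: Optional[torch.Tensor] = None,
                         lam_af: float = 1.0,
                         lam_embed: float = 1.0,
                         lam_emo: float = 1.0) -> torch.Tensor:
    """Assemble Equation 5 from precomputed loss terms.

    The unsupervised terms (acoustic-feature and embedding losses) apply to every
    sample; the direct classification term is added only when the source utterance
    carries an emotion label (the semi-supervised case).
    """
    loss = stargan_loss + lam_af * af_loss + lam_embed * embed_loss
    if emo_cls_loss is not None:
        loss = loss + lam_emo * emo_cls_loss
    return loss
```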
## 4 Results The contributions of the other acoustic features are spectral centroid (24.1%), loudness (15.4%) and \(\Delta\)F0 (20.1%), which suggests that the spectral features are more emotion-preserving than the non-spectral ones, consistent with [23]. The ablation study (Table 2) shows that the unsupervised loss \(L_{embed}\) contributes the most to emotion preservation, even more than the direct supervision by the emotion classifier \(C_{e}\); this might be attributed to \(C_{e}\) suffering from confirmation bias on noisy emotion labels. Further, we observe that each individual proposed technique preserves emotion better than the baseline. **Comparison with Baseline**: Our method Emo-StarGAN outperforms the baseline with respect to emotion preservation for all scenarios (Table 1), which is also statistically significant (p \(<\) 0.001 for a paired t-test on the PCC and Embedding MAE columns). The subjective evaluation (Table 3) also shows that our model is voted more emotion-preserving (72%) compared to the baseline (28%). _Surprise_ is reported as one of the most difficult emotions in speech emotion recognition tasks [40]. Our method also achieves lower accuracy for _surprise_ compared to other emotions, where Acc\({}_{\text{errn}}\) scores for ESD and RAVDESS are only 16% and 0% respectively. However, preservation seems much higher considering Acc\({}_{\text{errn}}\) scores, 97.9% for ESD and 34% for RAVDESS. Our framework improves emotion preservation significantly for the cross-corpus (RAVDESS\(\rightarrow\)ESD) scenario with respect to all metrics, especially for _sad_, where the emotion preservation improves from 0% to 59% (Acc\({}_{\text{errn}}\)), 9% to 73% (Acc\({}_{\text{errn}}\)), 55.1 to 44.8 (Embedding MAE) and 76% to 81% (PCC). Considering inter-accent cases, our model produces a high Acc\({}_{\text{errn}}\) score of 89.9% for both American \(\rightarrow\) British and Canadian \(\rightarrow\) British conversions, and also improves other quality metrics. For both gender conversion cases, similar observations are made. Our method outperforms the baseline mostly with respect to voice quality, intelligibility, and anonymisation, which is further supported by the subjective results. The code and demo audio samples can be found online2. Footnote 2: [https://github.com/suhitaghosh10/emo-stargan.git](https://github.com/suhitaghosh10/emo-stargan.git) ## 5 Conclusions To the best of our knowledge, we propose the first emotion-preserving any-to-many semi-supervised voice conversion framework, Emo-StarGAN. We introduce novel unsupervised acoustic descriptor-based and deep emotion losses, which can be used with any other framework.
Extensive experiments show that Emo-StarGAN preserves emotion significantly better than the state-of-the-art VC method StarGANv2-VC over seen source speakers, cross-corpus conversions, different genders, accents and emotions. Subjective results show that our method even achieves higher MOS and anonymisation scores. As future work, we plan to improve the emotion preservation for complex emotions by incorporating losses beneficial to a specific emotion. Further, we would like to extend the method with emotion embeddings learned from multi-label and arousal-valence labelled datasets. ## 6 Acknowledgements This research has been partly funded by the Federal Ministry of Education and Research of Germany in the project Emonymous (project number S21060A) and partly funded by the Volkswagen Foundation in the project AnonymPrevent (AI-based Improvement of Anonymity for Remote Assessment, Treatment and Prevention against Child Sexual Abuse).

Table 1: Objective comparison of the baseline StarGANv2-VC and Emo-StarGAN across source-target conversion types, reporting emotion-preservation accuracies [%], Embedding MAE, PCC, CER and EER.
2309.06698
Benchmarking Procedural Language Understanding for Low-Resource Languages: A Case Study on Turkish
Understanding procedural natural language (e.g., step-by-step instructions) is a crucial step to execution and planning. However, while there are ample corpora and downstream tasks available in English, the field lacks such resources for most languages. To address this gap, we conduct a case study on Turkish procedural texts. We first expand the number of tutorials in Turkish wikiHow from 2,000 to 52,000 using automated translation tools, where the translation quality and loyalty to the original meaning are validated by a team of experts on a random set. Then, we generate several downstream tasks on the corpus, such as linking actions, goal inference, and summarization. To tackle these tasks, we implement strong baseline models via fine-tuning large language-specific models such as TR-BART and BERTurk, as well as multilingual models such as mBART, mT5, and XLM. We find that language-specific models consistently outperform their multilingual counterparts by a significant margin across most procedural language understanding (PLU) tasks. We release our corpus, downstream tasks and the baseline models at https://github.com/GGLAB-KU/turkish-plu.
Arda Uzunoglu, Gözde Gül Şahin
2023-09-13T03:42:28Z
http://arxiv.org/abs/2309.06698v2
# Benchmarking Procedural Language Understanding for Low-Resource Languages: A Case Study on Turkish ###### Abstract Understanding procedural natural language (e.g., step-by-step instructions) is a crucial step to execution and planning. However, while there are ample corpora and downstream tasks available in English, the field lacks such resources for most languages. To address this gap, we conduct a case study on Turkish procedural texts. We first expand the number of tutorials in Turkish wikiHow from 2,000 to 52,000 using automated translation tools, where the translation quality and loyalty to the original meaning are validated by a team of experts on a random set. Then, we generate several downstream tasks on the corpus, such as linking actions, goal inference, and summarization. To tackle these tasks, we implement strong baseline models via fine-tuning large language-specific models such as TR-BART and BERTurk, as well as multilingual models such as mBART, mT5, and XLM. We find that language-specific models consistently outperform their multilingual counterparts by a significant margin across most procedural language understanding (PLU) tasks. We release our corpus, downstream tasks and the baseline models at [https://github.com/GGLAB-KU/turkish-plu](https://github.com/GGLAB-KU/turkish-plu). ## 1 Introduction A procedural text typically comprises a sequence of steps that need to be followed in a specific order to accomplish a goal. For example, to care for an indoor plant, one must undertake tasks such as i) _selecting an appropriate location for the plant_, ii) _maintaining indoor humidity levels_, and iii) _selecting the right fertilizer_, usually in the given order. To accomplish a goal given as step-by-step instructions, a set of diverse skills that can be related to traditional NLP tasks such as semantic analysis (e.g., who did what to whom), commonsense reasoning (e.g., a plant requires water), and coreference resolution (e.g., _it_ refers to the _plant_) is required. Hence, procedural language understanding (PLU) can be considered a proxy to measure the performance of models on a combination of these distinct skills. Previous work has extensively utilized wikiHow tutorials and proposed several downstream tasks on procedural text. For example, Zhang et al. (2020) introduced step and goal inference tasks where the objective is to predict the most likely _step_ given the _goal_ or vice versa. Similarly, Zellers et al. (2019) proposed predicting the _next event_ given the goal and the current step. All of these tasks are formulated as multiple-choice QA and require a partial understanding of step-goal relations in procedural documents. Furthermore, Zhou et al. (2022) proposed an information retrieval task where the goal is to link _steps_ to related _goals_ to create a wikiHow hierarchy. Finally, several other works Koupaee and Wang (2018); Ladhak et al. (2020) proposed an abstractive summarization task that requires competitive language generation skills. Despite its importance, PLU has been largely ignored for the majority of languages due to a lack of language-specific web corpora. Except for Ladhak et al. (2020), all the aforementioned tasks are only available in English. In addition to the scarcity of raw text, creating downstream task data is challenging and might require language-specific filtering techniques to ensure high quality. Finally, all previous works study the proposed tasks in isolation, which can only give a limited insight into the model's performance.
Considering the uneven distribution of available procedural data across languages1, our objective is to inspire research efforts on PLU for other understudied languages from different language families. To achieve this, we design a case study focused on the Turkish language. Unlike previous works, we adopt a centralized approach and introduce a comprehensive benchmark that contains six downstream tasks on procedural documents. To address the scarcity of resources, we utilize automatic machine translation tools. We implement rigorous quality control measures for machine translation, including human evaluation, and show that the data is indeed high-quality. Next, we survey and study several downstream tasks and create high-quality, challenging task data through language-specific filtering and manual test data annotation. Finally, we perform a comprehensive set of experiments on a diverse set of language models with different pretraining and fine-tuning settings and architectures. We find that language-specific models mostly outperform their multilingual counterparts; however, model size is a more important factor than training language, i.e., large enough multilingual models outperform medium-sized language-specific models. We show that tasks where we can perform rigorous language-specific preprocessing, such as goal inference, are of higher quality and thus more challenging. Finally, we find that our best-performing models for most downstream tasks, especially reranking, goal inference, and step ordering, are still far behind their English counterparts, suggesting a large room for improvement. We release all the resources--including the structured corpus of more than 52,000 tutorials, data splits for six downstream tasks and the experimented baseline models--at [https://github.com/GGLAB-KU/turkish-plu](https://github.com/GGLAB-KU/turkish-plu). ## 2 Related Work WikiHow is an eminent source for studying procedural text, allowing for a broad range of NLP tasks to be proposed and studied, such as linking actions Lin et al. (2020); Zhou et al. (2022), step and goal inference Zhang et al. (2020); Yang et al. (2021), step ordering Zhang et al. (2020); Zhou et al. (2019), next event prediction Nguyen et al. (2017); Zellers et al. (2019), and summarization Koupaee and Wang (2018); Ladhak et al. (2020). While these works serve as a proxy to procedural text understanding, they are mostly limited to English. Exploiting machine translation tools is a common practice to generate semantic benchmarks for many resource-scarce languages. For instance, Mehdad et al. (2010) automatically translated hypotheses from English to French to generate a textual entailment dataset. Similarly, Real et al. (2018) created a Portuguese corpus for natural language inference (NLI), namely SICK-BR, and Isbister and Sahlgren (2020) introduced the first Swedish benchmark for semantic similarity, by solely employing automatic translation systems. Moreover, Budur et al. (2020) and Beken Fikri et al. (2021) employed Amazon and Google Translate to generate Turkish NLI and sentence similarity datasets via automatically translating existing resources such as SNLI Bowman et al. (2015), MNLI Williams et al. (2018) and STS-B Cer et al. (2017).
## 3 Turkish PLU Benchmark To evaluate the procedural language understanding capacity of existing models and to improve upon them, we introduce i) a large procedural documents corpus covering a wide range of domains for Turkish, ii) a diverse set of downstream tasks derived from the corpus to evaluate distinct large language models and iii) strong baselines for each task. ### Corpus Following previous work Zhang et al. (2020), we utilize wikiHow, a large-scale source for procedural texts that contains how-to tutorials in a wide range of domains, curated by experts. We follow the format used by Zhang et al. (2020) and extract the title, methods/parts, steps, and additional information, such as the related tutorials, references, tips, and warnings. We focus on the categories with the least subjective instructions (e.g., Crafts) and ignore subjective categories (e.g., Relationships). Our corpus creation process has two steps: i) scraping the original Turkish wikiHow, and ii) translating the English tutorials from the English wikiHow corpus Zhang et al. (2020). **Scraping Turkish wikiHow.** Using the beautifulsoup library Richardson (2007), we scrape the Turkish wikiHow tutorials from the sitemap files. After the category filtering and deduplication process, we get over 2,000 tutorials. **Translating the English wikiHow.** To automate the translation process, we first develop an open-source _file-level_ translation tool: CeVeri. It is simply an easy-to-use Google Translate2 wrapper that utilizes recursive search to find, translate and replace nested text fields within a file (see Appendix D). After filtering the subjective categories, we translate over 50,000 tutorials using CeVeri. **MT Quality Control.** To measure the translation quality of CeVeri, we translate the English counterparts of the original Turkish wikiHow tutorials and calculate a set of automatic evaluation metrics such as BLEU and COMET Papineni et al. (2002); Lin (2004); Banerjee and Lavie (2005); Rei et al. (2020); Popovic (2015), given in Table 1. Although we use conventional metrics such as BLEU to align well with the literature, we are aware of the concerns related to them Freitag et al. (2022). Therefore, we include metrics that better correlate with human evaluations, such as COMET Mathur et al. (2020); Freitag et al. (2021), and consider character-level information such as chrF Popovic (2015). Considering these, the considerably high COMET and chrF scores achieved by CeVeri indicate that the translation is, indeed, of high quality. We also conduct human validation with three native Turkish speakers fluent in English. We randomly sample 104 step triplets: a) the original Turkish step, b) the corresponding English step, and c) the translation of the English step, with respect to the category distribution of our corpus. Each expert is asked to evaluate the triplets by i) scoring the translation quality between the English step and the translated Turkish step and ii) scoring the semantic similarity between the original and the translated Turkish steps, both between 1 and 5 (inclusive; 5 is the best). As given in Table 2, the results are highly reassuring, indicating high average scores with substantial agreement Fleiss (1971). Additionally, we perform a pilot study to investigate the feasibility of using machine-translated data and find that silver data bring a noticeable improvement (see Appendix E).
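The recursive strategy behind CeVeri can be sketched as follows; `translate_fn` stands in for whatever translation backend is used (the paper wraps Google Translate), and the function name and the handling of non-string fields are illustrative assumptions rather than CeVeri's actual code.

```python
from typing import Any, Callable

def translate_nested(obj: Any, translate_fn: Callable[[str], str]) -> Any:
    """Recursively find, translate and replace every string field in a nested structure."""
    if isinstance(obj, str):
        return translate_fn(obj)
    if isinstance(obj, list):
        return [translate_nested(item, translate_fn) for item in obj]
    if isinstance(obj, dict):
        # Field names (keys) are kept as-is so the tutorial schema survives translation.
        return {key: translate_nested(value, translate_fn) for key, value in obj.items()}
    return obj  # numbers, booleans, None, etc. pass through unchanged

# e.g., translate_nested({"title": "How to Dry Lavender", "steps": ["..."]}, my_translate)
```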
Therefore, we consider the automatically generated part of our corpus to be of high quality due to the results of both the automatic and manual quality controls and the pilot study. **Corpus Statistics.** Our final corpus has more than 52,000 tutorials from six wikiHow categories, which contain around 719K steps and around 127K methods, with an average of 13.83 steps and 2.43 methods per tutorial, as given in Table 3. Computers and Electronics is the largest category, while Cars and Other Vehicles is the smallest. We posit that the number of tutorials for a category decreases as the level of expertise needed for writing tutorials for that category increases. The Health category is an exception to this, as most of its articles do not really go into depth and contain basic and simple instructions. Although the average numbers of steps and methods per tutorial are consistent across categories, they vary by data creation method. We believe the reason for such a difference is that the tutorials translated and added to Turkish wikiHow by editors are far more popular and gripping tutorials, which probably correlates with the level of ease, and thus the descriptiveness, of the tutorials. We hypothesize that they are prioritized in the translation line by wikiHow editors, as they attract more attention.

\begin{table} \begin{tabular}{c c c c c c} **BLEU** & **ROUGE** & **METEOR** & **COMET** & **chrF** & **chrF++** \\ \hline 23.51 & 52.25 & 44.32 & 88.12 & 67.91 & 62.08 \\ \end{tabular} \end{table} Table 1: BLEU, ROUGE, METEOR, COMET, chrF, and chrF++ scores calculated over 1734 translated English-Turkish article pairs. All of the metrics are mapped to the interval of [0, 100] for convenience. A higher score indicates a better translation for each evaluation metric.

Table 2: Results of the expert human validation on automatic machine translation quality control. Agree 5 and +4 respectively represent the percentage of the experts who agree that the score must be 5, or 4 and more.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Source** & **\#Tutorials** & **\#Steps (Avg)** & **\#Methods (Avg)** \\ \hline C\&OV & 2K & 32K (13.42) & 5K (2.33) \\ C\&E & 16K & 229K (13.89) & 34K (2.10) \\ HE & 11K & 154K (14.34) & 31K (2.87) \\ H\&C & 9K & 119K (13.37) & 19K (2.20) \\ H\&G & 10K & 133K (13.66) & 25K (2.59) \\ P\&A & 4K & 53K (13.75) & 11K (2.86) \\ \hline Original & 2K & 38K (19.15) & 7K (3.35) \\ Translated & 50K & 681K (13.61) & 120K (2.40) \\ \hline **Total** & **52K** & **719K (13.83)** & **127K (2.43)** \\ \hline \hline \end{tabular} \end{table} Table 3: Final corpus statistics. C&OV: Cars and Other Vehicles, C&E: Computers and Electronics, HE: Health, H&C: Hobbies and Crafts, H&G: Home and Garden, P&A: Pets and Animals.

### Downstream Tasks Next, we take inspiration from previous works that studied a single downstream task created on wikiHow and combine them under a single benchmark, summarized in Table 4 and explained below. **Linking Actions.** The task is defined as detecting the links between the steps and the goals across articles, as shown in Figure 1. The steps provided in the tutorials, along with their hyperlinked goals, serve as the ground-truth data for the linking actions task. **Goal Inference.** The goal inference task is simply defined as predicting the most likely goal, given a step. This task is structured in a multiple-choice format (Zhang et al., 2020). For instance, when the prompt step is "Kıyafetlerini sık, böylece daha hızlı kuruyacaktır. (Squeeze your clothes, they would get dry quicker this way.)" and the candidate goals are: A. Lavanta Nasıl Kurutulur? (How to Dry Lavender) B. Kıyafetler Elde Nasıl Yıkanır? (How to Hand-Wash Clothes) C. Kıyafetler Çabucak Nasıl Kurutulur? (How to Dry Clothes Quickly) D. Islak Bir iPhone Nasıl Kurutulur? (How to Dry a Wet iPhone) then the answer would be **C**.
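For illustration, a single goal-inference record in a SWAG-style layout might look like the snippet below, built from the example above; the field names are hypothetical, not the released data schema.

```python
# A hypothetical goal-inference record; field names are illustrative only.
example = {
    "prompt_step": "Kıyafetlerini sık, böylece daha hızlı kuruyacaktır.",
    "candidate_goals": [
        "Lavanta Nasıl Kurutulur?",            # A
        "Kıyafetler Elde Nasıl Yıkanır?",      # B
        "Kıyafetler Çabucak Nasıl Kurutulur?", # C
        "Islak Bir iPhone Nasıl Kurutulur?",   # D
    ],
    "label": 2,  # zero-based index of the correct goal (option C)
}
```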
We collect the positive step-goal pairs by iteratively picking them from each tutorial. For the negative candidate sampling, we consider both the semantic similarity with the positive candidate and the contextual plausibility for the step. We first encode each step in our corpus by averaging the BERT embeddings (Devlin et al., 2019) of the verb, noun, and proper noun tokens3, contrary to Zhang et al. (2020), which only considers the verb tokens. The reason why we include the additional POS tags is that most of the steps and goals in our corpus contain auxiliary verbs, which are common in Turkish, such as _"yemek yapmak"_ (to cook)4. Although contextualized embeddings help distinguish such differences to a certain extent, we observe that the incorporation of the additional parts brings a significant improvement in our negative candidate sampling strategy. Using FAISS (Johnson et al., 2021) with our vector representations, we choose the top-3 goals with the highest cosine similarity to the positive candidate as the negative candidates. After the positive and negative candidate sampling, we randomly reassign one of the candidates as positive and correct the labels accordingly with a probability of 0.15, to avoid the model learning the sampling strategy. Lastly, we apply a set of hand-crafted filters (Zhang et al., 2020) to ensure the quality of the task-specific data. Footnote 3: We conduct the POS tagging with the nlpturk library. [https://github.com/nlpturk/nlpturk](https://github.com/nlpturk/nlpturk) **Step Inference.** Similar to the goal inference task, step inference is defined as predicting the most likely step for a given goal. It is also formulated as a multiple-choice task (Zhang et al., 2020). For instance, when the prompt goal is "Makas Nasıl Bileylenir? (How to Whet Scissors)" and the candidate steps are: A. Camı temizle. (Clean the glass/windows.) B. Makası sil. (Wipe the scissors.) C. Tuvaleti sil. (Wipe the toilet.) D. Karton kes. (Cut the cardboard.) the answer would be **B**. We follow the same steps as in goal inference to sample positive and negative candidates by simply reversing the roles of the goals and the steps in the sampling process. **Step Ordering.** Here, the goal is to predict which of the two given steps precedes the other in achieving a given goal. Similarly, it is formulated as a multiple-choice task.

\begin{table} \begin{tabular}{l r r r} \hline \hline **Task** & **Train** & **Validation** & **Test** \\ \hline Linking Actions & 1319 & — & 440 \\ Goal Inference & 255K & 5K & 837 \\ Step Inference & 124K & 5K & 612 \\ Step Ordering & 539K & 10K & 1021 \\ Next Event Prediction & 82K & 5K & 656 \\ Summarization & 113K & 6K & 6K \\ \hline \hline \end{tabular} \end{table} Table 4: Downstream tasks and dataset split sizes.

Figure 1: An example step with a hyperlink redirecting to another tutorial. (The step says "Connect your printer to your computer" and the linked tutorial is titled "How to Connect a Printer to a Computer".)

For instance, when the prompt goal is "YouTube'da Nasıl Yorum Bırakılır? (How to Leave a Comment on YouTube)" and the candidate steps are: A. Bir video arayın. (Search for a video.) B. YouTube'u açın. (Open YouTube.) **B** would be the answer since it must precede A. For this task, we use the sampling strategy of [22]. In wikiHow, some tutorials follow an ordered set of steps, while others contain alternative steps parallel to each other.
Out of the ordered portion of our corpus, obtained in Appendix B, we use each goal as a prompt to sample step pairs with a window size of 1 and do not include any non-consecutive steps. We also randomly shuffle the pairs to prevent any index biases. **Next Event Prediction.** This task aims to produce the following action for a given context. It can be formulated as either a text generation task [20, 22] or a multiple-choice task [20, 21]. Following the formulation of the SWAG dataset [20], we approach the next event prediction task as a multiple-choice task, in which a model needs to predict the most likely continuation of a given setting out of the candidate events. For instance, when the prompt goal is "Sabit Disk Nasıl Çıkarılır? (How to Remove a Hard Drive)", the prompt step is "Bilgisayarın kasasını aç. (Open the computer case.)" and the candidate steps are: A. Bilgisayar kasasının içinde sabit diski bul. (Locate the hard drive inside the computer.) B. Bilgisayarının verilerini yedekle. (Back up your computer's data.) C. Masaüstü anakartınla uyumlu bir sabit disk satın al. (Buy a hard drive that is compatible with your desktop motherboard.) D. Windows yüklü bir masaüstü bilgisayarın olduğundan emin ol. (Make sure that you have a Windows desktop computer.) then the answer would be **A**. With the subgroup of our corpus labeled as ordered, we iteratively collect the prompt goals and two consecutive steps, using the prior step as the prompt step and the later step as the positive candidate. After obtaining the positive candidate, we use a sampling strategy similar to the one used for goal inference. Unlike in goal inference, we additionally take pronoun token embeddings into account in order not to break the coreference chains. **Summarization.** Similar to Koupaee and Wang (2018) and Ladhak et al. (2020), we formulate it as abstractive summarization. We follow the data format proposed by Koupaee and Wang (2018) and build on WikiLingua's (Ladhak et al., 2020) contributions to performing summarization over Turkish procedural text. Within the wikiHow platform, every step is composed of a concise headline resembling a summary and a descriptive paragraph providing detailed information about the step. In cases where tutorials lack methods or parts, we use the descriptions and headlines of the steps to form two distinct text bodies. These text bodies are then utilized to generate document-summary pairs. In the tutorials containing methods or parts, we follow a similar approach at the method or part level. An illustration of a step from the tutorial "Giysiden Küf Nasıl Çıkarılır? (How to Get Mold Out of Clothing)" is presented in Figure 2. Figure 2: An example step from the "How to Get Mold Out of Clothing" tutorial. The bolded part is the step headline, used as the summary, while the step description serves as the text to be summarized. The step description does not include the step headline, making the task abstractive summarization. ### Test Split Construction via Expert Annotation Despite being synthetic, we incorporate examples from the machine-translated portion of our corpus into the test splits of our datasets. This decision stems from the limited availability of intersecting how-to tutorials on similar topics within the original Turkish wikiHow. Consequently, sampling negative candidates with high semantic similarity becomes challenging, leading to easily distinguishable positive candidates. Due to the automated nature of our dataset creation process, some noise is present in the multiple choice task datasets. This noise includes false negative candidates and translations that are incorrect or ambiguous. For instance, consider the step "Yarayı tedavi etmeden önce ve sonra uygun el yıkama yapın."
(Perform proper hand washing before and after treating the wound.)", which has a positive candidate of "Drenaj Yarasını Tedavi Etmek (Treat a Draining Wound)" and a negative candidate of "Yatak Yaralarını Tedavi Etmek (Treat Bedsores)." While the negative candidate is sampled due to its high semantic similarity with the positive candidate, it is also a plausible option for the given step. To address this issue, we employ expert annotation to validate the test splits of the multiple choice datasets and eliminate such noisy examples. We randomly sample 1000 examples for each of the goal inference, step inference, and next event prediction tasks and 1500 examples for the step ordering task, to be annotated by two experts. Firstly, the experts verify if there are multiple plausible candidates for each example. Secondly, the experts examine whether the translation has altered the meaning of any candidate. The annotation process retains approximately 60-80% of the randomly sampled examples, which are later utilized as the test splits, as illustrated in Table 4. ## 4 Models Due to the distinct formulation of each task, we describe them individually below. For each task, we define the overall methodology. The implementation settings are described in Appendix G. ### 4.1 Linking Actions We employ the retrieve-then-rerank strategy proposed by Zhou et al. (2022). As the name suggests, the retrieve-then-rerank approach consists of two stages: i) Retrieval: the steps and goals are encoded in a dense embedding space to perform semantic similarity search, and ii) Reranking: the top-n candidate goals are reranked for a given step by jointly encoding them. During the retrieval stage, we initially encode the steps and goals individually. By obtaining embeddings of the steps and goals, we proceed to calculate the cosine similarity between pairs of goals and steps. Leveraging these computed cosine similarities, we employ semantic similarity search with FAISS Johnson et al. (2021) to retrieve the top-n most similar candidates for each step. We experiment with both dense and sparse retrieval (e.g., BM25 Robertson and Zaragoza (2009)). For dense retrieval, we experiment with various sentence embedding models with different architectures (e.g., bi-encoder, cross-encoder), different fine-tuning data (e.g., NLI, STS, or both), and different pretraining data (e.g., Turkish or multilingual), described in detail in Appendix A.1. In addition to existing sentence embeddings, we take inspiration from the recent success of the SimCSE architecture Gao et al. (2021) and train our own Turkish-specific sentence embedding model, SimCSE-TR, in several training stages utilizing the text from Turkish Wikipedia and Turkish NLI (see Appendix C). Since each step has only one ground-truth goal, we use the standard recall metric to evaluate the retrieval models. Encoding steps and goals independently is efficient; however, it might result in information loss. Therefore, we rerank the top-n candidate list for each step, considering the step itself, the candidate goal, and the step's context, which includes surrounding steps or its goal.
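A minimal sketch of the retrieval stage is shown below, assuming the step and goal embeddings have already been computed; cosine similarity is obtained via inner product on L2-normalised vectors, which is a standard FAISS idiom rather than a quote of the paper's code.

```python
import faiss
import numpy as np

def retrieve_candidate_goals(step_vecs: np.ndarray, goal_vecs: np.ndarray, k: int = 30):
    """Return the top-k most similar goals per step under cosine similarity."""
    step_vecs = np.ascontiguousarray(step_vecs, dtype=np.float32)
    goal_vecs = np.ascontiguousarray(goal_vecs, dtype=np.float32)
    faiss.normalize_L2(step_vecs)   # in-place L2 normalisation
    faiss.normalize_L2(goal_vecs)
    index = faiss.IndexFlatIP(goal_vecs.shape[1])  # exact inner-product search
    index.add(goal_vecs)
    scores, goal_ids = index.search(step_vecs, k)
    return scores, goal_ids
```

The reranking stage then jointly re-scores these top-n candidates, trading the efficiency of independent encoding for a richer comparison, as described next.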
To accomplish this, we concatenate and input them into another model, utilizing the [CLS] token in the final hidden state to calculate a second similarity score. By reordering the top-n candidates based on the second similarity scores, we obtain the final list. ### Multiple Choice Tasks Since the goal inference, step inference, step ordering, and next event prediction tasks share a consistent formulation and adhere to the data format of the SWAG Zellers et al. (2018) dataset, we employ an identical methodology across these tasks. The models we investigate utilize a common strategy for the aforementioned multiple choice tasks. We provide the models with a question--the goal text for step inference and step ordering, the step text for goal inference, and both for next event prediction. Alongside the question, the models are given a candidate answer from the multiple options and generate a logit for that particular candidate. During the training process, we employ the cross-entropy loss to fine-tune our models, aiming to predict the correct candidate. We experiment with both Turkish-specific (i.e. BERTurk and DistilBERTurk Schweter (2020)) and multilingual (i.e. XLM Conneau et al. (2020)) Transformer encoder models, as described in Appendix A.2. We use the standard metric, accuracy, to measure the performance. In addition to fine-tuning, we employ the models in a zero-shot setting. ### Summarization Safaya et al. (2022) introduces large pre-trained text generation models fine-tuned on the Turkish news summarization datasets, presenting out-of-domain baselines for summarization. We further fine-tune the aforementioned models to generate the short descriptions (summaries) of the procedural tutorials (longer text bodies). We then test both the out-of-domain and in-domain procedural summarization models. Similarly, we experiment with both language-specific decoder models such as TR-BART Safaya et al. (2022), and multilingual decoder models such as mBART Liu et al. (2020) and mT5 Xue et al. (2021), described in Appendix A.3. We use the standard ROUGE metrics for evaluation. ## 5 Results and Discussion ### Linking Actions We give the main results for both the retrieval and reranking models in Table 5. We observe that our SimCSE-TR models discussed in Appendix C outperform other baselines by a large margin. Furthermore, multilingual models generally perform worse than Turkish-specific models, which is expected. Similarly, XLM-R based models trained on parallel data for 50 languages Conneau et al. (2020) generally perform worse than BERTurk-based models. Finally, we find that BM25 cannot be used in practical scenarios due to its low performance. In the reranking stage, we introduce the ground-truth goal into the candidates' list, initially generated by the top-performing retrieval model. This addition occurs randomly after the 10th candidate, allowing us to assess the impact of reranking models. This modification significantly enhances the R@10 metric. However, it is noteworthy that DistilBERTurk exhibits a decline in R@1 performance, indicating that while it can distinguish the ground truth goals from other candidates, its improvement is limited to R@10. Conversely, BERTurk demonstrates a boost in both R@1 and R@10 performances. The top-performing Turkish retrieval model achieves a comparable performance to the best-performing English retrieval model examined in Zhou et al. (2022). 
We attribute this similarity to the fact that the effectiveness of semantic similarity search remains consistent when the data and model quality levels are comparable across languages. However, it is worth noting that the best-performing Turkish reranking model exhibits a noticeable decline in performance compared to its English counterpart. We speculate that two factors contribute to this discrepancy: firstly, the English dataset is significantly larger than the Turkish dataset (21K vs. 1.7K), and secondly, the best-performing English reranking model, DeBERTa He et al. (2021), is larger in size compared to the best-performing Turkish reranking model, BERTurk.

\begin{table} \begin{tabular}{l r r r} \hline \hline **Model** & **R@1** & **R@10** & **R@30** \\ \hline XLM-R+NLI+STS & 0.2 & 0.9 & 1.1 \\ BM25 & 4.5 & 13.4 & 18.4 \\ BERTurk+NLI+STS & 9.3 & 17.3 & 24.3 \\ Unsup. SimCSE-TR\({}_{\text{XLM-R}}\) & 11.6 & 24.5 & 33.9 \\ XLM-R-XL-Paraphrase & 15.9 & 33.0 & 41.1 \\ S-XLM-R+NLI+STS & 17.0 & 31.6 & 40.7 \\ LaBSE & 19.8 & 32.0 & 40.0 \\ Sup. SimCSE-TR\({}_{\text{XLM-R}}\) & 25.9 & 42.7 & 54.1 \\ S-BERTurk+NLI+STS & 27.3 & 47.7 & 55.7 \\ Unsup. SimCSE-TR\({}_{\text{BERTurk}}\) & 31.4 & 52.0 & 61.4 \\ **Sup. SimCSE-TR\({}_{\text{BERTurk}}\)** & **33.4** & **55.7** & **67.3** \\ \hline + DistilBERTurk & 30.7 & 74.8 & — \\ + **BERTurk** & **40.5** & **78.9** & — \\ \hline \hline \end{tabular} \end{table} Table 5: The R@n indicates the percentage of the ground-truth goal being in the top-n candidates for a given step. The last two rows show the performances of the reranker models after including the gold goals in the top-30 candidates generated by the best-performing model, while the rest is retrieval only. We discuss the baseline models in Appendix A.

### 5.2 Multiple Choice Tasks We observe a common pattern for the goal inference, step inference, and next event prediction tasks5: BERTurk performs the best, XLM-R is a close runner-up to BERTurk, and DistilBERTurk performs slightly worse than XLM-R, as given in Table 6. In step ordering, DistilBERTurk performs slightly better than XLM-R. Footnote 5: While we manually check the performances of models with different random seeds, we only report the best run for all models, since the observed variances among different runs are small and would not cause any change in the rankings. Zero-shot performances of these models are on par with the random chance of guessing correctly, which means they cannot inherently understand the relationships between goal and step pairs, as well as step and step pairs. Furthermore, the zero-shot performances of XLM-R are noticeably worse than those of BERTurk and DistilBERTurk. We believe this is due to the multilingual nature of XLM-R, which is not specialized in Turkish, unlike BERTurk and DistilBERTurk. Significant improvements are observed with the fine-tuned models. The fine-tuned XLM-R model outperforms the fine-tuned DistilBERTurk model in all multiple choice tasks, except for step ordering. This observation suggests that the XLM-R model not only enhances its ability to select the correct
Our sampling strategy considers a broader range of parts of speech compared to the approach used by Zhang et al. (2020), resulting in candidates that are more similar at the embedding level and thereby increasing the difficulty. Additionally, while the performance decreases in goal inference, there is a slight improvement in step inference. This can be attributed to the fact that goals typically consist of less diverse parts of speech, mostly composed of a noun and a verb. As a result, the candidates sampled for goal inference tend to be more similar at the embedding level compared to step inference candidates, which often include additional parts of speech such as adjectives and adverbs. Although we do not practice adversarial filtering to create our next event prediction dataset, we believe our sampling strategy also presents its own challenges. While the results shared in Zellers et al. (2018, 2019) are significantly lower than those of our models, the leaderboards for SWAG6 and HelaSwag7 datasets show that the challenge adversarial filtering brings can be overcome. Considering these, our results given in Table 6 are significantly lower than their English counterparts, suggesting a large room for improvement. Footnote 6: [https://leaderboard.allenai.org/swag/submissions/public](https://leaderboard.allenai.org/swag/submissions/public) Footnote 7: [https://rowanzellers.com/hellaswag/](https://rowanzellers.com/hellaswag/) Additionally, we evaluate out-of-domain performances of some best-performing models to better understand their abilities in procedural tasks and find out their performances are generalizable to a certain extent, as discussed in Appendix F. ### Summarization The results are given in Table 7. As anticipated, in the summarization task, models that are fine-tuned on procedural summarization data outperform their out-of-domain fine-tuned counterparts. However, the performance improvement observed is relatively modest. We attribute this to the fact that the out-of-domain models still possess a robust capability acquired through their prior training on news summarization tasks. Additionally, the multilingual out-of-domain models demonstrate superior performance compared to the single Turkish-specific model, TR-BART. However, in the procedural summarization task, TR-BART exhibits a higher performance boost and performs marginally better than procedural mT5. Both out-of-domain and procedural mBART models outperform other models. We attribute this to substantial size difference of mBART, which gives it an advantage over the other models. When taking into account the model sizes and their multilingual capabilities, we conclude that both the specialization to Turkish and larger model sizes contribute to the overall performance improvement. However, our analysis reveals that a substantial difference in size can compensate for the multilingual aspect. This is evident in the comparison between out-of-domain and procedural TR-BART and mBART models, as presented in Table 7. ## 6 Conclusion PLU tasks encompass various skills such as semantic analysis, commonsense reasoning, and coreference resolution. 
However, PLU has been primarily explored in English, and the scarcity of language-specific resources limits its study in other languages. To address this gap, we present a case study in Turkish and introduce a centralized benchmark comprising six downstream tasks on procedural documents. We leverage machine translation tools and implement stringent quality control measures. We curate high-quality task data through language-specific filtering and manual annotation. Our experiments reveal that language-specific models tend to outperform multilingual models, but the model size is a critical factor. Tasks that involve rigorous language-specific preprocessing, such as goal inference, prove to be more challenging. Despite advancements, our best-performing models still lag behind their English counterparts, indicating large room for improvement. We release all resources publicly for further research.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Task & Goal Inference & Step Inference & Step Ordering & Next Event Prediction \\ \hline Random & 25.00 & 25.00 & 50.00 & 25.00 \\ \hline XLM-R ZS (125M) & 22.70 & 23.86 & 42.90 & 25.65 \\ DistilBERTurk ZS (66M) & 25.81 & 24.51 & 47.01 & 27.02 \\ **BERTurk ZS (110M)** & **26.52** & **27.45** & **49.46** & **32.82** \\ \hline DistilBERTurk FT (66M) & 66.19 & 85.78 & 70.13 & 83.66 \\ XLM-R FT (125M) & 69.30 & 87.42 & 68.17 & 85.95 \\ **BERTurk FT (110M)** & **72.40** & **91.34** & **72.09** & **88.55** \\ \hline \hline \end{tabular} \end{table} Table 6: Zero-shot and fine-tuned performances of XLM-R, DistilBERTurk, and BERTurk models on multiple choice tasks, evaluated using accuracy. FT indicates that the model is fine-tuned on the task-specific data and ZS indicates zero-shot performance.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **ROUGE-1** & **ROUGE-2** & **ROUGE-L** \\ \hline TR-BART OOD (120M) & 16.28 & 4.21 & 12.35 \\ mT5-base OOD (220M) & 17.09 & 4.53 & 13.05 \\ mBART OOD (680M) & 18.30 & 5.12 & 13.82 \\ \hline TR-BART PRO (120M) & 19.59 & 5.64 & 13.68 \\ mT5-base PRO (220M) & 19.30 & 5.33 & 14.42 \\ **mBART PRO (680M)** & **22.62** & **6.43** & **15.69** \\ \hline \hline \end{tabular} \end{table} Table 7: Out-of-domain fine-tuned and procedural fine-tuned performances of TR-BART, mBART, and mT5-base models on the summarization task.

### Limitations Our corpus creation method heavily relies on the success of the machine translation systems. However, such systems might have downfalls in specific cases. Local contexts and metrics are examples of such downfalls. We observe that some tutorials from the original Turkish wikiHow are localized, not directly translated. For instance, the Turkish counterpart of the tutorial titled "How to Lose 10 Pounds in 10 Days" is "10 Günde Nasıl 5 Kilo Verilir?" (How to Lose 5 Kilograms in 10 Days). In our case, Google Translate cannot distinguish these nuances. Since the translated portion of our corpus makes up the majority, our models might pick up the translation artifacts, which, in turn, diminishes their success in actually learning their objective tasks. mBART and mT5 models might generate biased summarizations, since they are previously trained on multilingual data and then fine-tuned on news summarization before being fine-tuned on procedural documents. The heavyweight fine-tuning and inference of mBART and mT5 set a natural limitation to their usage.
However, we mitigate this limitation with lightweight alternatives, such as half-precision floating point (FP16) training, optimization libraries, and gradient accumulation and checkpointing8.

Footnote 8: To the best of our knowledge, mT5 models currently cannot be trained with gradient checkpointing.

Lastly, the method we propose for creating procedural corpora in low-resource languages is implicitly dependent on the amount of resources available for a language, because machine translation systems might not work as well for some low-resource languages as they do for Turkish.

### Ethics Statement

We use the content of wikiHow, which allows the usage of its content under limited specific circumstances within the Creative Commons license. We abide by all the conditions required by the Creative Commons license. Its requirements also make possible the usage of the English wikiHow corpus that we translate. Since the majority of our corpus and datasets comes from translated tutorials, they might contain implicit biases due to the translation. Consequently, models trained on such data are also vulnerable to these biases.

## Acknowledgements

This work has been supported by the Scientific and Technological Research Council of Türkiye (TÜBİTAK) as part of the project "Automatic Learning of Procedural Language from Natural Language Instructions for Intelligent Assistance" with the number 121C132. We also gratefully acknowledge KUIS AI Lab for providing computational support. We thank our anonymous reviewers and the members of GGLab who helped us improve this paper. We especially thank Shadi Sameh Hamdan for his contributions to setting up the implementation environment.
2305.19479
A Shallow Water Model Exploration of Atmospheric Circulation on Sub-Neptunes: Effects of Radiative Forcing and Rotation Period
Sub-Neptune type exoplanets are abundant in our galaxy yet have no solar system analogs. They exist in a broad range of stellar forcing and rotational regimes that are distinctly different from solar system planets and more commonly studied hot Jupiters. Here we present simulations that explore global atmospheric circulation of sub-Neptunes generated with a two-dimensional shallow-water model, SWAMPE. We explore the circulation regimes of synchronously rotating sub-Neptunes with a focus on the interaction of planetary rotation rate and radiative timescale in a variety of stellar insolations. In highly irradiated, short-timescale regimes, our models exhibit high day-night geopotential contrasts. As the timescales become longer, the geopotential contrasts and longitudinal variability decrease, while temporal variability increases. The transition from day-to-night flow to jet-dominated flow is primarily driven by the radiative timescale. Strong- and medium-forcing regimes exhibit transitions between day-to-night flow and jet-dominated flow at similar points in the parameter space. Weak-forcing regime differs due to comparatively stronger rotational effects. Planetary rotation period dominates in determining equator-to-pole geopotential contrast. Our simulations exhibit higher time variability when either radiative timescale or rotation period is long.
Ekaterina Landgren, Alice Nadeau, Nikole Lewis, Tiffany Kataria, Peter Hitchcock
2023-05-31T01:14:02Z
http://arxiv.org/abs/2305.19479v1
A Shallow Water Model Exploration of Atmospheric Circulation on Sub-Neptunes: Effects of Radiative Forcing and Rotation Period

###### Abstract

Sub-Neptune type exoplanets are abundant in our galaxy yet have no solar system analogs. They exist in a broad range of stellar forcing and rotational regimes that are distinctly different from solar system planets and more commonly studied hot Jupiters. Here we present simulations that explore global atmospheric circulation of sub-Neptunes generated with a two-dimensional shallow-water model, SWAMPE. We explore the circulation regimes of synchronously rotating sub-Neptunes with a focus on the interaction of planetary rotation rate and radiative timescale in a variety of stellar insolations. In highly irradiated, short-timescale regimes, our models exhibit high day-night geopotential contrasts. As the timescales become longer, the geopotential contrasts and longitudinal variability decrease, while temporal variability increases. The transition from day-to-night flow to jet-dominated flow is primarily driven by the radiative timescale. Strong- and medium-forcing regimes exhibit transitions between day-to-night flow and jet-dominated flow at similar points in the parameter space. Weak-forcing regime differs due to comparatively stronger rotational effects. Planetary rotation period dominates in determining equator-to-pole geopotential contrast. Our simulations exhibit higher time variability when either radiative timescale or rotation period is long.

## 1 Introduction

Despite being some of the most common planet types in our galaxy (Deeg & Belmonte, 2018), sub-Neptunes, planets 1.6-4 times the radius of Earth, have no analogs in the solar system. This population of planets, which can be broken into two sub-groups, rocky super-Earths and gaseous mini-Neptunes, comprises prime candidates for atmospheric modeling that can reveal a variety of physical phenomena. In order to answer questions about the potential habitability of terrestrial exoplanets, it is important to develop a robust understanding of the dynamical processes that can take place in the atmospheres of these small planets. Sub-Neptunes exist in a wide phase space in terms of temperature (most of them falling in the 400-1200 K equilibrium temperature range (Crossfield & Kreidberg, 2017)), and are predicted to have a broad range of atmospheric chemical compositions (Kite et al., 2020). Investigating the atmospheric circulation and the mechanisms underlying global-scale flows of planets across the entire sub-Neptune parameter space and providing observational predictions can be time-consuming with high-complexity three-dimensional (3D) general circulation models, especially those that try to treat the full range of chemical and physical processes taking place in these atmospheres. On the other end of model complexity, many one-dimensional (1D) models can be insufficient for capturing the inherently 3D processes taking place in these atmospheres (e.g. Feng et al., 2016; Kataria et al., 2016; Line & Parmentier, 2016). For these reasons, two-dimensional (2D) shallow-water models are useful tools that can capture key dynamical processes despite being more simplified than 3D models. Such models represent the dynamics of a thin layer of a fluid of constant density and variable thickness.
The shallow-water equations are a coupled system of horizontal momentum and mass conservation, which govern the evolution of the zonal and meridional velocity components as well as the layer thickness. These equations are typically solved numerically as a function of longitude, latitude, and time. For a description of the shallow-water model focused on application to exoplanetary atmospheric circulation, see, e.g. Showman et al. (2010). There is a rich heritage of shallow-water models that have been used to study solar system planets. For example, shallow-water 2D GCMs have been used to study topography variation on Earth (Ferrari & Saleri, 2004; Kraucunas & Hartmann, 2007), planetary-scale waves on Venus (Iga & Matsuda, 2005), latitudinal transport and superrotation on Titan (Luz & Hourdin, 2003), and polar vortex dynamics on Jupiter, Saturn, Uranus, and Neptune (Brueshaber et al., 2019) as well as on Mars (Rostami & Zeitlin, 2017). Shallow-water models have been used to model the atmospheric phenomena of objects beyond the solar system as well. Perez-Becker & Showman (2013) explore the interaction of radiative and drag timescales using a 2D model, Zhang & Showman (2014) explore the circulation of brown dwarfs, Penn & Vallis (2018) apply the shallow-water framework to terrestrial exoplanets, Hammond & Pierrehumbert (2018) use it to consider tidally locked exoplanets, and Ohno & Zhang (2019) model nonsynchronized, eccentric-tilted exoplanets. The atmospheric circulation of sub-Neptunes has been probed in several studies using mainly 3D GCMs. Zhang & Showman (2017) use a 3D GCM to explore the effects of bulk composition on the atmospheres of sub-Neptunes, and find that the effect of molecular weight dominates in determining the day-night temperature contrast and the width of the superrotating jet. Wang & Wordsworth (2020) and Christie et al. (2022) study the atmospheric circulation of the sub-Neptune GJ 1214 b using the LMDZ GCM and the Met Office's UM GCM, respectively, and find that a long convergence time is needed to simulate long radiative timescales, and that high metallicity and clouds with large vertical extents are needed to match synthetic observations. Innes & Pierrehumbert (2022) investigate tidally locked, temperate (with Earth-like insolation) sub-Neptunes using the ExoFMS GCM and find that these atmospheres exhibit weak horizontal temperature gradients, high-latitude cyclostrophic jets, and equatorial superrotation at low pressures. However, these previous studies of sub-Neptunes are limited in scope, either to single planets or to narrow phase spaces. Using 2D models to explore the circulation of sub-Neptunes would enable a wider exploration of this parameter space. In this study, we use an open-source 2D shallow-water model, SWAMPE (Landgren & Nadeau, 2022), to investigate the atmospheric dynamics of sub-Neptunes. We contribute to the growing literature on sub-Neptunes by systematically investigating the interaction of planetary rotation period and radiative forcing and their effect on planetary atmospheric dynamics. Our goal is to explore the atmospheric circulation of sub-Neptunes and to identify the key differences in the dominant atmospheric circulation patterns of this type of planet compared to hotter, tidally locked Jovian atmospheres. While idealized, the shallow-water framework is well-suited for the analysis of the dynamics of large-scale atmospheric flow, which is the focus of this work. The paper is organized as follows.
In Section 2.1, we introduce our dynamical model and formulate the radiative forcing. In Section 2.2, we validate our model implementation against a hot Jupiter regime based on the planetary parameters of HD 189733b. In Section 2.3, we motivate the phase space we choose to explore in this investigation of sub-Neptunes. In Section 3, we present our simulation results and examine the global atmospheric flows and horizontal winds for strongly forced, medium forced, and weakly forced sub-Neptunes. In Section 4, we discuss our simulation results, highlighting general trends in heating patterns and zonal winds as well as phenomena of interest such as idiosyncratic spin-up behavior, oscillations, and temporal variability. Finally, we conclude in Section 5.

## 2 Methods

### Model

We study atmospheric circulation on sub-Neptunes using an idealized shallow-water model, SWAMPE (Shallow-Water Atmospheric Model in Python for Exoplanets, Landgren and Nadeau (2022)). The atmosphere is assumed to be a fluid that is incompressible and hydrostatically balanced. For a derivation of the shallow-water equations from the continuity equation and the equation of motion, see, e.g. Kaper and Engler (2013). As the name suggests, the shallow-water simplification of the Navier-Stokes equations assumes that the fluid (in our case, the planetary atmosphere) is spread in a thin layer (i.e. the horizontal scale is much larger than the vertical scale). The simplification of the full dynamics comes with several limitations. The model omits baroclinic dynamics, which affect heat redistribution, especially for fast rotators (Penn and Vallis, 2017). The shallow-water system employs a simple forcing term to account for stellar insolation and lacks the complexity of a more realistic radiative transfer scheme: the details of absorption, emission, and scattering are omitted. Additionally, the model does not incorporate frictional diffusivity, thermal radiation to space, or stratification. This idealized system can nevertheless capture large-scale phenomena on a sphere in response to gravitational and rotational accelerations. Within this framework, the wind patterns exhibited by the model can be explained by considering the interaction of stationary, planetary-scale waves with the mean flow (e.g. Showman and Polvani, 2011; Showman et al., 2013). These interactions are robust, and the dynamics captured by the shallow-water model are likely to translate into the full dynamics of the atmosphere. The Python code developed for this work is based on the method described by Hack and Jakob (1992) and is available for download on GitHub.1

Footnote 1: [https://github.com/kathlandgren/SWAMPE](https://github.com/kathlandgren/SWAMPE)

The idealized shallow-water system models the atmosphere as two layers: a buoyant upper layer at constant density \(\rho_{\rm upper}\) and an infinitely deep convective layer at a higher constant density \(\rho_{\rm lower}\). Sub-Neptunes are likely to have deep atmospheres (\(>10^{3}\) km, see e.g. Kite et al. (2020)), so the shallow-water framework is appropriate for modeling these types of exoplanets. Vertical transport is simplified in the model, represented only as mass transfer between layers. Hence we focus on modeling horizontal velocities in the upper layer.
The equations governing the top layer are: \[\frac{d\mathbf{V}}{dt}+f\mathbf{k}\times\mathbf{V}+\nabla\Phi =F_{\mathbf{V}}, \tag{1}\] \[\frac{d\Phi}{dt}+\Phi\nabla\cdot\mathbf{V} =F_{\Phi}, \tag{2}\] where \(\mathbf{V}=u\mathbf{i}+v\mathbf{j}\) is the horizontal velocity vector and \(\mathbf{i}\) and \(\mathbf{j}\) are the unit eastward and northward directions, respectively. The quantities \(F_{\mathbf{V}}\) and \(F_{\Phi}\) refer to the forcing terms in the momentum equation and mass continuity equation, respectively. In this work, these terms are tied to stellar insolation, but in general they represent any source or sink terms. The free surface geopotential is given by \(\Phi\equiv gh\) where \(g\) is the gravitational acceleration and \(h\) is geopotential height. The geopotential height represents the height of the pressure surface of the modeled upper layer. The geopotential can be separated into the constant component \(\overline{\Phi}=gH\) (where \(H\) is the scale height) and the time-dependent deviation \(\Phi\). We follow this convention here (following, e.g. Hack and Jakob, 1992; Perez-Becker and Showman, 2013). The geopotential forcing is represented by linear relaxation to the local radiative equilibrium state \(\Phi_{\rm eq}\), given by: \[\Phi_{\rm eq}=\begin{cases}\overline{\Phi}+\Delta\Phi_{\rm eq}\cos\lambda\cos \phi&\text{on the dayside},\\ \overline{\Phi}&\text{on the nightside},\end{cases} \tag{3}\] where \(\Delta\Phi_{\rm eq}\) is the maximal day-night contrast assuming the thickness of the upper layer in radiative equilibrium. The ratio \(\Delta\Phi_{\rm eq}/\overline{\Phi}\) serves as a proxy for the equilibrium day-night temperature contrast. The radiative equilibrium thickness is highest at the substellar point, which occurs at latitude \(0^{\circ}\), longitude \(0^{\circ}\). This equation represents the radiative heat the planet receives from its star, hence the forcing is only present on the dayside. Here we assume that sub-Neptune planets are orbiting close to their host stars and synchronously rotating with the rotational period equal to the orbital period (1:1). The bulk of the known sub-Neptune population has orbital periods of less than 30 days (Bean et al., 2021) that place them generally within the tidal locking distance of their host stars (e.g Barnes, 2017; Leconte et al., 2015). Hence the substellar point and the radiative equilibrium as a function of latitude and longitude are constant in time in our model. Linear relaxation to the local radiative equilibrium takes the following form: \[F_{\Phi}=\frac{\Phi_{\rm eq}-\Phi}{\tau_{\rm rad}}\equiv Q, \tag{4}\] where \(\tau_{\rm rad}\) is the radiative time scale. In this study, the timescale \(\tau_{\rm rad}\) is a free parameter. We give an overview of the values of \(\tau_{\rm rad}\) we sample in Section 2.3. If the geopotential is below the local radiative equilibrium, the forcing will act to increase it, and if the geopotential is above the local radiative equilibrium, the forcing will act to decrease it. In the shallow-water framework, regions of higher geopotential correspond to regions of heating, since mass is transferred into the upper layer in the presence of radiative forcing. In the momentum equation, the mass transfer between the two layers in the model is governed by the following expression: \[F_{\bf V}=\begin{cases}-\frac{Q{\bf V}}{\Phi}&Q>0,\\ 0&Q\leq 0.\end{cases} \tag{5}\] This term captures the effect of momentum advection from the lower layer on the upper layer. 
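To make the forcing concrete, the following is a minimal numpy sketch of Equations (3)-(5) on a latitude-longitude grid. This is an illustration written for this discussion, not SWAMPE's internal code: the grid construction, parameter values, and function names are our own choices.

```python
import numpy as np

# Illustrative grid (the paper's T42 grid is 128 x 64 in longitude x latitude)
nlat, nlon = 64, 128
lat = np.deg2rad(np.linspace(-88.0, 88.0, nlat))                    # phi
lon = np.deg2rad(np.linspace(-180.0, 180.0, nlon, endpoint=False))  # lambda
LON, LAT = np.meshgrid(lon, lat)

Phi_bar = 4.0e6          # reference geopotential, m^2 s^-2
dPhi_eq = 4.0e6          # day-night amplitude; dPhi_eq / Phi_bar = 1 is "strong forcing"
tau_rad = 0.1 * 86400.0  # radiative timescale, s

# Equation (3): radiative-equilibrium geopotential, with the bump on the dayside only
dayside = np.cos(LON) > 0.0  # substellar point at (0, 0)
Phi_eq = Phi_bar + np.where(dayside, dPhi_eq * np.cos(LON) * np.cos(LAT), 0.0)

def forcing(Phi, U, V):
    """Equations (4)-(5): relaxation Q and the momentum forcing components."""
    Q = (Phi_eq - Phi) / tau_rad                # Eq. (4), linear relaxation
    heating = Q > 0.0                           # momentum forcing acts only where heating occurs
    F_U = np.where(heating, -Q * U / Phi, 0.0)  # Eq. (5), zonal component
    F_V = np.where(heating, -Q * V / Phi, 0.0)  # Eq. (5), meridional component
    return Q, F_U, F_V
```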
The term \(F_{\bf V}\) captures the mass transferred from the lower layer to the upper layer through heating (\(Q>0\)). The mass transfer from the upper layer to the lower layer due to cooling, on the contrary, does not affect the specific momentum of the upper layer, and so \(F_{\bf V}\) is zero wherever \(Q<0\). The expression for \(F_{\bf V}\) used here corresponds to the term \({\bf R}\) used in Perez-Becker and Showman (2013); Showman and Polvani (2011); Showman et al. (2012); Shell and Held (2004). We solve Equations 1 and 2 using a method based on Hack and Jakob (1992) implemented in Python. The model employs a spectral method with a global, spherical geometry, which performs differentiation using a truncated sequence of spherical harmonics. The non-linear terms are evaluated in physical space, and are then transformed into the wavenumber (spectral) space using a spectral transform, which is a composition of the Fast Fourier Transform and Gaussian quadrature. In wavenumber space, linear terms and derivatives are computed. Time-stepping is performed and prognostic variables are evaluated in spectral space. The prognostic variables are then transformed back into physical space, where the diagnostic and non-linear terms are evaluated. Following Hack and Jakob (1992), we use Gaussian latitudes (\(\mu=\sin\phi\)), uniformly spaced longitude, and \(U=u\cos\phi\), \(V=v\cos\phi\) for zonal and meridional winds, since Robert (1966) showed that the variables \(U\) and \(V\) are better suited to computations in spectral space. While the quantities of interest are the geopotential \(\Phi\) and the horizontal wind vector field \({\bf V}\), we rely on the equivalent vorticity/divergence form of the shallow-water equations, where the prognostic variables are geopotential \(\Phi\), absolute vorticity \(\eta\), and divergence \(\delta\). In this form, the shallow-water equations have terms that have simple representations in spectral space. The zonal and meridional wind components \(U\) and \(V\) become diagnostic variables, albeit readily determined from \(\eta\) and \(\delta\) at any time step. In physical space, the variables can be represented as functions of longitude, latitude, and time: prognostic variables \(\Phi(\lambda,\phi,t)\), \(\eta(\lambda,\phi,t)\), \(\delta(\lambda,\phi,t)\), and diagnostic variables \(U(\lambda,\phi,t)\), \(V(\lambda,\phi,t)\). Note that there is no explicit dependence on pressure, since the shallow-water system is a barotropic formulation. Readers interested in the derivation of the vorticity/divergence form and the diagnostic relationship should consult, e.g. Hack and Jakob (1992). We use the spectral truncation T42, corresponding to a spatial resolution of \(128\times 64\) in longitude and latitude, with \(2.81^{\circ}\) cell size. Given that we wish to study global-scale flows in sub-Neptune atmospheres, we opt to use the truncation which provides the best balance between computational efficiency and spatial resolution. Previous studies using GCMs and shallow-water models agree that the dynamics on tidally-locked exoplanets are global in scale (e.g. Showman and Polvani, 2011; Showman et al., 2015; Heng et al., 2011; Perez-Becker and Showman, 2013; Zhang and Showman, 2017). We provide scale analysis for the simulations in our ensemble in Section 4. The equations are integrated using a modified Euler's method time-stepping scheme, rather than a more common semi-implicit scheme, due to stability considerations (following Langton, 2008). 
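As a sketch of the grid just described, Gauss-Legendre quadrature nodes supply the Gaussian latitudes \(\mu=\sin\phi\), and uniformly spaced longitudes complete the \(128\times 64\) T42 grid. Again, this is illustrative numpy, not SWAMPE's implementation:

```python
import numpy as np

# 64 Gauss-Legendre nodes in mu = sin(phi) are the Gaussian latitudes of a T42 grid;
# the associated weights make the Legendre part of the spectral transform exact.
mu, w = np.polynomial.legendre.leggauss(64)
gauss_lat_deg = np.rad2deg(np.arcsin(mu))
lon_deg = np.linspace(0.0, 360.0, 128, endpoint=False)  # uniformly spaced longitudes

def sphere_mean(F):
    """Area-weighted mean of a field F(lat, lon) via Gaussian quadrature."""
    # The integral over mu is sum(w * .), with total weight 2; longitudes average uniformly.
    return float((w[:, None] * F).sum() / (2.0 * F.shape[1]))
```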
We apply two filters to further ensure numerical stability: the modal splitting filter (Hack and Jakob, 1992) and a \(\nabla^{6}\) hyperviscosity filter (based on Gelb and Gleeson, 2001). Models are integrated from an initially flat layer (no deviation from \(\overline{\Phi}\)) and zero winds. We select a time step that meets the CFL condition (Boyd, 2001). In general, simulations of highly irradiated planets require a shorter time step due to higher wind speeds. The time steps used in our simulations range from 30 s to 180 s. Each model is integrated for 1000 Earth days. We monitor the RMS wind speeds of each simulation to ensure that they reach a steady state, which typically occurs between 50 and 100 simulated days but may take longer for simulations of weakly forced planets. Our model is consistent with the shallow-water framework as it has been previously applied to hot Jupiters. Specifically, the momentum advection term \(F_{\mathbf{V}}\) matches the term \(\mathbf{R}\) used in Shell & Held (2004); Showman & Polvani (2011); Showman et al. (2013). The momentum equation (1) and the mass-continuity equation (2) are consistent with the form given in Vallis (2017). These equations also reduce to the form given in Perez-Becker & Showman (2013) for a regime where the atmospheric drag timescale \(\tau_{\rm drag}=\infty\), equivalent to the absence of atmospheric drag. Perez-Becker & Showman (2013) have explored how the addition of atmospheric drag interacts with the radiative timescale. Since the focus of this work is on the interplay of radiative timescale and rotation period, which has not yet been explored, we have chosen to omit the drag term. We assume that the modeled layer is sufficiently deep that there is no interaction with the planetary surface. Note also that while Perez-Becker & Showman (2013) use the geopotential height as the prognostic variable, the form used in our model is equivalent, since the geopotential height and the geopotential differ only by a factor of reduced gravity \(g\). To demonstrate the agreement of our model with previous implementations, we model a hot Jupiter in the following section.

### Model validation

In order to validate our model, we replicate three hot Jupiter regimes from Perez-Becker & Showman (2013), shown in Figure 1, which matches the three middle panels in the top row of Figure 3 in Perez-Becker & Showman (2013). While the model we are replicating includes the effects of atmospheric drag, in this regime \(\tau_{\rm drag}=\infty\), which corresponds to the absence of drag. We have chosen the color bar to match that of Figure 3 in Perez-Becker & Showman (2013). The other parameters, \(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\) (high contrast regime), geopotential at scale height \(\overline{\Phi}=gH=4\times 10^{6}\) m\({}^{2}\) s\({}^{-2}\), planetary radius \(a=8.2\times 10^{7}\) m, and rotation frequency \(\Omega=3.2\times 10^{-5}\) s\({}^{-1}\) (equivalent to \(P_{\rm rot}=2.27\) days), were selected in Perez-Becker & Showman (2013) based on those of HD 189733b. Our simulations match the expected behavior, replicating the transition from shorter \(\tau_{\rm rad}\), where the planetary behavior resembles the radiative forcing profile accompanied by a day-to-night flow, to longer \(\tau_{\rm rad}\), where the longitudinal gradients are small and the structure exhibits a hot equatorial band.

### Parameter regimes

The modeling phase space of sub-Neptunes is vast.
For example, the population of sub-Neptunes detected by _Kepler_ exhibits orbital periods ranging from 0.4 to 300 Earth days (Bean et al., 2021). These planets exhibit a variety of possible atmospheric compositions and orbit a range of host stars. Therefore we vary the radiative timescale to capture the breadth of possible atmospheric compositions, and we vary the intensity of incoming stellar radiation to model the possible range of stellar host types and orbital distances. Under the constraint of synchronous rotation, which necessitates planets to be orbiting within the tidal locking distance, we focus on the interplay of rotation period with the radiative timescale and the strength of insolation.

Figure 1: Recreation of a hot Jupiter regime in Figure 3 in Perez-Becker & Showman (2013). Shown in the figure are maps of steady-state geopotential contours and the wind vector fields for the equilibrated solutions of the shallow-water model. The planetary parameters are as follows: planetary radius \(a=8.2\times 10^{7}\) m, planetary rotation rate \(\Omega=3.2\times 10^{-5}\) rad/s (equivalent to \(P_{\rm rot}=2.27\) days), no drag, reference geopotential \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\) s\({}^{-2}\), for strong radiative forcing (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\)). Three radiative time scales are depicted: \(\tau_{\rm rad}=0.1\) days (left panel), \(\tau_{\rm rad}=1\) day (center panel), and \(\tau_{\rm rad}=10\) days (right panel). We have subtracted the constant value of \(\overline{\Phi}\) from each panel. Panels share the same color scale for geopotential, but winds are normalized independently in each panel. The substellar point is located at \((0^{\circ},0^{\circ})\) for each panel.

While sub-Neptune planets range in size from 1.6 to 4 times the radius of Earth, here we select a representative sub-Neptune planetary radius equal to three Earth radii: \(a=3a_{\oplus}=1.91\times 10^{7}\) m (Bean et al., 2021). The reduction in radius compared to a Jupiter-sized planet \(R_{J}=8.2\times 10^{7}\) m leads to significant changes in the atmospheric circulation by affecting the advective timescale and the vorticity gradient. The advective timescale \(\tau_{\rm adv}\approx a/U\) decreases as the planetary radius decreases, in turn increasing the efficiency of heat transport and resulting in lower overall temperature contrasts. While the Coriolis parameter \(f=2\Omega\sin\phi\) does not depend on the planetary radius \(a\), its meridional gradient \(\beta=2\Omega\cos\phi/a\) is inversely proportional to the planetary radius. The change in the meridional gradient affects the formation and velocity of planetary-scale Rossby waves. The Rossby waves form as a response to the potential vorticity gradient, which is caused by differential rotation (different latitudes moving at different velocities). The phase speed of the Rossby waves in the zonal direction is westward relative to the mean zonal flow and directly proportional to the meridional gradient of the Coriolis parameter \(\beta\). The phase speed of the Rossby wave relative to the mean zonal flow is \(-\beta/(k^{2}+l^{2})\), where \(k\) and \(l\) are the zonal and meridional wavenumbers, respectively (see, e.g. Vallis, 2006). Therefore we expect that decreasing the radius would lead to the rise of faster westward Rossby waves compared to the hot Jupiter regime.
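The expected speed-up of the westward Rossby waves can be illustrated directly from these formulas. In the sketch below (our own illustration, not from the paper), the rotation rate matches the validation regime and the dimensional wavelength is held fixed at an arbitrary 20,000 km while the radius is switched between a hot Jupiter value and the sub-Neptune value:

```python
import numpy as np

Omega = 3.2e-5               # rad/s, rotation rate of the validation regime
phi0 = np.deg2rad(20.0)      # illustrative reference latitude
k = l = 2.0 * np.pi / 2.0e7  # illustrative wavenumbers (20,000 km wavelength)

for name, a in [("hot Jupiter", 8.2e7), ("sub-Neptune", 1.91e7)]:
    beta = 2.0 * Omega * np.cos(phi0) / a  # meridional gradient of the Coriolis parameter
    c = -beta / (k**2 + l**2)              # Rossby phase speed relative to the mean zonal flow
    print(f"{name}: beta = {beta:.2e} m^-1 s^-1, c = {c:.1f} m/s")
# Shrinking the radius raises beta and so yields faster westward phase speeds
# at fixed dimensional wavenumber.
```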
The reference geopotential \(\overline{\Phi}\) comes from separating the geopotential into the time-invariant spatial mean \(\overline{\Phi}\) and the time-dependent deviation from the mean \(\Phi\) (for a derivation from the continuity and momentum equations, see, e.g. Hack & Jakob (1992)). The reference geopotential \(\overline{\Phi}\) can additionally be thought of as the geopotential at scale height \(H\), \(\overline{\Phi}=gH\), as in Perez-Becker & Showman (2013). We nominally select \(g=9.8\) ms\({}^{-2}\), but it should be noted that \(g\) is included as a factor in the geopotential and does not appear separately in the model. Within the context of a shallow-water model, different values of \(\overline{\Phi}\) can be interpreted as probing atmospheric layers with different scale heights, which in turn depend on the mean molecular weight of the atmosphere and the average temperature of the layer. The sample of reference geopotentials \(\overline{\Phi}\), varying from \(4\times 10^{6}\) to \(1\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), is calculated based on a representative sample of equilibrium temperatures ranging from 400 K to 1200 K and mean molecular weights ranging from 2.3 to 5.5. This range of mean molecular masses corresponds to atmospheric metallicities ranging from 1 to 200 times solar values (e.g. Goyal et al., 2019), which is consistent with the expected range of values for exoplanets with masses between three and thirty Earth masses (Wakeford & Dalba, 2020; Welbanks et al., 2019). We give the details of this calculation in Section 2.4. Our model is agnostic of atmospheric composition, but we have sampled a broad range of values of \(\overline{\Phi}\), which incorporate the dependence on atmospheric composition and temperature via the scale height. Further, we have modeled those layers under a variety of forcing conditions, which are related to the photospheric opacities and stellar insolation that set photospheric pressures and temperatures. We vary the planetary rotation period \(P_{\rm rot}\) to be 1, 5, and 10 Earth days to reflect a variety of plausible rotation periods of synchronously rotating sub-Neptunes (Yang et al., 2014). We assume \(P_{\rm rot}\) equal to the orbital period \(P_{\rm orb}\). Figure 2 illustrates the equilibrium temperatures in the sampled region as a function of planetary rotation period. All the planets in our sample lie within the tidal locking distance (varying from 0.1 to 0.8 AU). Their short orbital periods imply that our simulations could be matched to plausible targets for atmospheric characterization observations with facilities like JWST. Our samples of reference geopotentials and rotation periods fall within a plausible window for a variety of potential insolations. Gravity is held constant at \(g=9.8\) ms\({}^{-2}\). While this value is based on Earth, there is reason to believe that this gravity value might be representative of the known exoplanet population (Ballesteros & Luque, 2016). This value is the reduced gravity, which measures the restoring force at the interface between the two layers. We follow Perez-Becker & Showman (2013) in their interpretation of \(g\) in the shallow-water context. The radiative timescale \(\tau_{\rm rad}\) is the time for gas parcels to reach local radiative equilibrium.
We follow Showman & Guillot (2002), where \[\tau_{\rm rad}\sim\frac{\Delta p}{g}\frac{c_{p}}{4\sigma T^{3}}, \tag{6}\] where \(\Delta p\) is the thickness of the pressure layer in Pascals, \(c_{p}\) is the specific heat capacity at pressure \(p\), \(\sigma\) is the Stefan-Boltzmann constant, and \(T\) is the temperature of the layer. The radiative timescale represents how quickly the parcels adjust to radiative changes. While \(\tau_{\rm rad}\) is a free parameter in this study, varying the radiative timescale between 0.1, 1, and 10 Earth days corresponds to probing a variety of pressure level and temperature combinations. While the radiative time constant \(\tau_{\rm rad}\) decreases sharply with increasing temperature, the estimate breaks down in deep, optically thick atmospheres; deep inside hot Jupiter atmospheres, for example, \(\tau_{\rm rad}\) is long. Our samples span the range from a few hours, typical for hot Jupiters (Showman et al., 2010), to a few days, approximately corresponding to Earth (Showman et al., 2008). Showman & Guillot (2002) predicted that on hot Jupiters, large day/night fractional differences will persist when the radiative timescale \(\tau_{\rm rad}\) is much shorter than the advective timescale \(\tau_{\rm adv}\), and small day/night fractional differences will appear when the radiative timescale is much longer than the advective timescale. Our sample of radiative timescales allows us to explore the relationship between radiative timescale and day/night contrast in the context of sub-Neptunes. We investigate three different relative radiative forcing amplitudes, which prescribe the radiative equilibrium temperature differences between the dayside and the nightside. We let \(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\), 0.5, and 0.1, corresponding to hot, warm, and cool sub-Neptune planets. In the context of hot Jupiters, the high-contrast regime has been explored by Perez-Becker & Showman (2013), and the medium-contrast regime has been explored by Liu & Showman (2013). Both of the above studies also include a low-contrast regime, with \(\Delta\Phi_{\rm eq}/\overline{\Phi}\) varying from as low as 0.001 to 0.1, corresponding to the lowest forcing amplitude in our study. In conjunction with our choices of reference geopotential \(\overline{\Phi}\), the choice of three forcing amplitudes allows us to span the vast parameter space of potential stellar forcings and atmospheric scale heights with our simplified model.

### Motivation for the choices of \(\overline{\Phi}\)

We have chosen the mean geopotential \(\overline{\Phi}\) by calculating the scale heights for a variety of equilibrium temperatures and planetary atmospheric compositions. While the shallow-water model is agnostic about atmospheric compositions, we have selected the sampled scale heights and forcing amplitudes to encompass a variety of plausible scenarios. We select the mean geopotential to correspond to the scale height \(H\), which we compute as follows: \[\overline{\Phi}=gH=\frac{kT_{\rm eq}}{m}, \tag{7}\] where \(k\) is the Boltzmann constant, \(T_{\rm eq}\) is the equilibrium temperature, and \(m\) (kg/molecule) is the mean molecular mass of the atmospheric constituents.
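Equations (6) and (7) are straightforward to evaluate. The short script below (our own sketch; the layer thickness and heat capacity are illustrative values, not ones quoted in the paper) reproduces the endpoints of the sampled \(\overline{\Phi}\) range:

```python
import numpy as np

k_B   = 1.380649e-23     # Boltzmann constant, J/K
m_H   = 1.6735575e-27    # hydrogen atom mass, kg
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def phi_bar(T_eq, mu):
    """Equation (7): reference geopotential g*H = k*T_eq / m."""
    return k_B * T_eq / (mu * m_H)

def tau_rad_seconds(delta_p, g, c_p, T):
    """Equation (6): radiative timescale estimate."""
    return (delta_p / g) * c_p / (4.0 * sigma * T**3)

print(f"{phi_bar(1200.0, 2.3):.2e}")  # ~4.3e6 m^2 s^-2: hot, H/He-dominated endpoint
print(f"{phi_bar(400.0, 5.5):.2e}")   # ~6.0e5 m^2 s^-2: cool, metal-rich endpoint

# Illustrative timescale: a 10^4 Pa layer of H2-rich gas (c_p ~ 1.3e4 J kg^-1 K^-1) at 800 K
print(tau_rad_seconds(1.0e4, 9.8, 1.3e4, 800.0) / 86400.0)  # ~1.3 days
```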
The range of scale heights was computed using temperatures in the \(400-1200\) K range and mean molecular weights of 2.3-5.5 times the mass of a hydrogen atom, corresponding to 1-200 multiples of solar metallicity. The resulting values for \(\overline{\Phi}\) were relatively evenly spread out between \(6\times 10^{5}\) m\({}^{2}\)s\({}^{-2}\) and \(4.3\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), so the samples of \(10^{6}\) m\({}^{2}\)s\({}^{-2}\), \(2\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), \(3\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), and \(4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) efficiently sample the parameter space, despite combining two parameters into one. The models at high \(\overline{\Phi}\) can be interpreted as corresponding to planets with a hydrogen-dominated atmosphere, similar to that of Jupiter. A decrease in \(\overline{\Phi}\) can be thought of as a transition to a more metal-rich atmosphere. While the strength of stellar forcing is not independent of the reference geopotential in our model, our simulations span three forcing regimes for each scale height, allowing us to explore the effects of scale height in the context of the shallow-water model.

Figure 2: Planetary equilibrium temperature as a function of rotation period for planets around K0 (purple), K5 (blue), M0 (yellow), and M5 (red) stars. The chosen grid of rotation periods of 1, 5, and 10 days and of planetary equilibrium temperatures of 400, 800, and 1200 K generates a realistic sample. The sample region is highlighted in grey. The stellar parameters were taken from Zombeck (1990).

## 3 Results

### Extension to Sub-Neptunes

\begin{table}
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{4}{|c|}{Strong forcing} \\
\hline
 & \(\tau_{\rm rad}=0.1\) days & \(\tau_{\rm rad}=1\) day & \(\tau_{\rm rad}=10\) days \\
\hline
\(P_{\rm rot}=1\) day & \(\mathbf{V}_{\rm max}\approx 1700\) ms\({}^{-1}\); \(\Delta\Phi\approx 1.1\times 10^{6}\); Ro \(=0.62\) & \(\mathbf{V}_{\rm max}\approx 300\) ms\({}^{-1}\); \(\Delta\Phi\approx 5.1\times 10^{3}\); Ro \(=0.11\) & \(\mathbf{V}_{\rm max}\approx 160\) ms\({}^{-1}\); \(\Delta\Phi\approx 3.4\times 10^{2}\); Ro \(=0.06\) \\
\hline
\(P_{\rm rot}=5\) days & \(\mathbf{V}_{\rm max}\approx 900\) ms\({}^{-1}\); \(\Delta\Phi\approx 5.7\times 10^{5}\); Ro \(=1.59\) & \(\mathbf{V}_{\rm max}\approx 570\) ms\({}^{-1}\); \(\Delta\Phi\approx 2.4\times 10^{5}\); Ro \(=1.03\) & \(\mathbf{V}_{\rm max}\approx 80\) ms\({}^{-1}\); \(\Delta\Phi\approx 8.4\times 10^{2}\); Ro \(=0.15\) \\
\hline
\(P_{\rm rot}=10\) days & \(\mathbf{V}_{\rm max}\approx 600\) ms\({}^{-1}\); \(U_{\rm max}\approx 520\) ms\({}^{-1}\); \(\Delta\Phi\approx 7.2\times 10^{4}\) & \(\mathbf{V}_{\rm max}\approx 330\) ms\({}^{-1}\); \(\Delta\Phi\approx 1.1\times 10^{5}\); Ro \(=1.17\) & \(\mathbf{V}_{\rm max}\approx 145\) ms\({}^{-1}\); \(\Delta\Phi\approx 7.1\times 10^{3}\); Ro \(=0.53\) \\
\hline
\multicolumn{4}{|c|}{Medium forcing} \\
\hline
 & \(\tau_{\rm rad}=0.1\) days & \(\tau_{\rm rad}=1\) day & \(\tau_{\rm rad}=10\) days \\
\hline
\(P_{\rm rot}=1\) day & \(\mathbf{V}_{\rm max}\approx 1200\) ms\({}^{-1}\); \(\Delta\Phi\approx 6.4\times 10^{5}\); Ro \(=0.44\) & \(\mathbf{V}_{\rm max}\approx 160\) ms\({}^{-1}\); \(\Delta\Phi\approx 7.1\times 10^{3}\); Ro \(=0.06\) & \(\mathbf{V}_{\rm max}\approx 120\) ms\({}^{-1}\); \(\Delta\Phi\approx 3.4\times 10^{2}\); Ro \(=0.04\) \\
\hline
\(P_{\rm rot}=5\) days & \(\mathbf{V}_{\rm max}\approx 600\) ms\({}^{-1}\); \(U_{\rm max}\approx 675\) ms\({}^{-1}\); \(\Delta\Phi\approx 7.2\times 10^{4}\); Ro \(=1.42\) & \(\mathbf{V}_{\rm max}\approx 490\) ms\({}^{-1}\); \(\Delta\Phi\approx 2.4\times 10^{4}\); Ro \(=0.87\) & \(\mathbf{V}_{\rm max}\approx 80\) ms\({}^{-1}\); \(\Delta\Phi\approx 2.0\times 10^{2}\); Ro \(=0.15\) \\
\hline
\(P_{\rm rot}=10\) days & \(\mathbf{V}_{\rm max}\approx 430\) ms\({}^{-1}\); \(\Delta\Phi\approx 1.3\times 10^{4}\); Ro \(=1.53\) & \(\mathbf{V}_{\rm max}\approx 280\) ms\({}^{-1}\); \(\Delta\Phi\approx 8.7\times 10^{3}\); Ro \(=0.99\) & \(\mathbf{V}_{\rm max}\approx 200\) ms\({}^{-1}\); \(\Delta\Phi\approx 1.5\times 10^{4}\); Ro \(=0.71\) \\
\hline
\multicolumn{4}{|c|}{Weak forcing} \\
\hline
 & \(\tau_{\rm rad}=0.1\) days & \(\tau_{\rm rad}=1\) day & \(\tau_{\rm rad}=10\) days \\
\hline
\(P_{\rm rot}=1\) day & \(\mathbf{V}_{\rm max}\approx 330\) ms\({}^{-1}\); \(\Delta\Phi\approx 4.1\times 10^{4}\); Ro \(=0.10\) & \(\mathbf{V}_{\rm max}\approx 50\) ms\({}^{-1}\); \(\Delta\Phi\approx 1.4\times 10^{3}\); Ro \(=0.02\) & \(\mathbf{V}_{\rm max}\approx 60\) ms\({}^{-1}\); \(\Delta\Phi\approx 6\times 10\); Ro \(=0.02\) \\
\hline
\(P_{\rm rot}=5\) days & \(\mathbf{V}_{\rm max}\approx 440\) ms\({}^{-1}\); \(\Delta\Phi\approx 5.3\times 10^{4}\); Ro \(=0.79\) & \(\mathbf{V}_{\rm max}\approx 80\) ms\({}^{-1}\); \(\Delta\Phi\approx -5\times 10\); Ro \(=0.16\) & \(\mathbf{V}_{\rm max}\approx 100\) ms\({}^{-1}\); \(\Delta\Phi\approx 2.4\times 10^{2}\); Ro \(=0.20\) \\
\hline
\(P_{\rm rot}=10\) days & \(\mathbf{V}_{\rm max}\approx 310\) ms\({}^{-1}\); \(\Delta\Phi\approx 1.4\times 10^{4}\); Ro \(=1.13\) & \(\mathbf{V}_{\rm max}\approx 160\) ms\({}^{-1}\); \(\Delta\Phi\approx 8.0\times 10^{3}\); Ro \(=0.82\) & \(\mathbf{V}_{\rm max}\approx 100\) ms\({}^{-1}\); \(\Delta\Phi\approx 6\times 10\); Ro \(=0.37\) \\
\hline
\end{tabular}
\end{table} Table 1: Typical maximal wind speeds, day/night geopotential contrast (in m\({}^{2}\)/s\({}^{2}\)), and Rossby number for a representative scale height \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)/s\({}^{2}\), based on temporal averages. The geopotential contrast is computed by subtracting the nightside average from the dayside average. Where the maximal wind speed \(\mathbf{V}_{\rm max}\) differs notably from the maximal zonal wind speed, \(U_{\rm max}\) is listed as well.

After benchmarking our model in the hot Jupiter regime, we transition the model toward a regime appropriate for sub-Neptunes by first decreasing the planetary radius \(a\). We decrease the planetary radius from that of Jupiter to equal three times the Earth radius (\(a=1.9\times 10^{7}\) m) while keeping the rest of the parameters identical to our validation regime: a hot Jupiter based on HD 189733b, explored in Section 2.2. This change bridges the previously explored hot Jupiter regimes and the sub-Neptune regimes, which are the focus of this work. Compared to our replication of a hot Jupiter regime (shown in Figure 1), reducing the planetary radius preserves the transition from day-to-night flow for short radiative timescales (0.1 days) to a jet-dominated flow for long radiative timescales (10 days).
However, the sub-Neptune simulations shown in Figure 3 exhibit lower geopotential height contrast than the hot-Jupiter simulations, both in the zonal direction (between the dayside and the nightside) and in the meridional direction (between the equator and the pole). We also observe that for short and medium values of \(\tau_{\rm rad}\) (0.1 and 1 days, left and center panels, respectively), the nightside cyclonic vortices are more confined in latitude. For the short timescale \(\tau_{\rm rad}=0.1\) days, the vortices extend from \(\approx 10^{\circ}\) to \(\approx 50^{\circ}\) latitude, with wind speeds \(|\mathbf{V}|\approx 10^{3}\) ms\({}^{-1}\) and a geopotential contrast of \(\approx 10^{6}\) m\({}^{2}\)s\({}^{-2}\) within the vortices. For the medium radiative timescale \(\tau_{\rm rad}=1\) day, the vortices extend from \(\approx 15^{\circ}\) to \(\approx 70^{\circ}\) latitude, with wind speeds \(|\mathbf{V}|\approx 8\times 10^{2}\) ms\({}^{-1}\) and a geopotential contrast of \(\approx 6\times 10^{5}\) m\({}^{2}\)s\({}^{-2}\) within the vortices. While the rotation period \(P_{\rm rot}\) was not changed from the one used in Perez-Becker & Showman (2013), the change in planetary radius leads to an inversely proportional change in the beta parameter \(\beta=\frac{\partial f}{\partial y}=2\Omega\cos\phi_{0}/a\). The beta parameter is in turn proportional to the speed of Rossby waves. In the left panel, for \(\tau_{\rm rad}=0.1\) days, the vortices are centered at \(\sim 30^{\circ}\) latitude. In the center panel, where \(\tau_{\rm rad}=1\) day, the vortices are larger and farther from the equator, at \(\sim 40^{\circ}\) latitude, but exhibit lower geopotential contrast. The center panel simulation also exhibits a secondary, warmer pair of cyclonic vortices. This behavior is qualitatively different from the \(\tau_{\rm rad}=1\) day regime in Figure 1, where anticyclonic vortices appear on the dayside. In the right panel, where the radiative timescale \(\tau_{\rm rad}=10\) days, the simulation exhibits less equator-to-pole variation than the equivalent regime in Figure 1. Reducing the planetary radius by 75% has thus led to significant changes in global-scale atmospheric flow patterns and wind speeds, showing distinct differences in global circulation patterns between hot Jupiters and sub-Neptunes. We now proceed to model sub-Neptunes by performing a parameter sweep of rotation periods \(P_{\rm rot}\) and radiative timescales \(\tau_{\rm rad}\) in three forcing regimes: strong forcing (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\), Section 3.2), medium forcing (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=0.5\)), and weak forcing (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=0.1\)). The effects in response to changing forcing strength are discussed in Section 3.3. In Table 1, we summarize the maximal wind speeds \(\mathbf{V}_{\rm max}\) and the geopotential contrast between the dayside and the nightside. The maximal equatorial zonal speeds \(U_{\rm max}\) are similar to the maximal overall wind speeds. While our parameter sweep includes four scale height values, a representative scale height corresponding to \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) was chosen for the table.

### Strong forcing simulations

In this section, we explore what we call "strong forcing" simulations in the context of our ensemble. For these simulations, the prescribed local radiative equilibrium day-night forcing contrast is equal to the nightside geopotential at radiative equilibrium (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\)).
For strong forcing regimes, the equilibrium day-night contrast is of the same magnitude as the equilibrium nightside geopotential \(\overline{\Phi}\). Note that changing the scale height results in changing \(\Delta\Phi_{\rm eq}\), so in our formulation the stellar forcing depends on the value of \(\overline{\Phi}\). First we consider \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), corresponding to a large scale height. While our simplified shallow-water model is agnostic to the exact specification of the planetary composition, a large scale height could correspond, for example, to a predominantly H/He atmosphere with \(\approx 1\times\) solar metallicity. The resulting geopotential profile for this regime is shown in Figure 4. The corresponding mean zonal wind profiles are shown in Figure 5. The shortest-timescale regime (with radiative timescale \(\tau_{\rm rad}=0.1\) days and rotation period \(P_{\rm rot}=1\) day) exhibits the highest day-to-night contrast, highest equator-to-pole contrast, and highest wind speeds. The longest-timescale regime (with radiative timescale \(\tau_{\rm rad}=10\) days and rotation period \(P_{\rm rot}=10\) days) exhibits the lowest day-to-night contrast, lowest equator-to-pole contrast, and lowest wind speeds. We discuss these transitions in more detail in the following paragraphs. In the short-timescale regime shown in the top left panel of Figure 4, where the radiative timescale \(\tau_{\rm rad}=0.1\) days and the rotation period \(P_{\rm rot}=1\) day, strong forcing is combined with strong rotational effects. This regime exhibits high day-night contrast combined with high equator-to-pole contrast. We observe atmospheric superrotation with a strong eastward equatorial jet (the mean zonal wind speeds reach \(\sim 0.6\) km s\({}^{-1}\) at the equator) that shifts the dayside hotspot to the east. A combination of eastward Kelvin waves and westward Rossby waves results in equatorial superrotation that gives rise to a hotspot shape and offset from the substellar longitude characteristic of hot Jupiters (Showman et al., 2020). Indeed, this strongly forced, short radiative timescale regime is the closest to a hot Jupiter regime among the ones we explore. While on hot Jupiters the equatorial winds tend to be uniformly eastward, the simulation exhibits some regions of equatorial westward velocities, partly driven by the relative strength of the Coriolis force, which is higher for a smaller planet with \(P_{\rm rot}=1\) day. On the nightside, the simulation exhibits stationary polar vortices. In contrast, in the right column of Figure 4, the long radiative timescale \(\tau_{\rm rad}=10\) days results in the formation of a jet-dominated pattern with little longitudinal variation, regardless of the rotation period. The equator-to-pole variation is significantly lower than that of the prescribed radiative equilibrium. No vortices are present. For the short rotation period \(P_{\rm rot}=1\) day, this regime exhibits an oscillation between a hot equatorial band and two hot tropical bands. While this oscillation is not visible in Figure 4, we discuss it in more detail in Section 4.5. When the radiative timescale is at its intermediate value in our ensemble, \(\tau_{\rm rad}=1\) day, we observe the transition between the regime dominated by the day-night flow, which occurs for shorter radiative timescales, and the one dominated by the jet flow, which occurs for longer radiative timescales.
In the top center panel of Figure 4 (\(\tau_{\rm rad}=1\) day, \(P_{\rm rot}=1\) day), there is some longitudinal variation, but the hottest parts of the planet are no longer confined to the dayside. Polar vortices are present, extending to \(\approx 60^{\circ}\) latitude, centered at the terminator \(90^{\circ}\) east of the substellar point. While there is high equator-to-pole variation, the equator-to-pole contrast is significantly lower than that of the prescribed radiative equilibrium. The average zonal wind speeds exhibit peaks of \(\approx 150\) ms\({}^{-1}\) at \(\sim 40^{\circ}-50^{\circ}\) latitude; these wind speeds are much lower than the \(\approx 600\) ms\({}^{-1}\) eastward equatorial jet exhibited by the \(\tau_{\rm rad}=0.1\) days regime. As \(P_{\rm rot}\) increases from 1 to 5 days, the eastward equatorial jet becomes weaker, with lower associated mean zonal wind speeds. The models shown in the middle row of Figure 4 exhibit a robust day-to-night flow pattern. For \(\tau_{\rm rad}=0.1\) days (left column), compared to simulations with a shorter rotation period, the nightside cyclonic vortices are smaller and closer to the equator (centered at around \(\sim 20^{\circ}-30^{\circ}\) latitude). The cyclonic vortices correspond to the westward mean zonal winds around those latitudes. The equatorial zonal winds are eastward, and their amplitude is lower than for shorter rotation period values. In this regime, we observe some asymmetry around the equator, with the cyclonic vortex in the northern hemisphere being larger than in the southern one. The presence of momentum advection across the equator in the presence of symmetrical forcing for a zero-obliquity planet seems to indicate that a pitchfork bifurcation is taking place. We discuss this phenomenon in more detail in Section 4.4.

Figure 3: Recreation of a hot Jupiter regime in Figure 3 in Perez-Becker and Showman (2013), with the planetary radius turned down to \(a=3a_{\oplus}=1.91\times 10^{7}\) m. Shown are the geopotential contour maps and the wind vector fields. The remaining planetary parameters are as follows: planetary rotation rate \(\Omega=3.2\times 10^{-5}\) rad/s (equivalent to \(P_{\rm rot}=2.27\) days), no drag, reference geopotential \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), for strong radiative forcing (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\)). Three radiative time scales are depicted: \(\tau_{\rm rad}=0.1\) days (left panel), \(\tau_{\rm rad}=1\) day (center panel), and \(\tau_{\rm rad}=10\) days (right panel). We have subtracted the constant value of \(\overline{\Phi}\) from each panel. Panels share the same color scale for geopotential, but winds are normalized independently in each panel. The substellar point is located at \((0^{\circ},0^{\circ})\) for each panel.

When \(P_{\rm rot}=5\) days and \(\tau_{\rm rad}=1\) day, in the center panel of Figure 4, we observe some time variability. There is a pair of nightside cyclonic vortices, which migrate in both north-south and east-west directions (although always confined to the nightside) in an oscillation with a period of \(\approx 25\) days. Further increasing the rotation period \(P_{\rm rot}\) to 10 days, in the bottom center panel, the simulation exhibits migrating nightside cyclonic vortices with a period of \(\approx 25\) days. Compared to shorter \(P_{\rm rot}\), the equator-to-pole contrast is lower.
The presence of time variability and of momentum advection across the equator emerges only in a particular window of radiative forcing strength and rotation period. While high variability is generally associated with longer timescales, it should be noted that the longest-timescale regime (\(P_{\rm rot}=10\) days and \(\tau_{\rm rad}=10\) days) exhibits no time variability. Overall, in the strong-forcing, high-scale-height regime, the shift from short radiative timescales to longer ones corresponds to the shift from a day-to-night flow with nightside vortices to a jet-dominated flow, as well as from low time variability to higher time variability. Increasing the rotation period typically decreases the equator-to-pole contrast. The shift from \(P_{\rm rot}=1\) day to \(P_{\rm rot}=5\) days creates a more dramatic change in the atmospheric circulation patterns than the shift from \(P_{\rm rot}=5\) days to \(P_{\rm rot}=10\) days. As expected, the wind speeds are strongest (\(\approx 600\) ms\({}^{-1}\)) when both the radiative and rotational timescales are short, and weakest (\(\approx 10-20\) ms\({}^{-1}\)) when both timescales are longest. Most of our simulations in the high-forcing regime exhibit an equatorial jet pattern. However, when the radiative timescale is long and the rotation period is short (top center and top right in Figure 5), the average zonal wind speeds exhibit two tropical peaks. We discuss this transition in more detail in Section 4.2.

Figure 4: Latitude-longitude maps of the steady-state geopotential anomaly and wind vector fields for the strong forcing regime (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\)) at reference geopotential \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), averaged over the last 100 days of the simulation. The constant value of \(\overline{\Phi}\) has been subtracted from each panel. Each panel in the grid corresponds to a unique combination of planetary rotation period \(P_{\rm rot}\) and radiative timescale \(\tau_{\rm rad}\). Panels share the same color scale for geopotential, but winds are normalized independently in each panel. The substellar point is located at \((0^{\circ},0^{\circ})\) for each panel.

### Response to changing scale height and forcing strength

In this section, we present the trends in our simulations as we vary the scale height (set by \(\overline{\Phi}\)) and the local radiative equilibrium amplitude \(\Delta\Phi_{\rm eq}/\overline{\Phi}\). While the complete ensemble includes a range of radiative timescales and rotation periods, here we choose to highlight the shortest-timescale regime (with radiative timescale \(\tau_{\rm rad}=0.1\) days and rotation period \(P_{\rm rot}=1\) day) in order to examine the effects of forcing strength.2 Figure 6 shows the geopotential contours and Figure 7 shows the mean zonal wind profiles for each combination of forcing and reference geopotential \(\overline{\Phi}\). Note that as we reduce the scale height (and hence \(\overline{\Phi}\)), the magnitude of the forcing decreases, since \(\Delta\Phi_{\rm eq}\) scales with \(\overline{\Phi}\) at fixed \(\Delta\Phi_{\rm eq}/\overline{\Phi}\).

Footnote 2: We show the geopotential and mean-zonal wind grids for all scale heights and forcing regimes in our Zenodo repository (the DOI for the repository will be provided after peer review).

In the strong-forcing regime (left column of Figure 6), as we decrease the scale height, the patterns in the atmospheric dynamics remain qualitatively similar to those for \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), shown in Figure 4.
However, the east/west and north/south gradients become more defined as the scale height decreases. The mean zonal winds for \(\overline{\Phi}=3\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) are similar in speed and amplitude to those occurring at higher scale height, ranging from \(\approx 10-20\) ms\({}^{-1}\) for the longest timescales to \(\approx 600\) ms\({}^{-1}\) for the shortest timescales. As we continue to decrease the scale height to \(\overline{\Phi}=2\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), the qualitative behavior remains the same while the geopotential amplitude and the mean zonal wind speed decrease further, ranging from \(\approx 10-20\) ms\({}^{-1}\) for the longest timescales to \(\approx 500\) ms\({}^{-1}\) for the shortest timescales. For the lowest scale height \(\overline{\Phi}=1\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), the wind speeds vary from \(\approx 300\) ms\({}^{-1}\) for the shortest-timescale regime to \(\approx 10-20\) ms\({}^{-1}\) for the longest timescales regime. As the scale height decreases, the hot spot in the short time scales regime (where \(\tau_{\rm rad}=0.1\) days, \(P_{\rm rot}=1\) day shown in the left column of Figure 6) Figure 5: Mean-zonal winds \(\overline{U}\) for the steady-state solution for the strong forcing regime (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\)) at reference geopotential \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), averaged over the last 100 days of the simulation. Each panel in the grid corresponds to a unique combination of planetary rotation period \(P_{\rm rot}\) and radiative timescale \(\tau_{\rm rad}\). changes its shape from one matching hot Jupiters (top left panel) to a more elongated in the east-west direction due to rotational effects (bottom left panel). This change is driven by a decrease in Rossby deformation radius, which is proportional to the scale height and inversely proportional to the Coriolis parameter. Overall, decreasing the scale height results in decreased geopotential contrast and more pronounced rotational effects, both due to the relationship between scale height and forcing. In the middle column of Figure 6, we show the short-timescale simulations for the medium-forcing regime, where the prescribed local radiative equilibrium day-night contrast is equal to half of the reference geopotential (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=0.5\)). While the strong forcing regime (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=1\)) has been widely explored in literature in the context of hot Jupiters (e.g. Perez-Becker & Showman, 2013; Showman & Polvani, 2011), the medium forcing regime has not been investigated as thoroughly (but appears, e.g., in Liu & Showman, 2013). This regime is relevant to the modeling of exoplanet Figure 6: Latitude-longitude maps of the steady-state geopotential anomaly and wind vector fields for the shortest timescales regime (radiative timescale \(\tau_{\rm rad}=0.1\) days, rotation period \(P_{\rm rot}=1\) day), averaged over the last 100 days of the simulation. Each panel in the grid corresponds to a unique combination of forcing strength \(\Delta\Phi_{\rm eq}/\overline{\Phi}\) and reference geopotential \(\overline{\Phi}\). The wind vectors are normalized independently in each panel. The substellar point is located at \((0^{\circ},0^{\circ})\) for each panel. The colorbar limits have been adjusted individually for each panel to ease comparison of atmospheric patterns across orders of magnitude. 
atmospheres since many sub-Neptunes orbit closely around cooler stars, such as M-dwarfs, and are synchronous rotators (Bean et al., 2021). Such sub-Neptunes are also advantageous targets for atmospheric characterization with observational facilities due to the favorable ratio between the planet and star radii (Triaud, 2021). The trends in qualitative behavior of these simulations are similar to those for the high-contrast (strong forcing) regime. The shortest radiative and rotational timescales result in high equator-to-pole and day-night contrasts and the highest winds, while the longest-timescale regime exhibits low geopotential contrast and low winds. The amplitude of the resulting geopotential profile appears to scale linearly with the day-night contrast of the local radiative equilibrium. While strong-forcing simulations exhibit a single equatorial hotspot, medium-forcing simulations tend to exhibit two regions of highest geopotential in the mid-latitudes.

Figure 7: Mean-zonal winds \(\overline{U}\) for the shortest timescales regime (radiative timescale \(\tau_{\rm rad}=0.1\) days, rotation period \(P_{\rm rot}=1\) day), averaged over the last 100 days of the simulation. Each panel in the grid corresponds to a unique combination of forcing strength \(\Delta\Phi_{\rm eq}/\overline{\Phi}\) and reference geopotential \(\overline{\Phi}\). Note that the axis limits have been adjusted individually for each column in order to highlight the features for the smaller wind speeds that arise in the weak-forcing regime.

Despite qualitative similarities, there are distinct differences that arise when comparing to the strong forcing regime. Compared to strongly forced regimes at the same scale height, the medium-forcing simulations tend to exhibit similar qualitative trends in response to changing radiative timescale and rotation period. However, there are several quantitative differences. The overall geopotential contrast tends to be smaller. The greatest geopotential contrast is \(\approx 6.4\times 10^{5}\) m\({}^{2}\)s\({}^{-2}\) (for \(\tau_{\rm rad}=0.1\) days, \(P_{\rm rot}=1\) day). Compared to the strong-forcing regime, the hotspot for the medium-forcing regime extends farther west at mid-latitudes (\(30^{\circ}\)-\(60^{\circ}\)). The lowest geopotential contrast is \(\approx 2\times 10^{2}\) m\({}^{2}\)s\({}^{-2}\) (for \(\tau_{\rm rad}=10\) days, \(P_{\rm rot}=5\) days). The qualitative wind speed trends for medium-forcing models follow those for strong forcing, but the speeds are significantly lower (with mean zonal wind speeds ranging from \(\approx 500\) ms\({}^{-1}\) down to \(\approx 50\) ms\({}^{-1}\)). Overall, the atmospheric dynamics of the medium forcing regime exhibit lower day-night contrasts, weaker equatorial wind speeds, and different locations of vortices, compared to the strong forcing regime. However, the transitions between day-to-night flow and jet-dominated flow occur at the same points in the \(\tau_{\rm rad}\)--\(P_{\rm rot}\) parameter space as for the strong forcing regime. As scale height decreases, for the shortest timescales (\(\tau_{\rm rad}=0.1\) days, \(P_{\rm rot}=1\) day), the hotspot is elongated in the east-west direction at mid-latitudes, and secondary eastward jets emerge at high latitudes in addition to the equatorial jet. The elongation is due to the decrease in the Rossby radius of deformation, which is proportional to \(\sqrt{\Phi}\).
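This scaling can be checked with a few lines of code. The sketch below is illustrative only: it assumes the midlatitude form \(\lambda_{R}=\sqrt{\overline{\Phi}}/f\) with \(f=2\Omega\), an illustrative choice of Coriolis parameter rather than the exact value used in our analysis.

```python
import numpy as np

DAY = 86400.0  # s

def rossby_radius(phi_bar, p_rot_days):
    """Rossby deformation radius lambda_R = sqrt(Phi)/f, with f = 2*Omega."""
    omega = 2.0 * np.pi / (p_rot_days * DAY)  # planetary rotation rate [1/s]
    return np.sqrt(phi_bar) / (2.0 * omega)   # [m]

# The deformation radius shrinks with scale height (sqrt(Phi)) at fixed rotation:
for phi_bar in (1e6, 2e6, 3e6, 4e6):          # reference geopotentials [m^2 s^-2]
    print(f"Phi = {phi_bar:.0e} -> lambda_R = {rossby_radius(phi_bar, 1.0):.2e} m")
```

For \(P_{\rm rot}=1\) day this evaluates to roughly \(6.9\times 10^{6}\) m at \(\overline{\Phi}=10^{6}\) m\({}^{2}\)s\({}^{-2}\) and \(1.4\times 10^{7}\) m at \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), consistent with the deformation-radius values quoted later in this section.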
Compared to the strong forcing regime at the same scale height (left column of Figure 6), this regime exhibits lower geopotential contrast (\(\approx 50\)% decrease) and lower wind speeds (ranging from \(\approx 20\) ms\({}^{-1}\) for the longest timescales to \(\approx 150\) ms\({}^{-1}\) for the shortest timescales), while the jet structure and the transitions from day-night to jet-dominated flow are preserved.

In the right column of Figure 6, we present the weak-forcing simulations for the short-timescales regime. For these simulations, the local radiative equilibrium amplitude is equal to 0.1 of the mean geopotential (\(\Delta\Phi_{\rm eq}/\overline{\Phi}=0.1\)), approaching the linear regime commonly assumed for Earth. Low-contrast regimes have been previously explored in the context of hot Jupiters (e.g. Perez-Becker & Showman, 2013; Shell & Held, 2004) due to their analytical tractability. Similarly to the medium-forcing regime, this regime models sub-Neptunes orbiting closely around cooler stars. For these weak-forcing simulations, the rotational effects become more pronounced, and the transitions between day-to-night flow and jet-dominated flow occur at shorter radiative timescales compared to the medium- and strong-forcing regimes. For large scale height (\(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\)), the maximal geopotential contrast is \(\approx 1.75\times 10^{5}\) m\({}^{2}\)s\({}^{-2}\), and mean zonal wind speeds are in the \(\approx 30-100\) ms\({}^{-1}\) range. When \(\tau_{\rm rad}=0.1\) days, we observe day-to-night flow, similar to the strong and medium forcing regimes, but instead of a single hotspot east of the substellar point we observe two dayside anticyclonic vortices west of the substellar point. The short-timescales regime (\(\tau_{\rm rad}=0.1\) days, \(P_{\rm rot}=1\) day) shown in the right column of Figure 6 exhibits behavior qualitatively similar to the regime where \(\tau_{\rm rad}=1\) day, \(P_{\rm rot}=1\) day for strong and medium forcing (e.g. middle column of Figure 4). This regime is a transition between day-to-night and jet-dominated flow. The similarity occurs because increasing \(\tau_{\rm rad}\) has the same effect on Equation 3 as decreasing \(\Delta\Phi_{\rm eq}/\overline{\Phi}\).

As we decrease the scale height in the weak-forcing regime, we observe similar qualitative behavior, with transitions occurring at the same points in the parameter space. For \(\overline{\Phi}=3\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), the highest geopotential contrast decreases to \(\approx 1.5\times 10^{5}\) m\({}^{2}\)s\({}^{-2}\), and the mean zonal wind speeds decrease slightly, to a \(\approx 25-100\) ms\({}^{-1}\) range. For \(\overline{\Phi}=2\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), the maximal geopotential contrast is \(\approx 1.1\times 10^{5}\) m\({}^{2}\)s\({}^{-2}\). The mean zonal winds decrease to \(\approx 10\) ms\({}^{-1}\) in the shortest-timescales regime (\(\tau_{\rm rad}=0.1\) days and \(P_{\rm rot}=1\) day) and \(\approx 80\) ms\({}^{-1}\) in the longest-timescales regime (\(\tau_{\rm rad}=10\) days and \(P_{\rm rot}=10\) days). Finally, at the lowest scale height in our ensemble, \(\overline{\Phi}=10^{6}\) m\({}^{2}\)s\({}^{-2}\), the maximal geopotential contrast for the weak forcing regime is \(\approx 7\times 10^{4}\) m\({}^{2}\)s\({}^{-2}\). The mean zonal winds are in the \(\approx 10-80\) ms\({}^{-1}\) range.
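The equivalence noted above between increasing \(\tau_{\rm rad}\) and decreasing \(\Delta\Phi_{\rm eq}/\overline{\Phi}\) can be seen from the characteristic forcing rate of the Newtonian relaxation. A minimal sketch follows, assuming the standard relaxation form \((\Phi_{\rm eq}-\Phi)/\tau_{\rm rad}\) for the forcing term (the exact form of our Equation 3 is not reproduced here, so this form is an assumption):

```python
def forcing_rate(dphi_eq, tau_rad_days):
    # characteristic forcing rate of Newtonian relaxation [m^2 s^-3]
    return dphi_eq / (tau_rad_days * 86400.0)

phi_bar = 4e6  # m^2 s^-2
# Strong forcing with tau_rad = 1 day vs. weak forcing with tau_rad = 0.1 days:
print(forcing_rate(1.0 * phi_bar, 1.0))   # ~46.3 m^2 s^-3
print(forcing_rate(0.1 * phi_bar, 0.1))   # ~46.3 m^2 s^-3 -- same forcing rate
```

Because the forcing rate scales as \(\Delta\Phi_{\rm eq}/\tau_{\rm rad}\), lengthening the radiative timescale mimics a smaller day-night equilibrium amplitude.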
Overall, compared to the strong and medium forcing simulations, the weak forcing regime exhibits lower geopotential contrast and lower wind speeds. The largest geopotential heights are in the mid-latitudes, not at the equator, which is distinct from the strong- and medium-forcing regimes. The transitions between day-night flow and jet-dominated flow occur at shorter radiative timescales compared to the strong- and medium-forcing regimes. Note also that for strong and medium forcing, the highest mean zonal wind speeds occur when both radiative timescale and rotation period are short, and the lowest wind speeds occur when both timescales are long. This pattern is reversed in the weak forcing regime. We discuss this transition further in Section 4.2. Weak forcing induces lower wind speeds, which allow non-advective properties such as rotational effects to dominate, especially at long radiative timescale \(\tau_{\rm rad}=10\) days. Here we have noted several patterns that take place in the atmospheres of these planets. In the following section, we discuss general trends and special cases found in our simulations.

## 4 Discussion

### Holistic discussion of trends in the regimes

In this section, we discuss the overall trends in the atmospheric patterns in our simulations, focusing on trends in geopotential height distributions. Since atmospheric heating will generally increase the geopotential \(\Phi\) and atmospheric cooling will generally reduce \(\Phi\), the qualitative changes in geopotential can be thought of as changes in temperature, with a higher thickness of the upper layer corresponding to higher temperatures. Using the ideal gas relationship \(\Phi\approx RT\), where \(R\) is the specific gas constant, we can approximate changes in temperature. For our range of mean molecular weights, the approximate range of values of \(R\) is 750--1800 J kg\({}^{-1}\)K\({}^{-1}\). We can estimate, for example, that a change of \(10^{5}\) m\({}^{2}\)s\({}^{-2}\) in the geopotential corresponds to an approximate temperature change in the \(50-130\) K range, depending on the assumed mean molecular weight.

The geopotential and horizontal winds are driven by the interacting processes of irradiation, rotation, and advection. The strength of stellar irradiation in our simulations can be analyzed through two parameters: the radiative timescale \(\tau_{\rm rad}\) and the day/night temperature contrast of the local radiative equilibrium. The radiative timescale is one of the principal determiners of the resulting atmospheric patterns. Simulations at the shortest radiative timescale tend to exhibit a hot spot on the dayside and cyclonic vortices on the nightside. The hot spot is shifted eastward from the substellar longitude, similar to the equatorial superrotation on hot Jupiters. High day-night heating contrasts result in stationary, planetary-scale Rossby and Kelvin waves, as shown in Showman & Polvani (2011). As the timescale \(\tau_{\rm rad}\) increases to 1 day, the hot spot curves westward in the mid-latitudes under the influence of the Coriolis force, breaking up into cyclonic vortices that equilibrate west of the substellar point. As the radiative timescale \(\tau_{\rm rad}\) increases further to 10 days, the behavior transitions from a day-to-night flow to a jet-dominated flow. Rossby gyres tend to be present for small and medium values of \(\tau_{\rm rad}\) but not for \(\tau_{\rm rad}=10\) days. Time variability in our simulations occurs when either the radiative timescale or the rotation period is long.
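A quick numerical check of the \(\Phi\approx RT\) estimate discussed above (a minimal sketch using the specific-gas-constant endpoints quoted in the text):

```python
# Ideal-gas estimate: a geopotential change d_phi maps to a temperature
# change dT ~ d_phi / R, for specific gas constant R.
d_phi = 1e5                    # m^2 s^-2, example geopotential change
for R in (750.0, 1800.0):      # J kg^-1 K^-1, range quoted for our ensemble
    print(f"R = {R:6.0f} J/(kg K) -> dT ~ {d_phi / R:5.1f} K")
# R = 750 -> ~133 K; R = 1800 -> ~56 K, i.e. the ~50-130 K range above.
```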
The other parameter controlling the strength of irradiation is the prescribed equilibrium day/night temperature contrast. The resulting day-night geopotential thickness contrast appears to scale linearly with the fractional day-night equilibrium difference \(\Delta\Phi_{\rm eq}/\overline{\Phi}\). This behavior makes sense as a response to linear Newtonian relaxation. The fractional day-night difference controls the amplitude of the dayside cosine bell of the prescribed local radiative equilibrium. Similarly, as we lower the scale height, a lower mean geopotential \(\overline{\Phi}\) corresponds to lower resulting day-night thickness contrasts. The equator-to-pole contrast decreases as the radiative timescale or the rotation period increase. In general, simulations with larger rotation periods tend to exhibit lower day/night geopotential contrast. Table 1 shows the difference between the average dayside and average nightside geopotential. When \(\tau_{\rm rad}=1\) day, for medium and weak forcing, the minimum day/night contrast \(\Delta\Phi\) occurs for \(P_{\rm rot}=5\) days rather than \(P_{\rm rot}=10\) days. This behavior occurs because the contributions from the hotspot or cyclonic vortices are split more evenly between the dayside and the nightside in these regimes. For shorter and longer \(\tau_{\rm rad}\), the hotspot is predominantly on the dayside. Nevertheless, \(\Delta\Phi\) provides a useful heuristic for order-of-magnitude comparisons of day/night contrasts in our ensemble.

The response in our simulations to changes in radiative forcing is consistent with the argument given in Showman et al. (2013). When radiative forcing is weak (e.g. in the weak forcing regime or due to longer \(\tau_{\rm rad}\)), the heating of air parcels as they cross from the dayside to the nightside will be too small to induce a significant day-night contrast, and the dominant driver of the flow will be the meridional (latitudinal) gradient in the zonal-mean radiative heating. As expected, we generally observe zonal symmetry with primary temperature variations occurring between the poles and the equator.

The horizontal wind speeds in our simulations are controlled by radiative forcing. In the strong and medium forcing regimes, the largest mean zonal wind speeds tend to occur when both the radiative timescale and the rotation period are short. In this case, the mean zonal winds form a global-scale eastward equatorial jet that moves with a speed of a few hundred ms\({}^{-1}\). For longer timescales \(\tau_{\rm rad}\) and \(P_{\rm rot}\), the mean zonal wind speeds are \(\overline{U}\approx 100\) ms\({}^{-1}\). In the weak forcing regime, the mean zonal wind speeds are generally lower and can drop to \(\overline{U}\approx 10\) ms\({}^{-1}\). In the weak forcing regime, the mean zonal winds tend to be lower when the planetary rotation period is short. We discuss this behavior in more detail in Section 4.2.

We now turn to the balance of advective and radiative forces using scale analysis. Here we compare the radiative timescale \(\tau_{\rm rad}\) and the advective timescale \(\tau_{\rm adv}=L/U\), where \(L\) is a characteristic horizontal length scale. We use \(L\sim a\) for a global-scale flow. For regimes where \(\tau_{\rm rad}=0.1\) days and \(\tau_{\rm rad}=1\) day, \(\tau_{\rm rad}<\tau_{\rm adv}\), and so the day-night differences tend to be higher.
However, when \(\tau_{\rm rad}=10\) days, we tend to observe \(\tau_{\rm rad}>\tau_{\rm adv}\), where the advective behavior dominates, exhibiting planetary-scale westward Rossby waves. These waves are especially dominant for regimes where \(P_{\rm rot}=\tau_{\rm rad}=10\) days. For the intermediate scale \(\tau_{\rm rad}=1\) day, the strong forcing regime tends to have \(\tau_{\rm rad}>\tau_{\rm adv}\), while the weak forcing regime tends to have \(\tau_{\rm rad}<\tau_{\rm adv}\), and the medium forcing regime is transitional. This behavior is summarized in Figure 8. In black, the figure shows estimated contours where \(\tau_{\rm rad}=\tau_{\rm adv}\), computed via interpolation of the values for each of the strong-forcing (solid line), medium-forcing (dashed line), and weak-forcing (dash-dotted line) regimes. The values were averaged over all reference values of \(\overline{\Phi}\). When the advective timescale is longer than the radiative timescale, the simulations exhibit high day-night contrasts. When the advective timescale is shorter than the radiative timescale, the simulations exhibit little longitudinal variation.

Figure 8: The estimated contours where \(\tau_{\rm adv}=\tau_{\rm rad}\) (black) and the estimated contours where \(Ro=1\) (gray) for different day/night contrasts of the local radiative equilibrium for the strong (solid line), medium (dashed line), and weak (dot-dashed line) forcing regimes. The blue crosses mark the simulated regimes. In order to compute the contours, the values for \(\tau_{\rm rad}\), \(\tau_{\rm adv}\), and \(Ro\) were averaged across the four scale heights in our ensemble. Example patterns are presented in the four corners of the parameter space. The color scale is normalized individually for each panel in order to highlight atmospheric patterns.

The importance of rotation varies across our simulations. The strong and medium forcing regimes tend to be driven primarily by insolation, while in the weak forcing regime, rotation has a more pronounced effect. We quantify the dominance of rotation using the Rossby number \(Ro=U/2\Omega L\), which gives the ratio of advective to Coriolis forces away from the equator in the horizontal momentum equation; \(U\) is the characteristic wind speed and \(L\) is the characteristic horizontal length scale. Rapidly rotating planets with \(P_{\rm rot}=1\) day and planets with \(\tau_{\rm rad}=10\) days exhibit \(Ro\ll 1\) regardless of irradiation. The highest Rossby numbers (\(\sim 2\)) are exhibited by slowly rotating planets with short radiative timescales. When the rotation period is 5 or 10 days and the radiative timescale \(\tau_{\rm rad}=1\) day, irradiation plays more of a role in influencing the Rossby number. Strongly forced planets tend to exhibit \(Ro\sim 1\), weakly forced planets exhibit \(Ro\ll 1\), and planets with medium forcing exhibit a transition between these two regimes. Figure 8 shows, in gray, the estimated contours where \(Ro=1\), computed via interpolation of the values for each of the strong-forcing (solid line), medium-forcing (dashed line), and weak-forcing (dash-dotted line) regimes. The values were averaged over all reference values of \(\overline{\Phi}\). As expected, the effects of rotation are more dominant in the atmospheres of rapidly rotating planets and planets with lower insolation, resulting in a chevron-shaped hotspot when combined with a short radiative timescale \(\tau_{\rm rad}\), and in jet-dominated flow when combined with long \(\tau_{\rm rad}\).
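Both diagnostics are straightforward to evaluate from a characteristic wind speed. In the sketch below, the planetary radius \(a\) is a placeholder sub-Neptune value, an assumption for illustration only:

```python
import numpy as np

DAY = 86400.0
a = 1.7e7  # m -- placeholder sub-Neptune radius, for illustration only

def tau_adv_days(U, L=a):
    """Advective timescale tau_adv = L/U, in days."""
    return (L / U) / DAY

def rossby_number(U, p_rot_days, L=a):
    """Rossby number Ro = U / (2 * Omega * L)."""
    omega = 2.0 * np.pi / (p_rot_days * DAY)
    return U / (2.0 * omega * L)

# A fast-jet regime vs. a slowly rotating, weak-wind regime:
print(tau_adv_days(600.0), rossby_number(600.0, p_rot_days=1.0))
print(tau_adv_days(20.0),  rossby_number(20.0,  p_rot_days=10.0))
```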
When the radiative timescale is short and the rotation period is long, the steady-state geopotential resembles the local radiative equilibrium. Overall, the strong and medium forcing regimes are similar to each other, while the weak forcing regime exhibits lower Rossby numbers. Changes in scale height do not affect the Rossby number significantly. Rossby numbers for a representative scale height corresponding to \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) are presented in Table 1.

Due to low insolation, the dynamics of the weak forcing regime tend to be qualitatively different from the dynamics of the strong and medium forcing regimes at the same timescales. Low stellar irradiation results in lower wind speeds and lower contrasts, as well as the formation of jet-dominated flow. Instead of eastward superrotation, the weak forcing regimes tend to exhibit westward shifts of maximal geopotential driven by Rossby waves. The interaction of low insolation and rotation effects results in lower longitudinal variation than in the strong- and medium-forcing regimes. The Rossby radius of deformation is greater than the planetary radius for \(P_{\rm rot}=5\) days and \(P_{\rm rot}=10\) days. This estimate predicts the formation of global-scale flows in the corresponding regimes. This prediction is confirmed by our simulations, which exhibit global-scale flows in both geopotential and horizontal wind patterns. For \(P_{\rm rot}=1\) day, the Rossby radius of deformation ranges from \(6.8\times 10^{6}\) m for \(\overline{\Phi}=10^{6}\) m\({}^{2}\)s\({}^{-2}\) to \(1.3\times 10^{7}\) m for \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\). This corresponds to narrower equatorial jets, which extend up to \(20^{\circ}-45^{\circ}\) latitude. Compared with our simulations, this estimate tends to hold for \(\tau_{\rm rad}=0.1\) days, \(P_{\rm rot}=1\) day, but as the radiative timescale increases, the equatorial jet becomes suppressed. We discuss this behavior in further detail in Section 4.2.

### Trends in the zonal wind flow

In our simulations, we have identified several important trends in the zonal wind flow, which we discuss here. In the strong-forcing, high-scale-height regime, the shape of the mean zonal winds depends strongly on the interaction of the rotation period and radiative timescale. For example, for high scale height and strong forcing, in the top row of Figure 5, the shortest-timescales regime (\(P_{\rm rot}=1\) day, \(\tau_{\rm rad}=0.1\) days) exhibits a strong equatorial jet. However, as the radiative timescale increases to \(\tau_{\rm rad}=1\) day and \(\tau_{\rm rad}=10\) days, the equatorial jet becomes suppressed and is replaced by two symmetrical jets with peaks at \(\approx 40^{\circ}\) latitude. As \(\tau_{\rm rad}\) increases, the forcing weakens. This in turn weakens the equatorial jet, which typically emerges as a response to longitudinal variation in radiative forcing (e.g. Showman & Polvani, 2011). In these conditions, the strong Coriolis effect results in the emergence of symmetric tropical waves, similar to what is observed in idealized models of the Earth troposphere (e.g. Kraucunas & Hartmann, 2007). The transition from a single equatorial jet to symmetric tropical jets appears only in the presence of a short rotation period. Note that in the middle row of Figure 5 (where the rotation period \(P_{\rm rot}=5\) days), the symmetric tropical jet pattern is weaker. In the bottom row (at rotation period \(P_{\rm rot}=10\) days), the equatorial jet weakens but does not disappear as \(\tau_{\rm rad}\) increases.
Our simulations suggest that in the sub-Neptune regime, the rotation period must be sufficiently short and the radiative timescale sufficiently long in order to see symmetric tropical jets. The emergence of symmetric tropical jets persists across scale height variations in the high-forcing and medium-forcing regimes. As scale height decreases, the overall mean zonal wind speed decreases, and the transitions in qualitative behavior occur at longer values of \(\tau_{\rm rad}\). For example, the simulation for strong forcing at low scale height exhibits an equatorial jet when \(\tau_{\rm rad}=1\) day, \(P_{\rm rot}=5\) days, while the strong forcing simulations at high scale height exhibit tropical jets at these timescales. At all the simulated scale heights, the medium-forcing regime exhibits qualitative behavior similar to that of the strong forcing at low scale height, except for \(\tau_{\rm rad}=10\) days and \(P_{\rm rot}=5\) or 10 days, where we observe a pattern which is neither a strong equatorial jet nor symmetric tropical jets. Here a single westward jet has a flat peak that extends across the equator, exhibiting an intermediate, transitional regime. As in the geopotential patterns, the wind patterns for the high- and medium-forcing regimes exhibit more similarities to each other than to the winds in the weak-forcing regime.

The shape of the jet pattern at the shortest timescales (\(\tau_{\rm rad}=0.1\) days, \(P_{\rm rot}=1\) day) changes in response to changes in forcing. In the strong-forcing regime, when the scale height is large (at \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\)), the simulation exhibits westward jets north and south of the eastward equatorial jet (shown in the top left panel of Figure 7). The peaks of the westward jets correspond to the minima of the mean zonal winds \(\overline{U}\), which occur at \(\approx 50^{\circ}\) latitude. However, as scale height decreases, the minima move toward the equator. This behavior is a response to the narrowing of the equatorial jet, since decreasing the scale height also decreases the Rossby radius of deformation (\(\lambda_{R}=\sqrt{gH}/f=\sqrt{\Phi}/f\)). For \(\overline{\Phi}=10^{6}\) m\({}^{2}\)s\({}^{-2}\), the minima occur at \(\approx 35^{\circ}\) latitude (bottom left panel of Figure 7). In this regime, secondary eastward jets emerge poleward of the westward jets. The eastward jets are similar in speed to the westward jets (\(\approx 35\) ms\({}^{-1}\)). In the medium-forcing regime (middle column of Figure 7), the secondary jets are present at both \(\overline{\Phi}=10^{6}\) m\({}^{2}\)s\({}^{-2}\) (\(\approx 35\) ms\({}^{-1}\)) and \(\overline{\Phi}=2\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) (with speeds of \(\approx 20\) ms\({}^{-1}\)) and are much slower than the eastward equatorial jet. In contrast, in the weak-forcing regime (right column of Figure 7), at the shortest timescales, the secondary jets are of the same order of magnitude as the eastward equatorial jet. The combination of a short rotation period and weak forcing produces a zonal wind pattern similar to that exhibited by models of the hot Jupiter WASP-43b in the presence of high drag (Kataria et al., 2015).

### Spin-up behavior for long timescales

When both \(\tau_{\rm rad}\) and \(P_{\rm rot}\) are long, the model exhibits long spin-up behavior driven by the westward Rossby wave. During spin-up, the hot spot can be advected around the planet several times before the oscillations subside.
This behavior is illustrated in Figure 9 for \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) in the strong forcing regime, but it occurs at long radiative timescales for a variety of scale heights in the strong and medium forcing regimes. The left column of Figure 9 captures the Rossby wave advecting the hotspot westward all the way around the planet with a period of \(\sim 450\) hours. However, this oscillation is transient, and eventually the hot spot settles west of the substellar point and no longer advects around the planet. This behavior highlights the need for longer simulations, especially at long timescales and more temperate forcings, where time variability is more likely.

Figure 9: Snapshots of spin-up behavior for the strong-forcing regime with \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) at rotation period \(P_{\rm rot}=10\) days and radiative timescale \(\tau_{\rm rad}=10\) days. The hotspot is advected fully around the planet during spin-up (left column shows cyclic behavior with a period of \(\approx 450\) hours). The right column shows the equilibrating behavior. Time variability diminishes.

### North-south hemisphere asymmetry and apparent pitchfork bifurcation

Several simulations in our ensemble exhibit north-south hemisphere asymmetry, which typically manifests as differences between Rossby gyres. In some regimes, the asymmetric Rossby gyres are stationary, while other regimes exhibit oscillations, or migrating gyres. Both of these phenomena tend to occur when the radiative timescale is short and the rotation period is long, and in the transitional regimes between the short-timescale day-to-night flow and the long-timescale jet-dominated flow. The Rossby gyres are stationary for \(\tau_{\rm rad}=0.1\) days and \(P_{\rm rot}=10\) days for medium forcing at \(\overline{\Phi}=3\times 10^{6}\) and \(4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\), and for strong forcing at \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\). During spin-up, these regimes first approach a symmetric pattern, but at \(\sim 2000\) simulated hours, the asymmetry emerges. The transition to the asymmetrical pattern begins near the convergence zone, directly east of the gyres, where the eastward equatorial jet between the gyres meets the westward day-to-night flow. In the convergence zone, a minimal perturbation to the eastward jet is sufficient to begin the transfer of momentum across the equator and destabilize the symmetry. This behavior is consistent with a supercritical pitchfork bifurcation, shown schematically in Figure 10 and sketched in code below. When the bifurcation occurs, the symmetric equilibrium (with symmetric nightside gyres) becomes unstable, and two stable asymmetric equilibria (one with a northern, one with a southern gyre) emerge. We have validated the stability of both asymmetric equilibria by reflecting the profile around the equator and using it as the initial condition for another simulation. The bifurcation point appears time-dependent. Jiang et al. (1995) describe a qualitatively similar asymmetrical double-gyre behavior in a shallow-water model for an Earth-like context, with wind stress as the bifurcation parameter. However, the planetary parameters are sufficiently different in our study that further mathematical investigation is needed to identify a bifurcation parameter and map the full phase space of these equilibria.
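The normal form of a supercritical pitchfork, \(\dot{x}=\mu x-x^{3}\), provides a minimal schematic stand-in for the behavior in Figure 10 (illustrative only; the actual bifurcation parameter in our simulations remains unidentified):

```python
import numpy as np

# Supercritical pitchfork normal form: dx/dt = mu*x - x**3.
# Schematically, x > 0 ~ a northern nightside gyre, x < 0 ~ a southern one.
def equilibria(mu):
    """Fixed points of mu*x - x**3 = 0."""
    if mu <= 0:
        return [0.0]                              # symmetric state, stable
    return [0.0, np.sqrt(mu), -np.sqrt(mu)]       # x = 0 now unstable; the two
                                                  # mirror-image states are stable

for mu in (-1.0, 0.5, 2.0):
    print(f"mu = {mu:4.1f}: equilibria = {equilibria(mu)}")
```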
Several regimes in our ensemble exhibit persistent oscillations with migrating gyres. These oscillations are present for the strong forcing regime when \(P_{\rm rot}=5\) or \(10\) days and \(\tau_{\rm rad}=1\) day at most sampled scale heights, and for the weak forcing regime when \(P_{\rm rot}=10\) days and \(\tau_{\rm rad}=0.1\) days at all but the smallest scale height. Through Lomb-Scargle analysis, we established that the period of these oscillations is \(\approx 50\) days for the strong forcing regime and \(\approx 30\) days for the weak forcing regime, equal to several multiples of the orbital period. Note that the ratio \(\Delta\Phi_{\rm eq}/\tau_{\rm rad}\) is the same for strong forcing when \(\tau_{\rm rad}=1\) day and for weak forcing when \(\tau_{\rm rad}=0.1\) days. While the emergence of these oscillations is distinct from the apparent pitchfork bifurcation, the fact that many of the resulting oscillations exhibit hemispheric asymmetry and that they tend to occur at similar combinations of radiative timescale and rotation period suggests that these phenomena are connected. The advection in these regimes appears to oscillate through the two stable asymmetric patterns shown in Figure 10.

Figure 10: Supercritical pitchfork bifurcation diagram illustrating a potential mechanism for the emergence of asymmetric gyres. Prior to the bifurcation, the symmetric equilibrium is stable. At the bifurcation point, the previously stable symmetrical equilibrium becomes unstable, and two new equilibrium branches appear, with one cyclonic gyre in either the north or south hemisphere.

This behavior is distinct from the typical behavior of 2D exoplanet simulations. Hot Jupiters tend to exhibit a Matsuno-Gill (Matsuno, 1966; Gill, 1980) type stationary-wave balance between Rossby and Kelvin waves (which can be seen, for example, in the left and center panels of Figure 1). Showman et al. (2009) find a coherent oscillation with a period of 43 days in their simulation of the hot Jupiter HD 189733b using SPARC/MITgcm, suggesting a global sloshing mode with a period similar to that of the strong-forcing 50-day oscillations in our study. Further, both variability and north-south asymmetry have been documented in both 2D and 3D GCM studies of exoplanets. We expand on connections to previous studies in Section 4.6. In both static and oscillatory asymmetrical regimes, the radiative timescale is short and the rotation period is long. Compared to the shorter rotation period of hot Jupiters, the Coriolis force is weaker in this region of our parameter space. The weaker rotational effects appear to allow easier transfer of momentum across the equator. Further investigation of the parameter space in conjunction with computational bifurcation theory tools could illuminate this phenomenon further but falls outside the scope of this work. The simulation snapshots and zonal wind flows presented here are representative of large, global-scale flow patterns.

### Equatorial and tropical band oscillation

For simulations with a short rotation period (\(P_{\rm rot}=1\) day) and long radiative timescale (\(\tau_{\rm rad}=10\) days), we observe a persistent oscillation in the strong, medium, and weak forcing regimes with the largest scale height, corresponding to \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\). Figure 11 shows snapshots of this phenomenon in the low-forcing regime: a hot equatorial band (top panel) alternates with two hot tropical bands (bottom panel). Note that Figure 11 shows snapshots corresponding to the average geopotential profile shown in Figure 4.
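The oscillation periods quoted throughout this section were obtained through Lomb-Scargle analysis. The sketch below illustrates such a period estimate on a synthetic stand-in series using SciPy; the exact diagnostic series and implementation we used are assumptions for illustration:

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic stand-in: a 55-hour oscillation sampled at uneven times.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2000.0, 500))            # time [hours]
y = np.sin(2 * np.pi * t / 55.0) + 0.1 * rng.standard_normal(t.size)

periods = np.linspace(10.0, 200.0, 2000)              # candidate periods [hours]
omega = 2 * np.pi / periods                           # lombscargle expects angular freqs
power = lombscargle(t, y - y.mean(), omega, normalize=True)
print(f"best-fit period ~ {periods[np.argmax(power)]:.1f} hours")  # ~55
```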
The period of these oscillations is \(\approx 40\) hours for strong forcing, \(\approx 46\) hours for medium forcing, and \(\approx 55\) hours for weak forcing. The oscillation has a structure similar to a gravity-wave-like, internal, zonally-symmetric Hough mode (Longuet-Higgins, 1968), with alternating periods of poleward advection of mass away from the equatorial band followed by equatorward advection of mass. The Coriolis force acting on this zonally symmetric meridional flow gives rise to oscillations in the zonal mean winds as well. However, the obtained period is much longer than the period of the linear mode (about 6.4 hours). The structure of the quasistationary Kelvin and Rossby waves changes substantially over the course of the oscillation in the zonal mean jet, leading us to speculate that two-way interactions between the waves and the mean flow are responsible for destabilizing the steady-state solution. Further work is needed to verify this speculation. The behavior of a single hot equatorial band separating into two hot tropical bands in a persistent oscillation is unique among our simulations and only exists at the largest scale height. When \(P_{\rm rot}=1\) day and \(\tau_{\rm rad}=10\) days, simulations in all forcing regimes at smaller scale heights show similar qualitative behavior during spin-up, but retain the hot equatorial band throughout and are eventually damped. A large scale height appears necessary for these oscillations to persist.

### Comparisons with Three-Dimensional Model Studies

In this study we explored sub-Neptunes, which bridge the population of hot Jupiters and terrestrial exoplanets. Some of the same phenomena that we see in our simulations have also been observed in 3D GCM simulations of sub-Neptunes, as well as of hotter gas giants and cooler terrestrial planets. In this section we discuss the connections between our study and existing literature regarding longitudinal variation, equator-to-pole contrast, variability, and the emergence of north-south asymmetry.

Many of the same phenomena and trends that evolve in our shallow-water simulations of sub-Neptune atmospheres have appeared in previous studies of sub-Neptunes with 3D GCMs. Many such studies explored the atmospheric physics of one of the earliest discovered warm sub-Neptunes, GJ 1214b (e.g. Menou, 2012; Kataria et al., 2012; Charnay et al., 2015; Drummond et al., 2018). Although the above models were tuned to a specific planet under specific insolation conditions, both Charnay et al. (2015) and Kataria et al. (2012) note that Neptunes with higher molecular weight atmospheres tend to exhibit narrower equatorial jets. Zhang & Showman (2017) confirm this trend in a parameter sweep investigating the effects of molecular weight and molar heat capacity. This behavior is consistent with our simulations, where high molecular weight corresponds to lower scale heights and lower values of the Rossby radius of deformation. Innes and Pierrehumbert (2022) also aim at mapping the parameter space and diagnosing the underlying dynamical mechanisms at play in sub-Neptune atmospheres, exploring the dry dynamics of temperate sub-Neptune planets with rotation periods of 6 and 33 days and insolation ranging from half to twice that of Earth. Consistent with our results, their simulations exhibit a higher equator-to-pole gradient and lower longitudinal variation for shorter rotation periods.
The long (33-day) rotation period simulations also exhibit a cold spot near (but offset from) the poles, which we observe in the weak forcing regime for \(\tau_{\rm rad}=1\) day. This correspondence suggests that this pattern is due to a balance between rotational effects and insolation.

Several atmospheric circulation patterns that we have identified in our ensemble appear in 3D GCM simulations of terrestrial planets. First, in our study, we observe the emergence of symmetric tropical jets when the radiative timescale is long and the rotation period is short, which is the closest to terrestrial forcing among our simulations. The emergence of symmetric tropical jets has been noted in Kraucunas and Hartmann (2007) using a shallow-water model of an idealized Earth-like planet, as well as in Sergeev et al. (2022) in the context of bistability of the TRAPPIST-1e system modeled with a 3D GCM. Second, in our study, we observe north-south asymmetry, which tends to emerge for a short radiative timescale and medium rotation period, as a transitional regime between the chevron shape characteristic of a shorter rotation period and the day-to-night flow typical of a longer rotation period. Noda et al. (2017) find a similar long-timescale north-south asymmetry as a transitional regime between an eastward equatorial jet pattern and a mid-latitude westerly jet pattern in the simulated dynamics of a synchronously rotating terrestrial aquaplanet. The north-south asymmetry regime in our study exhibits bistability consistent with pitchfork bifurcation behavior, in contrast to studies such as Edson et al. (2011), where the emergence of multiple equilibria is associated with hysteresis. Finally, migrating vortices similar to the behavior in our ensemble have likewise been documented in the context of 3D simulations of tidally synchronized exoplanets (e.g. Skinner & Cho, 2022; Cohen et al., 2022). In moist simulations of TRAPPIST-1e, Cohen et al. (2022) find migrating Rossby gyres with an oscillation period of \(\approx 20\) days, which approaches the 20- to 30-day period exhibited by our weak-forcing simulations. The migrating vortices we observe are cyclonic, and, consistent with Skinner and Cho (2022), they dissipate and re-form. While Skinner and Cho (2022) describe quasi-periodic behavior, the oscillations in our ensemble are fully periodic after spin-up.

Figure 11: Snapshots of oscillatory behavior for a regime corresponding to a weakly forced planet with a scale height corresponding to \(\overline{\Phi}=4\times 10^{6}\) m\({}^{2}\)s\({}^{-2}\) at rotation period \(P_{\rm rot}=1\) day and radiative timescale \(\tau_{\rm rad}=10\) days. Over the course of the oscillation, a hot equatorial band (top panel) alternates with two hot tropical bands (bottom panel) with a period of \(\approx 55\) hours.

## 5 Conclusions

In this study we have explored radiative forcing and rotational regimes relevant to sub-Neptune atmospheres that give rise to new patterns of interest in the global-scale flow of these atmospheres. In our model we have varied the radiative forcing as a function of the equilibrium geopotential contrast as well as the radiative timescale \(\tau_{\rm rad}\). We have additionally varied the planetary rotation period \(P_{\rm rot}\). We found that decreasing the planetary radius from a hot Jupiter to a sub-Neptune leads to lower geopotential contrast, even at equivalent radiative forcing. This phenomenon can partly be explained by the following mechanism.
Reduced planetary radius leads to shorter advective timescales: a parcel of gas moving at the same zonal speed will move around the planet and pass through the dayside more frequently on a smaller planet than on a larger one. Shorter advective timescales result in a more uniform geopotential.

We have identified the regions in the parameter space where qualitative transitions in global flow patterns take place. The strong and medium forcing regimes tend to exhibit qualitatively similar behavior: day-night circulation for short radiative timescales and jet-dominated flow with little longitudinal variation for long radiative timescales. In contrast, in the weak forcing regime, due to weaker insolation, the resulting patterns are more driven by the Coriolis force and are more likely to exhibit a westward shift of the hot spot as well as a jet-dominated structure. The transition from day-to-night flow to jet-dominated flow is tied predominantly to the radiative timescale \(\tau_{\rm rad}\), irrespective of rotation period \(P_{\rm rot}\). Short radiative timescales lead to a day-to-night flow, while long radiative timescales lead to jet-dominated flow. The medium value of \(\tau_{\rm rad}=1\) day is transitional. For short rotation periods, the intermediate value \(\tau_{\rm rad}=1\) day leads to jet-dominated flow, but as the rotation period increases, these regimes tend to exhibit more longitudinal variation. Planets with a short rotation period have higher equator-to-pole contrasts than planets with a long rotation period, primarily due to the lower Rossby radius of deformation. The day-night geopotential contrast tends to scale linearly with the amplitude of the prescribed local radiative equilibrium. The contrast is highest when both the radiative timescale and the rotation period are short, and is lowest when both are long.

We have further identified the qualitative trends in the mean zonal winds. The transition from a single equatorial jet to symmetric tropical jets occurs for a short rotation period as the radiative timescale increases. The emergence of secondary high-latitude jets in addition to the equatorial jet occurs in the shortest-timescales regime as scale height decreases. Mean zonal wind speeds range from \(\sim 600\) ms\({}^{-1}\) for the strong-forcing regime to \(\overline{U}<100\) ms\({}^{-1}\) for the weak-forcing regime. In the weak-forcing regime, when the rotation period is short, the mean zonal wind speeds can be as low as \(\sim 10\) ms\({}^{-1}\). Using estimates for the wind speeds from our model, we compute the Rossby number, which indicates that we should expect global-scale phenomena. This behavior is consistent with our simulations.

Temporal variability appears when either the radiative timescale \(\tau_{\rm rad}\) or the rotation period \(P_{\rm rot}\) is long. When both timescales are short, we see almost no variation. With an eye toward observations, \(\tau_{\rm rad}\) might be difficult to determine a priori. However, planets at shorter rotation periods are more likely to have short radiative timescales and thus to exhibit less temporal variability, and can therefore be more favorable candidates for observations over many epochs using current facilities.

The code employs the following software packages for scientific computing: SciPy (Virtanen et al., 2020), NumPy (Harris et al., 2020), and Matplotlib (Hunter, 2007).
This research was carried out (in part) at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration and funded through JPL's Strategic University Research Partnerships (SURP) program. Support for this work was provided in part by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS5-26555. Support for program HST-GO-16194 was provided through a grant from the STScI under NASA contract NAS5-26555.
2309.07592
StarGAN-VC++: Towards Emotion Preserving Voice Conversion Using Deep Embeddings
Voice conversion (VC) transforms an utterance to sound like another person without changing the linguistic content. A recently proposed generative adversarial network-based VC method, StarGANv2-VC is very successful in generating natural-sounding conversions. However, the method fails to preserve the emotion of the source speaker in the converted samples. Emotion preservation is necessary for natural human-computer interaction. In this paper, we show that StarGANv2-VC fails to disentangle the speaker and emotion representations, pertinent to preserve emotion. Specifically, there is an emotion leakage from the reference audio used to capture the speaker embeddings while training. To counter the problem, we propose novel emotion-aware losses and an unsupervised method which exploits emotion supervision through latent emotion representations. The objective and subjective evaluations prove the efficacy of the proposed strategy over diverse datasets, emotions, gender, etc.
Arnab Das, Suhita Ghosh, Tim Polzehl, Sebastian Stober
2023-09-14T10:52:32Z
http://arxiv.org/abs/2309.07592v1
# StarGAN-VC++: Towards Emotion Preserving Voice Conversion Using Deep Embeddings

###### Abstract

Voice conversion (VC) transforms an utterance to sound like another person without changing the linguistic content. A recently proposed generative adversarial network-based VC method, StarGANv2-VC, is very successful in generating natural-sounding conversions. However, the method fails to preserve the emotion of the source speaker in the converted samples. Emotion preservation is necessary for natural human-computer interaction. In this paper, we show that StarGANv2-VC fails to disentangle the speaker and emotion representations, pertinent to preserve emotion. Specifically, there is an emotion leakage from the reference audio used to capture the speaker embeddings while training. To counter the problem, we propose novel emotion-aware losses and an unsupervised method which exploits emotion supervision through latent emotion representations. The objective and subjective evaluations prove the efficacy of the proposed strategy over diverse datasets, emotions, gender, etc.

Arnab Das\({}^{1,3}\)†, Suhita Ghosh\({}^{1}\)†, Tim Polzehl\({}^{3}\), Ingo Siegert\({}^{2}\), Sebastian Stober\({}^{1}\)

\({}^{1}\)Artificial Intelligence Lab (AILab), Otto-von-Guericke-University, Magdeburg, Germany \({}^{2}\)Mobile Dialog Systems, Otto-von-Guericke-University, Magdeburg, Germany \({}^{3}\)Speech and Language Technology, German Research Center for Artificial Intelligence (DFKI)

{suhita.ghosh,ingo.siegert,stober}@ovgu.de {arnab.das,tim.polzehl}@dfki.de

Footnote †: These authors contributed equally to this work.

**Index Terms**: voice conversion, emotion preservation, StarGAN

## 1 Introduction

Voice conversion (VC) is a technique that transforms the speech of one speaker to make it sound like another speaker's voice, while keeping the linguistic content intact and ensuring that the quality, naturalness and comprehensibility of the converted speech remain high [1]. VC systems have numerous practical use cases in various domains [2]. For example, in speech therapy, a VC system could be used to record sessions for later analysis while ensuring that patient confidentiality is maintained, a mandatory requirement arising from the need to adhere to the guidelines of the General Data Protection Regulation (GDPR) [3]. VC systems could also be useful in the entertainment industry for tasks like voice dubbing. Furthermore, preserving the emotional state of the speaker in parallel to providing anonymized speech is necessary in all VC applications where the emotional information is needed for further processing, such as when analyzing speech for mental health issues or interacting with an affect-aware avatar. Therefore, VC methods should ensure that the original speaker's emotional state is not lost during the conversion process.

Many deep learning-based approaches have been proposed for voice conversion [4]. Early deep neural network (DNN)-based methods [5] largely concentrated on the concept of frame-wise spectral feature conversion. Soon after, long short-term memory (LSTM) based sequence-to-sequence models [6] were employed for the task and produced high-quality converted samples. Although the converted samples generated by sequence-to-sequence models are more natural, they suffer from mispronunciation and training instability [7]. Moreover, these methods need numerous parallel utterances to learn [8], the collection of which is a very expensive, time-consuming, and burdensome task in itself.
To alleviate the issue of obtaining expensive parallel training data, several DNN-based methods using non-parallel data were proposed, based on the variational autoencoder (VAE) [9, 10, 11] and the cycle-consistent generative adversarial network (CycleGAN) [12, 13, 14]. The VC methods utilizing VAE typically attempt to extract disentangled speaker and content embeddings from utterances through a reconstruction loss. The VAE-based approaches produce over-smoothed conversions, which leads to poor-quality, buzzy-sounding speech [7, 8]. The CycleGAN-based frameworks use a cycle consistency loss, which learns both forward and reverse conversion of samples between two speakers using non-parallel training utterances. A major problem with CycleGAN-based frameworks is that they require training one generator for each pair of speakers, which makes them impractical for many-to-many VC use cases. The CycleGAN-based frameworks are also criticized for producing low-quality converted samples [7]. A few recent text-to-speech (TTS) based voice conversion methods [15] extract the content through two modules, automatic speech recognition (ASR) and TTS. The linguistic content is first extracted from the source speech through an ASR. Further, a TTS is fed with the ASR-generated transcription and a target speaker's embedding to generate the converted utterance. The naturalness and intelligibility of the converted speech using such ASR- and TTS-based systems are typically high [16]. However, they fail to preserve the prosody or affective state of the source speaker, as they use only the linguistic content of the source utterance to generate the converted speech. Further, the performance of such VC systems is dependent on the quality of transcriptions produced by the ASR.

Recently, several StarGAN-based [17] non-parallel many-to-many VC frameworks [18, 19, 20] have been proposed. Among those, the StarGANv2-VC [20] framework is especially interesting, as it generates fundamental frequency (F0) consistent, natural-sounding and highly intelligible converted samples [21]. Moreover, the architecture design makes the framework very scalable and suitable for utterances of any length, and its fast conversion capability makes it suitable for real-time applications. However, the model fails to preserve the affective state of the source speaker when the source utterance has a large variation in the acoustic parameters. In this paper, we investigate the reason for StarGANv2-VC's failure to preserve the source speaker's emotion in converted speech. We also propose a novel method to circumvent the said problem, using an unsupervised emotion supervision technique through latent emotion representations. Further, we propose losses which prevent emotion leakage from the reference audio used to generate the speaker embeddings, which also leads to a better disentanglement of speaker and emotion representations. We evaluate the proposed method extensively with three datasets over various emotions, genders, and accent groups. The objective and subjective evaluation results show that the proposed method significantly improves the emotion preservation capability over vanilla StarGANv2-VC in all cases.

## 2 StarGANv2-VC and Emotion Leakage

### Architecture

The architecture diagram of StarGANv2-VC [20] is presented in Figure 1, along with the proposed emotion supervision components. The generator G, which comprises an encoder (EN) and a decoder, produces the converted utterance \(X_{trg}\).
The generator is fed with an utterance \(X_{src}\) belonging to the source speaker \(y_{src}\) and a speaker style-embedding \(h_{sty}\) belonging to the target speaker \(y_{trg}\), where \(y_{src},y_{trg}\in Y\). As described by the authors, the generator also consumes the source F0 embeddings \(h_{f0}\) produced by a pre-trained network, which enables the model to produce F0-consistent conversions [20]. The speaker-style-encoder (SE) generates the speaker-style-embedding \(h_{sty}\) from a randomly selected reference utterance \(X_{ref}\) of the target speaker \(y_{trg}\), given the target speaker-code. The embedding \(h_{sty}\) represents speaker-style information, such as accent. Hence, the converted sample \(X_{trg}\!=\!G(X_{src}\!,\!h_{f0}\!,\!h_{sty})\) contains the speaker characteristics of the target speaker, but the intonation and linguistic content of the source utterance. To encourage G to produce unique samples, a separate mapping network (M) is also trained along with SE, as done in [20]. During training, speaker-style embeddings are generated using SE and M alternately in each optimization step. The module M produces the target speaker-embedding from a random latent vector sampled from a standard Gaussian distribution and the target speaker-code. The discriminator module consists of two adversarial classifiers: a quality classifier (C) and a speaker classifier (C\({}_{\text{sp}}\)). The C classifier classifies the real and fake samples conditioned on the speaker-code. The classifier C\({}_{\text{sp}}\) classifies the source speaker of the converted sample during the discriminator training phase and classifies the target speaker during the generator training phase, as proposed in [20]. This further encourages G to suppress the source speaker's traits in converted samples.

Figure 1: StarGANv2-VC architecture for VC. The dashed parts belong to the proposed emotion supervision method.

### Emotion Leakage by Speaker Embedding

The speaker-style-encoder (SE) extracts the target speaker's style-embedding from the reference utterance, shown as \(X_{ref}\) in Figure 1. For the emotion-leakage introspection, we train a vanilla StarGANv2-VC with English utterances from the Emotional Speech Database (ESD) corpus [22], which has emotional utterances. After the training, we extract the speaker embeddings. For illustration, we choose two different speakers, 0012 (male) and 0016 (female), from ESD. The model is trained using utterances from these speakers. The speaker embeddings are projected onto a 2D space using the tSNE transformation, and the results are presented in Figure 2.

Figure 2: 2D tSNE plot for speaker embeddings generated by the style encoder of the vanilla StarGANv2-VC.

The speaker embeddings for utterances from a single speaker should conform to a compact region in space, regardless of the emotion of the utterances. On the contrary, Figure 2 reveals that the embeddings form an unnecessary grouping based on emotion. This shows that the SE fails to disentangle emotional cues from the reference utterances when it generates the target speaker embedding from those utterances. Consequently, this unintended emotional information leaks to the decoder along with the target speaker's style representation. This confuses the decoder while generating the utterance in the target speaker's voice. This leakage occurs for two reasons: (i) the absence of a training objective that encourages the SE to perform the disentanglement between speaker-dependent and speaker-independent features; and (ii) the speaker-style reconstruction loss \(\mathcal{L}_{style}\) [20] used in the vanilla StarGANv2-VC, shown in Eqn. 1.
\[\mathcal{L}_{style}\!=\!\big{[}|SE(X_{ref}\!,\!y_{trg})\!-\!SE(X_{trg}\!,\!y_{trg})|\big{]} \tag{1}\]

The speaker-style reconstruction loss ensures that the style embeddings can be regenerated from the generated samples. This loss intends to bring the speaking style of the converted sample closer to the reference sample, as both of them belong to the same (target) speaker. This further exacerbates the domain leakage problem, as the target speaker-embedding is not disentangled from the emotional cues of the reference utterance. Therefore, by minimizing this loss, the converted speech tends to be closer to the reference also in terms of emotion, which subdues the emotion of the source speech. This cripples the emotion preservation capability of the vanilla StarGANv2-VC.

## 3 Methods

A large amount of good-quality emotion labels is difficult to obtain. Further, in non-parallel voice conversion, the emotion label of the converted samples is not available. Therefore, to circumvent the emotion leakage problem, we propose an unsupervised emotion supervision technique using emotion representations. The emotion representations are deep emotion-embeddings, which contain the information about the affective state of the utterance. The proposed method encourages the generator to preserve the affective state of the source in the converted samples.

### Deep Emotion Embedding and Emotion Supervision

One way of providing emotion supervision is to extract latent emotion representations from the source and the converted samples, and then minimize the distance between them. To this end, an emotion-embedding extraction network is needed to produce the latent emotion representations directly from the utterances. A two-stage training approach is proposed to create this emotion-embedding extraction network. At Stage-I, an emotion conversion framework is trained, where the emotion is converted instead of speaker-traits. We used the vanilla StarGANv2-VC framework for the emotion conversion task, as shown in Figure 3. In this case, the style encoder captures representations of emotion instead of speaker.

Figure 3: StarGANv2-VC used in emotion conversion. For example, a happy utterance from speaker 1 is converted to a sad utterance. The emotion-style encoder generates a sad emotion embedding from a sad reference utterance of speaker 2.

The working principle of the style encoder in the emotion conversion task is depicted in Figure 4. The shallow shared convolution layers extract a 512 dimensional shared latent representation from the reference mel-spectrogram. Consequently, the fully connected (FC) layers project this shared embedding to emotion-specific 64 dimensional emotion-style embeddings, \(h_{emo1},...,h_{emoN}\), where \(N\) is the number of emotion classes. Finally, an emotion style-embedding \(h_{emo}\) corresponding to the utterance needs to be selected based on the emotion-code \(e_{trg}\). The emotion-code denotes the emotion class of the utterance. Therefore, when the emotion ground truth is not available, the selection of the emotion-code is not possible. This makes the current technique unusable for emotion extraction for VC, as the converted samples do not have emotion ground truth. Therefore, we need a mechanism which does not depend on the availability of emotion ground truth, which is achieved at Stage-II.

Figure 4: Architecture and the working mechanism of the style encoder for emotion conversion.
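A minimal PyTorch-style sketch of this Stage-I style-encoder structure (the head layout beyond the stated 512-dimensional shared representation and 64-dimensional emotion embeddings, and the shared convolution trunk passed in, are assumptions for illustration):

```python
import torch
import torch.nn as nn

class EmotionStyleEncoder(nn.Module):
    """Shared conv trunk + one 64-d projection head per emotion class."""
    def __init__(self, shared_conv: nn.Module, n_emotions: int = 5):
        super().__init__()
        self.shared_conv = shared_conv                 # maps mel -> 512-d vector
        self.heads = nn.ModuleList(
            [nn.Linear(512, 64) for _ in range(n_emotions)]
        )

    def forward(self, mel: torch.Tensor, e_trg: torch.Tensor) -> torch.Tensor:
        h_shared = self.shared_conv(mel)               # (B, 512)
        all_heads = torch.stack(                       # (B, N, 64)
            [head(h_shared) for head in self.heads], dim=1
        )
        # select the embedding of the target emotion class for each sample
        idx = e_trg.view(-1, 1, 1).expand(-1, 1, 64)   # (B, 1, 64), long dtype
        return all_heads.gather(1, idx).squeeze(1)     # (B, 64) = h_emo
```

The explicit dependence on the emotion-code \(e_{trg}\) in the forward pass is exactly what Stage-II removes.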
At Stage-II, the linear projecting FC layers are removed from the pre-trained SE and a classification head consisting of three FC layers is placed on top of the shared convolution layers, as shown in Figure 5. The classifier is then trained for a supervised emotion classification task. During training, the weights of the shallow pre-trained convolution layers remain fixed and only the weights of the classification head are optimized. After Stage-II training, the 64-dimensional output features of the second FC layer of the classification head can be used as the latent emotion representation. Now, this network serves as a differentiable black-box emotion-embedding extractor module \(C_{emo}\) for the VC task, as shown in Figure 1. The module can be used to extract deep emotion representations from any utterance, irrespective of the presence of emotion ground truth. A loss is computed by comparing the emotion representations of the source and the converted sample, as shown in Eqn. 2.

Figure 5: Block diagram of the emotion classifier. \(X\) is any input mel-spectrogram. The trainable emotion classification head is added on top of the pre-trained shared convolution layers. The classification head predicts the emotion class \(\hat{e}\).

\[\mathcal{L}_{demo}=\mathbb{E}_{X_{src},X_{trg}}\big[|C_{emo}(X_{src})-C_{emo}(X_{trg})|\big] \tag{2}\]

We minimize this additional \(\mathcal{L}_{demo}\) loss while training the StarGANv2-VC for the VC task, which encourages emotion preservation and suppresses any emotion leakage from the reference utterance.

### Style Reconstruction Loss

To encourage the disentanglement of the target speaker's style-embedding from any unintended emotion-indicating information of the reference utterance, we augment the style reconstruction loss \(\mathcal{L}_{style}\) as shown in Eqn. 3. Here, \(X_{ref2}\) is another randomly selected reference utterance of the same target speaker. The loss helps the model bring the target speaker style embeddings extracted from \(X_{ref}\) and \(X_{ref2}\) closer.

\[\mathcal{L}_{style}=\mathbb{E}\big[|SE(X_{ref},y_{trg})-SE(X_{trg},y_{trg})|\big]+\mathbb{E}\big[|SE(X_{ref2},y_{trg})-SE(X_{trg},y_{trg})|\big]+\mathbb{E}\big[|SE(X_{ref},y_{trg})-SE(X_{ref2},y_{trg})|\big] \tag{3}\]

### Conversion Invariant Feature Preservation Loss

In vanilla StarGANv2-VC, all the losses are applied directly to the generator output, and gradients are back-propagated all the way through the decoder and encoder. Consequently, the shallow convolution layers of the encoder get very little supervision about the conversion-invariant features. These conversion-invariant features, such as content- and emotion-related features, need to be preserved as much as possible in the latent output of the encoder. To this end, we propose a new loss \(\mathcal{L}_{inv}\) as shown in Eqn. 4, applied directly to the encoder EN. The loss minimizes the distance between the latent codes generated from the source and the converted sample, forcing the encoder to preserve more content- and emotion-related features in its latent output. This loss further helps in the disentanglement of content features from speaker-specific features.

\[\mathcal{L}_{inv}=\mathbb{E}_{X_{src},X_{trg}}\big[|EN(X_{src})-EN(X_{trg})|\big] \tag{4}\]
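To make the three ingredients concrete, here is a minimal PyTorch-style sketch, where `c_emo`, `se` and `en` are hypothetical callables standing for the frozen extractor \(C_{emo}\), the style encoder SE and the encoder EN; this is an illustration of the losses, not the authors' implementation.

```python
import torch

def l_demo(c_emo, x_src, x_conv):
    # Eqn. 2: L1 distance between the deep emotion embeddings of the
    # source and the converted sample; gradients flow through x_conv.
    return torch.mean(torch.abs(c_emo(x_src) - c_emo(x_conv)))

def l_style_aug(se, x_ref, x_ref2, x_conv, y_trg):
    # Eqn. 3: pull the style of the converted sample towards both
    # references of the target speaker, and the two references together.
    s_ref, s_ref2 = se(x_ref, y_trg), se(x_ref2, y_trg)
    s_conv = se(x_conv, y_trg)
    return (torch.mean(torch.abs(s_ref - s_conv))
            + torch.mean(torch.abs(s_ref2 - s_conv))
            + torch.mean(torch.abs(s_ref - s_ref2)))

def l_inv(en, x_src, x_conv):
    # Eqn. 4: keep the conversion-invariant encoder latents close.
    return torch.mean(torch.abs(en(x_src) - en(x_conv)))
```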
### Overall Training Objective

The overall training objective for the generator is presented in Eqn. 5, where \(\mathcal{L}_{demo}\) and \(\mathcal{L}_{inv}\) are the proposed losses, and the rest are taken from [20]. \(\mathcal{L}_{adv}\) is the typical adversarial loss. \(\mathcal{L}_{spk}\) is the adversarial speaker classifier loss. \(\mathcal{L}_{div}\) encourages the generator to produce diversified samples when given different style-embeddings. \(\mathcal{L}_{asr}\) is the linguistic content preservation loss. \(\mathcal{L}_{norm}\) is the norm consistency loss, which ensures the preservation of voiced/unvoiced intervals. \(\mathcal{L}_{cycle}\) is the cycle consistency loss, which encourages the generator to learn a bijective mapping between source and target speakers, and \(\mathcal{L}_{F0}\) is the F0 consistency loss, which helps in generating F0-consistent samples.

\[\min_{G,SE,M}\ \mathcal{L}_{adv}+\lambda_{spk}\mathcal{L}_{spk}+\lambda_{style}\mathcal{L}_{style}-\lambda_{div}\mathcal{L}_{div}+\lambda_{asr}\mathcal{L}_{asr}+\lambda_{norm}\mathcal{L}_{norm}+\lambda_{cycle}\mathcal{L}_{cycle}+\lambda_{F0}\mathcal{L}_{F0}+\lambda_{demo}\mathcal{L}_{demo}+\lambda_{inv}\mathcal{L}_{inv} \tag{5}\]

The training objective for the adversarial classifiers is presented in Eqn. 6. The adversarial quality classifier maximizes the adversarial loss \(\mathcal{L}_{adv}\). The speaker classifier minimizes \(\mathcal{L}_{aspk}\) through the classification of the source speaker. Each loss is weighted with a corresponding \(\lambda\) hyperparameter.

\[\min_{C,C_{sp}}\ -\mathcal{L}_{adv}+\lambda_{aspk}\mathcal{L}_{aspk} \tag{6}\]

## 4 Datasets, Experiments and Results

### Training Details

We term our approach StarGAN-VC++ and the vanilla StarGANv2-VC framework Baseline. We train our models with a random split of 0.8/0.1/0.1 for train/validation/test. All the utterances are re-sampled to \(24\,\mathrm{kHz}\). Each model is trained for 60 epochs on log mel-spectrograms with a batch size of 16 using a Tesla V100 (32 GB) GPU, with a training time of around 26 hours. To train our model we set \(\lambda_{spk}=0.5\), \(\lambda_{aspk}=0.1\), \(\lambda_{style}=1\), \(\lambda_{div}=1\), \(\lambda_{asr}=10\), \(\lambda_{norm}=1\), \(\lambda_{cycle}=5\), \(\lambda_{F0}=5\), \(\lambda_{demo}=2\), and \(\lambda_{inv}=5\). AdamW [23] is used with a learning rate of \(10^{-4}\). A HiFiGAN [24] vocoder is trained with the datasets mentioned in Table 1. The vocoder generates a 1-minute-long waveform from the converted mel-spectrogram in 0.1 seconds. For the assessment of emotion preservation, we train a support vector machine (SVM) based emotion classifier, as done in [25].
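As a compact illustration of how Eqn. 5 and Eqn. 6 combine the individual terms with the weights above, the following hypothetical sketch assumes the loss values have already been computed as scalar tensors; the function names are ours.

```python
# Loss weights as stated in the training details above.
lambdas = dict(spk=0.5, aspk=0.1, style=1.0, div=1.0, asr=10.0,
               norm=1.0, cycle=5.0, f0=5.0, demo=2.0, inv=5.0)

def generator_objective(losses, lam=lambdas):
    """Eqn. 5: weighted sum of the generator losses; the diversity
    term enters with a negative sign (it is maximized)."""
    total = losses["adv"] - lam["div"] * losses["div"]
    for name in ("spk", "style", "asr", "norm", "cycle", "f0", "demo", "inv"):
        total = total + lam[name] * losses[name]
    return total

def classifier_objective(l_adv, l_aspk, lam=lambdas):
    """Eqn. 6: the classifiers maximize L_adv and minimize L_aspk."""
    return -l_adv + lam["aspk"] * l_aspk
```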
### Datasets

Four datasets are used in this work, each for a different purpose, as shown in Table 1. We consider 5 emotion classes \(e\in\{\text{happy},\,\text{sad},\,\text{anger},\,\text{neutral},\,\text{surprise}\}\) for the emotion preservation evaluation.

**Emotional Speech Database (ESD)** [22]: The corpus contains English and Chinese emotional utterances belonging to the 5 emotion classes in \(e\). We consider only English utterances in our work, where 350 utterances per emotion class are spoken by 5 male and 5 female native English speakers.

**Voice Cloning Toolkit (VCTK)** [26]: The dataset contains English utterances from 109 speakers having various accents. For training of the VC models, we consider utterances with an English accent from 5 randomly selected males and females. The evaluation is performed on utterances with English, American and Canadian accents.

**Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS)** [27]: This corpus comprises 2880 utterances spoken by 24 professional actors from North America. The dataset has ground truth for 7 emotion classes, but we consider the ones mentioned in \(e\).

**Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D)** [28]: This is another corpus of emotional utterances covering the 5 emotion classes in \(e\). The dataset contains 7442 clips from 91 actors belonging to diverse age groups and ethnicities.

### Evaluation Setup

To evaluate the efficacy of the proposed methods, we perform both objective and subjective evaluations. For all assessments, we choose an equal number of male and female source and target speakers from each dataset.

**Objective Evaluation**: We choose test utterances from 10 source speakers and 6 target speakers from the ESD dataset, leading to 2100 ESD\(\rightarrow\)ESD conversions. For VCTK\(\rightarrow\)VCTK, we form 2000 conversions by selecting test utterances from 10 source speakers and 6 target speakers. To evaluate emotion preservation, we compare the models using 3 metrics: i) \(ACC_{pl}\): accuracy between the source emotion ground truth and the SVM-predicted emotion label of the converted utterances; ii) \(ACC_{sum}\): accuracy between the predicted labels of the source and the converted samples, where both are predicted by the SVM. This metric is especially useful when no emotion ground truth is available, as in the VCTK dataset; iii) \(MAE_{embed}\): mean absolute error (MAE) between the emotion-embeddings of the source and converted utterances, where the emotion embedding is extracted using the automatic embedding extractor. We report the pitch correlation coefficient (\(PCC\)) [29] as a measure of intonation preservation, the predicted mean opinion score (\(pMOS\)) [30] as a measure of the quality of converted samples, and the character error rate (\(CER\)) as a content preservation measure. As a measure of speaker anonymization, we report the speaker similarity score (SSS) between the source and the converted speech. The score is generated using a speaker verification toolkit [31] from the Hugging-Face repository1.

Footnote 1: [https://github.com/hugging-Face/](https://github.com/hugging-Face/)

**Subjective Evaluation**: We randomly choose 100 conversions for the subjective evaluation, as it is expensive and time-consuming to assess all of them. A user study was conducted on the Crowdee platform, where 200 native English speakers participated. Each subject performs two types of assessments: i) verify emotion leakage from the reference: a triplet of converted, source and reference utterances was provided, and the subjects were asked to select, between the source and reference utterances, the one whose emotion is similar to that of the converted utterance (ignoring content or voice similarity); ii) voice quality: assess the naturalness on a 5-point scale (1: bad to 5: excellent). The users were not told whether the converted utterance was produced by Baseline or by the proposed model. Each of the tasks was rated by at least 5 subjects. The subjects were provided with an anchor question and also hidden trapping questions for quality check. The subjects failing the trapping questions were not considered for the analysis.
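The three emotion-preservation metrics reduce to simple comparisons between label sequences and embedding arrays; a minimal NumPy sketch with hypothetical inputs follows (the function names are ours).

```python
import numpy as np

def acc_pl(y_true_src, y_pred_conv):
    """ACC_pl: agreement between the source ground-truth emotion labels
    and the SVM predictions on the converted utterances."""
    return np.mean(np.asarray(y_true_src) == np.asarray(y_pred_conv))

def acc_sum(y_pred_src, y_pred_conv):
    """ACC_sum: agreement between the SVM predictions on the source and
    converted utterances; usable without ground truth (e.g. VCTK)."""
    return np.mean(np.asarray(y_pred_src) == np.asarray(y_pred_conv))

def mae_embed(e_src, e_conv):
    """MAE_embed: mean absolute error between deep emotion embeddings."""
    return np.mean(np.abs(np.asarray(e_src) - np.asarray(e_conv)))
```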
\begin{table} \begin{tabular}{l c c c c} \hline \hline **Dataset** & **VC Training** & **VC Evaluation** & **Vocoder Training** & **Emotion** \\ \hline ESD & Yes & Yes & Yes & Yes \\ VCTK & Yes & Yes & Yes & No \\ RAVDESS & No & No & Yes & Yes \\ CREMA-D & No & No & Yes & No \\ \hline \hline \end{tabular} \end{table}

Table 1: _Usage of the datasets for different purposes. The column Emotion denotes whether the dataset is used to train the emotion conversion network and the emotion classifier for evaluation._

### Results and Discussion

The results of the objective evaluations are summarized in Table 2.

Table 2: Objective evaluation results for Baseline and StarGAN-VC++ across conversion types, emotions, genders and accents (ACC\({}_{pl}\), ACC\({}_{sum}\), MAE\({}_{embed}\), PCC, pMOS, CER and SSS).

For all conversions involving the ESD and VCTK corpora, the proposed StarGAN-VC++ outperforms Baseline with respect to all emotion-preservation metrics. In terms of ACC\({}_{\text{sum}}\), Baseline only manages to achieve 25.9% emotion preservation accuracy, whereas StarGAN-VC++ achieves 52.7%. As per MAE\({}_{\text{embed}}\), StarGAN-VC++ achieves a significantly lower mean score of 45.7 compared to Baseline's score of 58.2. Also for pitch correlation, StarGAN-VC++ achieves a higher value of PCC (78.3) compared to Baseline (77.3). These results are statistically significant, as a paired t-test achieves \(p<0.0001\) on the PCC and MAE\({}_{\text{embed}}\) metrics. For naturalness, both methods show similar pMOS values, which indicates that the proposed emotion-preservation techniques do not degrade the naturalness of the conversions. As far as content preservation is concerned, StarGAN-VC++ shows a statistically significantly (\(p<0.05\)) lower CER value (3.2%) than Baseline (3.5%). This improvement might be attributed to the proposed \(\mathcal{L}_{inv}\) loss. The results also reveal that the speaker anonymization capability is not hampered by the introduction of the emotion-preserving losses, as the mean SSS values for Baseline and StarGAN-VC++ are 24.4 and 23.9 respectively. This result is also statistically significant, as \(p<0.001\) in a paired t-test. We also evaluate the models emotion-wise. For all 5 emotions in \(e\), StarGAN-VC++ outperforms Baseline with respect to emotion preservation. Interestingly, StarGAN-VC++ shows significantly higher mean PCC values for all of these emotions compared to Baseline. This reveals that for emotional utterances, the variations of pitch are also better preserved by StarGAN-VC++, where pitch variations are indicative of emotional cues in speech [32]. The results also show that Baseline deals with _neutral_ utterances better than other emotions, as it attains a very high score of 79.7% for ACC\({}_{\text{sum}}\), whereas for the other emotions ACC\({}_{\text{sum}}\) lies in the range 1.5%-19.1%. Furthermore, it appears that _surprise_ is the most difficult emotion to preserve, as both the Baseline and StarGAN-VC++ models attain a 0% score for ACC\({}_{\text{pl}}\). However, in terms of ACC\({}_{\text{sum}}\), our StarGAN-VC++ model gets 62.9% accuracy against 1.5% by Baseline. As per [33], _surprise_ is the most difficult emotion for emotion recognition tasks as well. With respect to the gender-wise evaluation for ESD\(\rightarrow\)ESD, StarGAN-VC++ shows a similar trend of better emotion preservation as per the metrics. StarGAN-VC++ shows significant improvement in PCC values over Baseline and also achieves a lower CER. The StarGAN-VC++ model achieves the highest ACC\({}_{\text{sum}}\) value of 75.1% for F\(\rightarrow\)F conversions against 23.3% by Baseline.
However, in terms of ACC\({}_{\text{pl}}\), StarGAN-VC++ achieves the highest score of 54.1% for F\(\rightarrow\)M conversions, against a score of 23.0% by Baseline. For VCTK\(\rightarrow\)VCTK conversions, the improvements in PCC and MAE\({}_{\text{embed}}\) are not that significant. This is because VCTK does not contain emotional utterances with high pitch and intonation variations, and the change in the F0 contour is relatively small compared to ESD utterances. With respect to the accent-wise evaluation, we assess the English\(\rightarrow\)English, American\(\rightarrow\)English, and Canadian\(\rightarrow\)English sub-groups. The models were not trained with utterances from American and Canadian speakers from the VCTK dataset, which forms the unseen\(\rightarrow\)seen speaker evaluation scenario as well. The StarGAN-VC++ model outperforms Baseline in all three accent scenarios with respect to emotion preservation, scoring significantly higher ACC\({}_{\text{sum}}\) values and lower MAE\({}_{\text{embed}}\) values. Both the Baseline and StarGAN-VC++ models achieve their highest ACC\({}_{\text{sum}}\) values for American\(\rightarrow\)English conversions, 34.3% and 44.2% respectively. StarGAN-VC++ does not manage to achieve higher PCC values for these accent-based conversion sub-groups. We also perform an ablation study on the proposed losses, and the results are presented in Table 3. The ablation study reveals that both of the proposed losses, the deep-embedding-based loss and the augmented style reconstruction loss, have a significant impact on emotion preservation. In the absence of the \(\mathcal{L}_{demo}\) loss, the ACC\({}_{\text{sum}}\) value comes down to 36.4% from 52.7% for StarGAN-VC++. Similarly, when the vanilla \(\mathcal{L}_{style}\) is used instead of the proposed augmented version, the ACC\({}_{\text{sum}}\) value comes down to 34.3%. The drop is larger than that for the \(\mathcal{L}_{demo}\) loss, which implies that the augmented style reconstruction loss results in a better disentanglement of the target speaker's embeddings, and that facilitates emotion preservation. The use of the augmented style reconstruction loss also improves the pitch correlation, as it achieves a mean PCC value of 79.8. In terms of quality, the pMOS value remains unchanged in all cases. The results for the subjective evaluation are presented in Table 4. In terms of MOS, our proposed method does not degrade the naturalness of the converted samples.
2309.06854
Nonlinear network identifiability: The static case
We analyze the problem of network identifiability with nonlinear functions associated with the edges. We consider a static model for the output of each node and by assuming a perfect identification of the function associated with the measurement of a node, we provide conditions for the identifiability of the edges in a specific class of functions. First, we analyze the identifiability conditions in the class of all nonlinear functions and show that even for a path graph, it is necessary to measure all the nodes except for the source. Then, we consider analytic functions satisfying $f(0)=0$ and we provide conditions for the identifiability of paths and trees. Finally, by restricting the problem to a smaller class of functions where none of the functions is linear, we derive conditions for the identifiability of directed acyclic graphs. Some examples are presented to illustrate the results.
Renato Vizuete, Julien M. Hendrickx
2023-09-13T10:01:25Z
http://arxiv.org/abs/2309.06854v1
# Nonlinear network identifiability: The static case

###### Abstract

We analyze the problem of network identifiability with nonlinear functions associated with the edges. We consider a static model for the output of each node and, by assuming a perfect identification of the function associated with the measurement of a node, we provide conditions for the identifiability of the edges in a specific class of functions. First, we analyze the identifiability conditions in the class of all nonlinear functions and show that even for a path graph, it is necessary to measure all the nodes except for the source. Then, we consider analytic functions satisfying \(f(0)=0\) and we provide conditions for the identifiability of paths and trees. Finally, by restricting the problem to a smaller class of functions where none of the functions is linear, we derive conditions for the identifiability of directed acyclic graphs. Some examples are presented to illustrate the results.

## I Introduction

Networked systems composed of nodes or subsystems interacting with each other are ubiquitous [1]. In several of these systems, the knowledge of the dynamics associated with the edges is essential for the analysis of the system and the design of control algorithms. However, the identification of networked systems from partial measurements, without disconnecting parts of the network, can be really challenging since a measured signal depends on the combination of the dynamics of potentially many edges. There have been some recent works in the linear case on the conditions of identifiability: when is it possible to unambiguously recover local dynamics from a set of measured nodes? This question is important in order to design experiments and position sensors and excitations [2, 3, 4]. This depends mainly on the topology of the network and on the position of the excitation and measured signal. Graph-theoretical conditions are available in the full measurement or full excitation case [5], but not in the general case yet [6, 7, 8, 9]. However, most actual systems of interest are nonlinear, spanning many different research fields like coupled oscillators [10], gene regulatory networks [11], biochemical reaction networks [12], social networks [13], among others. While linear systems usually provide a local approximation of nonlinear phenomena, no one, to the best of our knowledge, has studied the identifiability question for nonlinear systems. The identification of a nonlinear system is itself a challenging problem due to the variety of potential models (e.g., Hammerstein, Wiener, Volterra series) and the constant emergence of new formulations for particular applications. Depending on the type of nonlinearities and their location (i.e., at the level of inputs, outputs or in the middle of interactions), certain models can be more suited for specific applications, while others may not give a good description of some systems [14, 15, 16]. In addition to the complexity of a single nonlinear model, a network involves several nonlinear systems associated with the edges, which generates complex collective behaviors and considerably increases the difficulty of the identification problem. In the nonlinear case, the conditions for identifiability of networks do not depend only on the network topology but also on the types of nonlinear functions. For instance, trigonometric functions in coupled oscillators [10] are very different from the activation functions in neural networks, which can be nondifferentiable [17].
Furthermore, in heterogeneous networks, different types of functions could be associated with different edges of the same network. In addition, the class of functions considered for the problem of identifiability can be decisive. It is clear that if we restrict the problem to a small class of functions, the conditions for identifiability of a network could be relaxed, but the functions in the class might not fit real models. Moreover, properties of functions such as continuity, differentiability, analyticity, etc., could play an important role in the determination of conditions for the identifiability of networks. We study here the question of identifiability in the nonlinear setting, assuming in this first work that the local dynamics have a very simple structure (i.e., the output of a node is entirely determined by static interactions with the neighbors). We show that, surprisingly, the conditions for identifiability in directed acyclic graphs are weaker than in the linear case, provided that the dynamics are indeed not linear and do not involve a constant output component (when they do, the problem is in fact unsolvable). We explain this by the fact that in the linear case, the loss of identifiability often results from ambiguities made possible by the superposition principle/superposition of signals, which is no longer possible in the nonlinear case. In this work, we provide a formulation of the network identifiability problem in the nonlinear case by considering a static model on the edges. By restricting the problem to a specific class of functions, we provide identifiability conditions for paths and trees. Furthermore, by considering a smaller class of functions, we derive conditions for the identifiability of directed acyclic graphs.

## II Problem formulation

### _Model class_

Since the type of nonlinear dynamics in a network can be really complex, in this preliminary work we will consider a static additive model to focus on the effect of the nonlinearities. Therefore, we exclude dynamical processes that involve any memory at the level of nodes or edges. Our objective is to generalize the results of this paper to more complex dynamical models in future works. For a network composed of \(n\) nodes, we consider that the output of each node \(i\) is given by:

\[y_{i}^{k}=\sum_{j\in\mathcal{N}_{i}}f_{i,j}(y_{j}^{k-1})+u_{i}^{k-1},\ \ \text{for all}\ i\in\{1,\ldots,n\}, \tag{1}\]

where the superscripts denote the value of the inputs and outputs at the specific time instant, \(f_{i,j}\) is a nonlinear function, \(\mathcal{N}_{i}\) is the set of in-neighbors of node \(i\), and \(u_{i}\) is an external excitation signal. The node \(i\) is not included in \(\mathcal{N}_{i}\), since that would imply a dynamical process at the level of the node. The model (1) corresponds to a nonlinear static version of the model considered in [5, 6, 7], where the nonlinearities are located on the edges. In this case, the output of a node \(i\) is determined by its own excitation signal \(u_{i}\) and the outputs of the neighbors \(y_{j}\), affected by a nonlinear function \(f_{i,j}\) associated with the edge connecting that neighbor. Notice that when the functions \(f_{i,j}\) in (1) are linear, the conditions for identifiability of linear networks derived in [5, 6, 7] also hold. In this work, we will consider analytic functions with a Taylor series that converges to the function for all \(x\in\mathbb{R}\).
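To fix ideas, model (1) is straightforward to simulate; below is a minimal Python/NumPy sketch on a hypothetical three-node graph with edges \(1\to 2\), \(1\to 3\) and \(2\to 3\) and arbitrarily chosen edge functions.

```python
import numpy as np

# Hypothetical example: edges 1 -> 2, 1 -> 3, 2 -> 3 with assumed functions.
f = {(2, 1): np.sin, (3, 1): np.tanh, (3, 2): lambda x: x ** 2}
in_neighbors = {1: [], 2: [1], 3: [1, 2]}

def step(y_prev, u_prev):
    """One step of model (1): y_i^k = sum_{j in N_i} f_{i,j}(y_j^{k-1}) + u_i^{k-1}."""
    return {i: sum(f[i, j](y_prev[j]) for j in in_neighbors[i]) + u_prev[i]
            for i in in_neighbors}

rng = np.random.default_rng(0)
y = {1: 0.0, 2: 0.0, 3: 0.0}
for k in range(10):
    u = {i: rng.normal() for i in y}  # full excitation: every node is excited
    y = step(y, u)
```

In what follows, each edge function \(f_{i,j}\) is represented through its Taylor series at the origin.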
This representation as power series will allow us to derive conditions for the identifiability of nonlinear networks. Model (1) corresponds to the full excitation case where all the nodes are excited. The nonzero functions \(f_{i,j}\) between the agents define the topology of the network \(G\), forming the set of edges \(E\). In this work, we do not consider multi-edges between two nodes. **Assumption 1**: _The topology of the network is known, where the presence of an edge implies a nonzero function._ Assumption 1 implies that we know which nodes are connected by nonzero functions. The objective is to determine which nodes need to be measured to identify all the nonlinear functions in the network. Similarly to [5, 6, 7], for the identification process we assume that in an ideal scenario the relations between excitations and outputs of the nodes have been perfectly identified. In this work, we restrict our attention to networks that do not contain any cycle (i.e., directed acyclic graphs). This implies that when we measure a node \(i\), we identify the function \(F_{i}^{k}\): \[y_{i}^{k} = u_{i}^{k-1}+F_{i}^{k}(u_{1}^{k-2},\ldots,u_{1}^{k-m_{1}},\ldots, u_{n_{i}}^{k-2},\ldots,u_{n_{i}}^{k-m_{n_{i}}}), \tag{2}\] \[1,\ldots,n_{i}\in\mathcal{N}_{i}^{p},\] where \(\mathcal{N}_{i}^{p}\) denotes the set of nodes that have a path to the measured node \(i\). The function \(F_{i}^{k}\) is implicitly defined by (1) and only depends on a finite number of inputs due to the absence of memory on the edges and nodes, and the absence of cycles. With a slight abuse of notation, we use the superscript in the function \(F_{i}^{k-s}\) to indicate that all the inputs in (2) are delayed by \(s\). **Example 1**: _Let us consider the graph in Fig. 1 where the measurement of the node 3 provides the output:_ \[y_{3}^{k} = u_{3}^{k-1}+F_{3}^{k} \tag{3}\] \[= u_{3}^{k-1}+f_{3,2}(u_{2}^{k-2}+f_{2,1}(u_{1}^{k-3}))+f_{3,1}(u_ {1}^{k-2}).\] _We can observe that the function \(F_{3}^{k}\) depends on the inputs of the nodes 1 and 2 that have a path to the node 3._ ### _Identifiability_ The identifiability problem is related to the possibility of identifying the functions \(f_{i,j}\) based on several measurements. For this, we introduce the following relationship between the measurements and the functions \(f_{i,j}\). **Definition 1** (Set of measured functions): _Given a set of measured nodes \(\mathcal{N}^{m}\), the set of measured functions \(F(\mathcal{N}^{m})\) associated with \(\mathcal{N}^{m}\) is given by:_ \[F(\mathcal{N}^{m}):=\{F_{i}^{k}\ |\ i\in\mathcal{N}^{m}\}.\] _We say that a function \(f_{i,j}\) associated with an edge satisfies \(F(\mathcal{N}^{m})\) if \(f_{i,j}\) can lead to \(F(\mathcal{N}^{m})\) through (1)._ For completely arbitrary functions, the identifiability problem can be really challenging or even unrealistic. For this reason, we restrict the identifiability problem to a certain class of functions \(\mathcal{F}\), which implies that the functions associated with the edges belong to \(\mathcal{F}\) and that the identifiability is considered only among the functions belonging to \(\mathcal{F}\). The different classes of functions will be specified depending on the results. 
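Before formalizing identifiability, the measured function of Example 1 is easy to check numerically; the following minimal sketch, with assumed edge functions, compares the recursive evaluation of (1) against the closed form (3).

```python
import numpy as np

rng = np.random.default_rng(1)
f21, f32, f31 = np.sin, np.tanh, np.square  # assumed edge functions

# Inputs u_i^k for k = 0, 1, 2; the node 3 is measured at k = 3.
u = {i: rng.normal(size=4) for i in (1, 2, 3)}

def y1(k): return u[1][k - 1]
def y2(k): return u[2][k - 1] + f21(y1(k - 1))
def y3(k): return u[3][k - 1] + f32(y2(k - 1)) + f31(y1(k - 1))

# Closed form (3): y_3^k = u_3^{k-1} + f_{3,2}(u_2^{k-2} + f_{2,1}(u_1^{k-3})) + f_{3,1}(u_1^{k-2}).
k = 3
closed = u[3][k - 1] + f32(u[2][k - 2] + f21(u[1][k - 3])) + f31(u[1][k - 2])
assert np.isclose(y3(k), closed)
```

Both evaluations agree, reflecting that \(F_{3}^{k}\) collects exactly the delayed inputs of the nodes that have a path to node 3.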
**Definition 2** (Edge identifiable): _In a network \(G\), an edge \(f_{i,j}\) is identifiable in a class \(\mathcal{F}\) if, given a set of measured functions \(F(\mathcal{N}^{m})\), every set of functions in \(\mathcal{F}\) leading to \(F(\mathcal{N}^{m})\) has the same \(f_{i,j}\)._

**Definition 3** (Network identifiable): _A network \(G\) is identifiable in a class \(\mathcal{F}\) if all the edges are identifiable in the class \(\mathcal{F}\)._

The function \(F_{i}^{k}\) in (2) is the most complete information that we can obtain when we measure a node \(i\). This implies that if it is not possible to identify the functions \(f_{i,j}\) with \(F_{i}^{k}\), these edges are unidentifiable. On the contrary, if the functions \(f_{i,j}\) are identifiable, it seems reasonable that under some conditions, the function \(F_{i}^{k}\) can be well approximated after sufficiently long experiments, which could allow us to identify the functions \(f_{i,j}\) approximately.

Fig. 1: The function \(F_{3}^{k}\) associated with the measurement of the node 3 depends on the past inputs of the nodes 1 and 2 that have a path to the node 3.

### _First results_

We provide a result about the information that we can obtain with the measurement of sinks and sources1.

Footnote 1: A source is a node with no incoming edges. A sink is a node with no outgoing edges.

**Proposition 1** (Sinks and sources): _The measurement of the sources is never necessary for the identifiability of the network. The measurement of all the sinks is necessary for the identifiability of the network._

First, the measurement of any source \(j\) generates the output \(y_{j}^{k}=u_{j}^{k-1}\), which does not provide any information about the functions associated with edges in the network. Next, let us consider a sink \(i\) with \(m\) incoming edges. The measurement of this sink provides an output:

\[y_{i}^{k}=u_{i}^{k-1}+f_{i,1}(y_{1}^{k-1})+\cdots+f_{i,m}(y_{m}^{k-1}), \tag{4}\]

and it is the only way of obtaining information about the functions \(f_{i,1},\ldots,f_{i,m}\). Thus, the measurement of all the sinks is necessary.

The following lemma provides a result about the structure of the function \(F_{i}^{k}\) associated with the measurement of a node \(i\) with respect to the excitation signals of the in-neighbors.

**Lemma 1**: _Let \(j\) be an in-neighbor of a measured node \(i\) and consider the function \(F_{i}^{k}\). Then, by assuming all the variables but \(u_{j}^{k-2}\) constant, we have:_

\[F_{i}^{k}=\alpha+f_{i,j}(u_{j}^{k-2}+\beta),\]

_where \(\alpha\) and \(\beta\) are constants with respect to \(u_{j}^{k-2}\)._

According to (2), the function \(F_{i}^{k}\) of a measured node \(i\) is given by:

\[F_{i}^{k}=\sum_{\ell=1}^{m}f_{i,\ell}(y_{\ell}^{k-1})=\sum_{\ell=1}^{m}f_{i,\ell}(u_{\ell}^{k-2}+F_{\ell}^{k-1}), \tag{5}\]

where \(m\) is the number of in-neighbors of the node \(i\). All the functions \(F_{\ell}^{k-1}\) depend on inputs delayed by at least one additional step, which implies that no \(F_{\ell}^{k-1}\) depends on \(u_{j}^{k-2}\). Finally, no \(f_{i,p}\) with \(p\neq j\) can be a function of \(u_{j}^{k-2}\) since there are no multi-edges. Lemma 1 implies that \(f_{i,j}\) in (5) is the only function that depends on \(u_{j}^{k-2}\), and that \(F_{j}^{k-1}\) does not depend on \(u_{j}^{k-2}\).
## III Paths and trees

### _Strong requirements for general nonlinear functions_

Since the conditions for the identifiability of linear networks are based on the existence of paths in the network that carry information from the excited nodes to the measured nodes [5, 7], we first focus on the conditions for the identifiability of a path graph2 in the nonlinear case.

Footnote 2: A path graph is a graph that can be drawn so that all the nodes and edges lie on a single straight line.

In the linear case, for this graph topology we only need to measure the sink to identify all the transfer functions of the network, thanks to the superposition principle [5]. However, this is not true in the nonlinear case.

**Example 2** (Path graph): _Fig. 2 presents a simple path graph with 3 nodes where the measurement of the sink is not enough to identify the network when general nonlinear functions are considered._

**Proposition 2** (General nonlinear functions): _For identifiability of a path graph in the class of general nonlinear functions, it is necessary to measure all the nodes except for the source._

Let us consider a path graph with \(n>2\) nodes and a node \(i\) in the middle, which is neither the source nor the sink. The output of the node \(i+1\) is given by:

\[y_{i+1}^{k}=u_{i+1}^{k-1}+F_{i+1}^{k}=u_{i+1}^{k-1}+f_{i+1,i}(u_{i}^{k-2}+f_{i,i-1}(u_{i-1}^{k-3}+F_{i-1}^{k-2})). \tag{6}\]

If the node \(i\) is not measured and we consider the functions \(\tilde{f}_{i,i-1}(x)=f_{i,i-1}(x)+\gamma\) and \(\tilde{f}_{i+1,i}(x)=f_{i+1,i}(x-\gamma)\) with \(\gamma\neq 0\), the function \(F_{i+1}^{k}\) in (6) is the same, which implies that the path graph cannot be identified. On the other hand, if we measure all the nodes, we know the function \(F_{j}^{k}\) associated with any node \(j\) in the network. Let us consider a node \(i\) with an in-neighbor and let us set all the inputs appearing in \(F_{i}^{k}\) to 0 except \(u_{i-1}^{k-2}\). Then, the measurement of the node \(i\) gives us:

\[y_{i}^{k}=u_{i}^{k-1}+F_{i}^{k}=u_{i}^{k-1}+f_{i,i-1}(u_{i-1}^{k-2}+F_{i-1}^{k-1}(0)). \tag{7}\]

If there is another function \(\tilde{f}_{i,i-1}\) satisfying \(F_{i}^{k}\) in (7), we would have for all \(u_{i-1}^{k-2}\in\mathbb{R}\):

\[f_{i,i-1}(u_{i-1}^{k-2}+F_{i-1}^{k-1}(0))=\tilde{f}_{i,i-1}(u_{i-1}^{k-2}+F_{i-1}^{k-1}(0)),\]

which implies that \(f_{i,i-1}=\tilde{f}_{i,i-1}\) and we can identify \(f_{i,i-1}\). Following a similar approach for the other nodes, we can identify all the nonlinear functions in the path graph. Finally, by Proposition 1, it is never necessary to measure the source.

Notice that in the proof of Proposition 2 we do not use properties of analytic functions, and the results are also valid for nonlinear functions that are not analytic. Proposition 2 shows that even in a simple graph topology like a path graph, which is the key for the identification of more complex network topologies, the identification of general nonlinear functions cannot be performed by only measuring the sink. This is due to the constant component associated with the static behavior of the functions.

Fig. 2: A path graph with 3 nodes and different nonlinear functions that satisfy \(F_{3}^{k}=\tilde{F}_{3}^{k}\). For any \(\gamma\neq 0\), the measurement of the sink is not enough for the identification of the network.

### _Identifiability conditions for functions with no constant effect_

We restrict the identifiability problem to a smaller class of functions without a static component.
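The ambiguity behind Proposition 2 is easy to reproduce numerically; a minimal sketch for the path of Fig. 2, with assumed functions and an arbitrary shift \(\gamma\), follows.

```python
import numpy as np

f21, f32, gamma = np.sin, np.tanh, 0.7  # assumed edge functions and shift
f21_t = lambda x: f21(x) + gamma        # \tilde{f}_{2,1}
f32_t = lambda x: f32(x - gamma)        # \tilde{f}_{3,2}

rng = np.random.default_rng(2)
u1, u2 = rng.normal(size=(2, 1000))
F3 = f32(u2 + f21(u1))                  # excitation part of the sink output
F3_t = f32_t(u2 + f21_t(u1))
assert np.allclose(F3, F3_t)            # the sink cannot tell the two apart
```

Requiring the edge functions to vanish at the origin, as in the next definition, rules out exactly this constant-shift ambiguity, since \(\tilde{f}_{2,1}(0)=\gamma\neq 0\).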
**Definition 4** (Class of functions \(\mathcal{F}_{Z}\)): _Let \(\mathcal{F}_{Z}\) be the class of functions \(f:\mathbb{R}\rightarrow\mathbb{R}\) with the following properties:_

1. \(f\) _is analytic in_ \(\mathbb{R}\)_._
2. \(f(0)=0\)_._

Thus, we consider that for any function in \(\mathcal{F}_{Z}\), the Taylor series at 0 converges to the function for all \(x\in\mathbb{R}\). Since the power series is unique, there is no loss of generality in considering the Taylor series at 0 and not centered at a different point in \(\mathbb{R}\). Notice that the class \(\mathcal{F}_{Z}\) encompasses numerous nonlinear functions [18], including polynomial functions, which are used for the approximation of continuous functions through the Weierstrass Approximation theorem [19]. Also, there is no loss of generality with the class \(\mathcal{F}_{Z}\) since the results are valid for the variable part of the functions.

**Lemma 2**: _For identifiability in the class \(\mathcal{F}_{Z}\), the measurement of a node provides the identification of all the incoming edges of the node._

Let us consider a node \(i\) with \(m\) incoming edges. The output of the node \(i\) is given by:

\[y_{i}^{k}=u_{i}^{k-1}+F_{i}^{k}(u_{1}^{k-2},\ldots,u_{m}^{k-2},\ldots), \tag{8}\]

where \(F_{i}^{k}\) is determined by the set of functions \(\{f\}\) associated with the edges of the network. Let us assume that there exists another set of functions \(\{\tilde{f}\}\neq\{f\}\) such that:

\[F_{i}^{k}(u_{1}^{k-2},\ldots,u_{m}^{k-2},\ldots)=\tilde{F}_{i}^{k}(u_{1}^{k-2},\ldots,u_{m}^{k-2},\ldots),\]

where \(\tilde{F}_{i}^{k}\) is composed by the functions in the set \(\{\tilde{f}\}\). Let us set all the inputs to zero except the input \(u_{j}^{k-2}\) of one in-neighbor \(j\in\{1,\ldots,m\}\) of the node \(i\), i.e., we choose a point of the form \((u_{j}^{k-2},0,\ldots,0)\). Then, since each function is in \(\mathcal{F}_{Z}\) and by Lemma 1, we have:

\[F_{i}^{k}=f_{i,j}(u_{j}^{k-2})\quad\text{and}\quad\tilde{F}_{i}^{k}=\tilde{f}_{i,j}(u_{j}^{k-2}).\]

Since we assume that \(F_{i}^{k}=\tilde{F}_{i}^{k}\), it yields:

\[f_{i,j}(u_{j}^{k-2})=\tilde{f}_{i,j}(u_{j}^{k-2}),\text{ for all }u_{j}^{k-2}\in\mathbb{R},\]

which implies that \(f_{i,j}=\tilde{f}_{i,j}\). Following a similar argument for each incoming edge of node \(i\), we prove that all the functions associated with the incoming edges of the node \(i\) are unique and can be identified.

Notice that due to other possible paths from an in-neighbor \(j\) of the node \(i\), additional terms of the form \(u_{j}^{k-r}\), \(r>2\), could appear in (8). However, they will always be delayed by virtue of Lemma 1. The following lemmas involve properties of analytic functions that will be used in the proof of the results in this section.

**Lemma 3** (Periodic functions): _If for some \(p_{0}\in\mathbb{R}\), an analytic function \(f:\mathbb{R}\rightarrow\mathbb{R}\) is periodic for all periods \(p\in[p_{0}-\epsilon,p_{0}+\epsilon]\) with \(\epsilon>0\), then the function \(f\) is constant._

The proof is left to Appendix A.

**Lemma 4**: _Given three non-zero analytic functions \(f:\mathbb{R}\rightarrow\mathbb{R}\) and \(g,\tilde{g}:\mathbb{R}^{m}\rightarrow\mathbb{R}\) satisfying \(g(0)=\tilde{g}(0)=0\). If for all \(x\in\mathbb{R}\), \(y\in\mathbb{R}^{m}\), the functions \(f\), \(g\) and \(\tilde{g}\) satisfy:_

\[f(x+g(y_{1},\ldots,y_{m}))=f(x+\tilde{g}(y_{1},\ldots,y_{m})),\]

_then either \(g=\tilde{g}\) or \(f\) is constant._

The proof is left to Appendix B.
**Corollary 1**: _Under the same conditions as in Lemma 4, if \(f(0)=0\), then_

\[g=\tilde{g}.\]

**Proposition 3** (Paths): _For identifiability of a path graph in the class \(\mathcal{F}_{Z}\), it is necessary and sufficient to measure the sink._

Let us consider a path with \(n\) nodes. The measurement of the sink gives us the output:

\[y_{n}^{k}=u_{n}^{k-1}+F_{n}^{k}=u_{n}^{k-1}+f_{n,n-1}(u_{n-1}^{k-2}+F_{n-1}^{k-1}). \tag{9}\]

Let us assume that there is a set \(\{\tilde{f}\}\neq\{f\}\) such that \(F_{n}^{k}=\tilde{F}_{n}^{k}\), which by (9) implies:

\[f_{n,n-1}(u_{n-1}^{k-2}+F_{n-1}^{k-1})=\tilde{f}_{n,n-1}(u_{n-1}^{k-2}+\tilde{F}_{n-1}^{k-1}).\]

By Lemma 2, we can guarantee that \(f_{n,n-1}=\tilde{f}_{n,n-1}\), and we have:

\[f_{n,n-1}(u_{n-1}^{k-2}+F_{n-1}^{k-1})=f_{n,n-1}(u_{n-1}^{k-2}+\tilde{F}_{n-1}^{k-1}).\]

Then, we use Corollary 1 to guarantee that \(F_{n-1}^{k-1}=\tilde{F}_{n-1}^{k-1}\). Notice that now the identifiability problem is equivalent to having measured the node \(n-1\), and by following a similar approach, we can continue with the identification of all the edges and guarantee that \(\{f\}=\{\tilde{f}\}\), so that the whole path can be identified.

**Proposition 4** (Trees): _For identifiability of a tree in the class \(\mathcal{F}_{Z}\), it is necessary and sufficient to measure all the sinks._

From Proposition 1, it is necessary to measure all the sinks. Let us consider an arbitrary tree and the measurement of a sink \(i\). Let us assume that there are \(m\) in-neighbors of the sink \(i\) and there is a set \(\{\tilde{f}\}\neq\{f\}\) such that \(F_{i}^{k}=\tilde{F}_{i}^{k}\), which implies:

\[\sum_{\ell=1}^{m}f_{i,\ell}(u_{\ell}^{k-2}+F_{\ell}^{k-1})=\sum_{\ell=1}^{m}\tilde{f}_{i,\ell}(u_{\ell}^{k-2}+\tilde{F}_{\ell}^{k-1}). \tag{10}\]

Since in a tree the functions \(F_{\ell}^{k-1}\) do not have common inputs, because they come from different branches, we can select an in-neighbor \(j\) and set to zero the inputs of all the nodes that do not have a path to \(j\), such that we have:

\[f_{i,j}(u_{j}^{k-2}+F_{j}^{k-1})=\tilde{f}_{i,j}(u_{j}^{k-2}+\tilde{F}_{j}^{k-1}).\]

Then, by using Lemma 2 and Corollary 1, we can guarantee that \(f_{i,j}=\tilde{f}_{i,j}\) and \(F_{j}^{k-1}=\tilde{F}_{j}^{k-1}\) for all \(j=1,\ldots,m\), which is equivalent to having measured the in-neighbors of \(i\). Then, we can continue with the identification of each branch independently, and by following the same approach we can identify all the paths that finish in the sink \(i\). Finally, by measuring the other sinks and following a similar approach, we can identify all the edges in the tree.

**Remark 1** (Linear functions): _Notice that Propositions 3 and 4 are also valid if all or some of the edges in the network contain purely linear functions. In the next section, we will provide stronger results in the identification of nonlinear networks when linear functions are excluded._

## IV Directed acyclic graphs

Directed acyclic graphs encompass a large number of graph topologies that present specific characteristics that can be used for the derivation of conditions for identifiability [20]. Unlike a tree, in a directed acyclic graph, the functions \(F_{\ell}^{k-1}\) in (10) can have common variables due to several possible paths of the same length between two nodes, which makes the application of Corollary 1 impossible. In order to obtain a result similar to Corollary 1 that allows us to identify a directed acyclic graph, we consider a smaller class of functions.
**Definition 5** (Class of functions \(\mathcal{F}_{Z,NL}\)): _Let \(\mathcal{F}_{Z,NL}\) be the class of functions \(f:\mathbb{R}\rightarrow\mathbb{R}\) with the following properties:_

1. \(f\) _is analytic in_ \(\mathbb{R}\)_._
2. \(f(0)=0\)_._
3. _The associated Taylor series_ \(f(x)=\sum_{n=1}^{\infty}a_{n}x^{n}\) _contains at least one coefficient_ \(a_{n}\neq 0\) _with_ \(n>1\)_._

The third property of the functions in \(\mathcal{F}_{Z,NL}\) implies that none of the functions is linear. Clearly, \(\mathcal{F}_{Z,NL}\) is a subclass of \(\mathcal{F}_{Z}\), and all the results of the previous section for functions in \(\mathcal{F}_{Z}\) are also valid for functions in \(\mathcal{F}_{Z,NL}\).

**Lemma 5**: _Given the non-zero analytic functions \(f_{i}:\mathbb{R}\rightarrow\mathbb{R}\) and \(g_{i},\tilde{g}_{i}:\mathbb{R}^{m}\rightarrow\mathbb{R}\) satisfying \(f_{i}(0)=g_{i}(0)=\tilde{g}_{i}(0)=0\) for \(i=1,\ldots,n\). Let us assume that none of the functions \(f_{i}\) is linear. If for all \(x\in\mathbb{R}^{n}\), \(y\in\mathbb{R}^{m}\), the functions \(f_{i}\), \(g_{i}\) and \(\tilde{g}_{i}\) satisfy:_

\[\sum_{i=1}^{n}f_{i}(x_{i}+g_{i}(y_{1},\ldots,y_{m}))=\sum_{i=1}^{n}f_{i}(x_{i}+\tilde{g}_{i}(y_{1},\ldots,y_{m})),\]

_then \(g_{i}=\tilde{g}_{i}\) for all \(i=1,\ldots,n\)._

The proof is left to Appendix C. Notice that when \(n=1\), Lemma 5 is also covered by Corollary 1.

**Proposition 5**: _For the functions in \(\mathcal{F}_{Z,NL}\), in a directed acyclic graph, the measurement of a node provides the identification of all the nonlinear functions of any path that finishes in the measured node._

Let us consider an arbitrary directed acyclic graph. The measurement of a node \(i\) provides an output of the type:

\[y_{i}^{k}=u_{i}^{k-1}+F_{i}^{k}=u_{i}^{k-1}+\sum_{j=1}^{m}f_{i,j}(u_{j}^{k-2}+F_{j}^{k-1}),\]

where \(m\) is the number of in-neighbors of \(i\). Let us assume that there is a set \(\{\tilde{f}\}\neq\{f\}\) such that \(F_{i}^{k}=\tilde{F}_{i}^{k}\), which implies:

\[\sum_{j=1}^{m}f_{i,j}(u_{j}^{k-2}+F_{j}^{k-1})=\sum_{j=1}^{m}\tilde{f}_{i,j}(u_{j}^{k-2}+\tilde{F}_{j}^{k-1}).\]

By applying Lemma 2, we have \(f_{i,j}=\tilde{f}_{i,j}\) for all \(j=1,\ldots,m\) and:

\[\sum_{j=1}^{m}f_{i,j}(u_{j}^{k-2}+F_{j}^{k-1})=\sum_{j=1}^{m}f_{i,j}(u_{j}^{k-2}+\tilde{F}_{j}^{k-1}),\]

and by using Lemma 5 we guarantee:

\[F_{j}^{k-1}=\tilde{F}_{j}^{k-1}\quad\text{for all }j=1,\ldots,m.\]

Notice that the identification of each \(F_{j}^{k-1}\) is equivalent to having measured the node \(j\) and can be treated in a similar way to the node \(i\), independently of the other paths corresponding to the other in-neighbors of \(i\). By following a similar approach, we can guarantee that \(\{f\}=\{\tilde{f}\}\) for every path that ends in the node \(i\).

**Theorem 1** (Directed acyclic graph): _For identifiability of a directed acyclic graph in the class \(\mathcal{F}_{Z,NL}\), it is necessary and sufficient to measure all the sinks._

From Proposition 1, it is necessary to measure all the sinks. In a directed acyclic graph, we can always find a path from any node \(i\) to some sink [21]. Therefore, according to Proposition 5, it is sufficient to measure the sinks to identify all the paths in a directed acyclic graph.

Unlike the linear case, where the measurement of the sinks is not enough to guarantee identifiability of directed acyclic graphs [5], Theorem 1 provides weaker conditions for the identifiability in the nonlinear case when linear functions are excluded.
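The sufficiency arguments above are constructive. As a small numerical illustration, consider the graph of Fig. 1 with assumed functions in \(\mathcal{F}_{Z,NL}\): zeroing all inputs except \(u_{1}^{k-2}\) exposes \(f_{3,1}\) pointwise at the sink.

```python
import numpy as np

# Assumed edge functions in F_{Z,NL}: analytic, vanishing at 0, none linear.
f21 = lambda x: np.sin(x)
f32 = lambda x: np.tanh(x)
f31 = lambda x: x + x ** 3

def F3(u1_k2, u1_k3, u2_k2):
    """Excitation part of the sink output, cf. (3)."""
    return f32(u2_k2 + f21(u1_k3)) + f31(u1_k2)

# Zero all inputs except u_1^{k-2}: since every function vanishes at 0,
# F_3^k collapses to f_{3,1}, which is thus recovered pointwise.
grid = np.linspace(-2.0, 2.0, 9)
assert np.allclose(F3(grid, 0.0, 0.0), f31(grid))
```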
**Example 3** (Directed acyclic graph): _Let us consider the graph in Fig. 3. In the linear case, this network cannot be identified by only measuring the sink since the functions \(f_{2,1}\) and \(f_{3,1}\) cannot be distinguished. However, in the nonlinear case, the measurement of the sink is enough to identify the whole network._

Fig. 3: Nonlinear network that can be identified by only measuring the sink. In the linear case, the measurement of the sink is not enough to identify this network.

## V Conclusions and future work

We have derived identifiability conditions for a network characterized by nonlinear interactions through a static model. We showed that in a path graph it is necessary to measure all the nodes, except for the source, when the nonlinear functions have a static component. Then, by restricting the identifiability problem to a specific class of functions, we showed that the measurement of the sinks is necessary and sufficient to identify all the edges in paths and trees. Finally, by considering a smaller class of functions, we showed that the measurement of the sinks is necessary and sufficient for the identifiability of directed acyclic graphs. This simple model of nonlinear interactions allowed us to highlight fundamental differences with respect to the linear case. For future work, it would be interesting to extend the results to the case of general digraphs with cycles, where the function \(F_{i}^{k}\) in (2) depends on an infinite number of inputs. Also, it would be important to consider dynamical models that include past inputs. In this case, the Volterra series seems to be the most adequate model since it only depends on past inputs, and many of our results could still hold.
2309.15196
Resolvability and convexity properties in the Sierpiński product of graphs
Let $G$ and $H$ be graphs and let $f \colon V(G)\rightarrow V(H)$ be a function. The Sierpi\'{n}ski product of $G$ and $H$ with respect to $f$, denoted by $G \otimes _f H$, is defined as the graph on the vertex set $V(G)\times V(H)$, consisting of $|V(G)|$ copies of $H$; for every edge $gg'$ of $G$ there is an edge between copies $gH$ and $g'H$ of $H$ associated with the vertices $g$ and $g'$ of $G$, respectively, of the form $(g,f(g'))(g',f(g))$. The Sierpi\'{n}ski metric dimension and the upper Sierpi\'{n}ski metric dimension of two graphs are determined. Closed formulas are determined for Sierpi\'{n}ski products of trees, and for Sierpi\'{n}ski products of two cycles where the second factor is a triangle. We also prove that the layers with respect to the second factor in a Sierpi\'{n}ski product graph are convex.
Michael A. Henning, Sandi Klavžar, Ismael G. Yero
2023-09-26T18:54:30Z
http://arxiv.org/abs/2309.15196v1
# Resolvability and convexity properties in the Sierpinski product of graphs

###### Abstract

Let \(G\) and \(H\) be graphs and let \(f\colon V(G)\to V(H)\) be a function. The Sierpinski product of \(G\) and \(H\) with respect to \(f\), denoted by \(G\otimes_{f}H\), is defined as the graph on the vertex set \(V(G)\times V(H)\), consisting of \(|V(G)|\) copies of \(H\); for every edge \(gg^{\prime}\) of \(G\) there is an edge between copies \(gH\) and \(g^{\prime}H\) of \(H\) associated with the vertices \(g\) and \(g^{\prime}\) of \(G\), respectively, of the form \((g,f(g^{\prime}))(g^{\prime},f(g))\). The Sierpinski metric dimension and the upper Sierpinski metric dimension of two graphs are determined. Closed formulas are determined for Sierpinski products of trees, and for Sierpinski products of two cycles where the second factor is a triangle. We also prove that the layers with respect to the second factor in a Sierpinski product graph are convex.

**Keywords:** Sierpinski product of graphs; Metric dimension; Tree; Convex subgraph

**AMS subject classification:** 05C12, 05C76

## 1 Introduction

Sierpinski graphs represent a very interesting and widely studied family of graphs. They were introduced in 1997 in the paper [21], where the primary motivation for their introduction was the intrinsic link to the Tower of Hanoi problem; for the latter problem see the book [16]. Intensive research of Sierpinski graphs led to a review article [15] in which the state of the art up to 2017 is summarized and a unified approach to Sierpinski-type graph families is also proposed. Later research on Sierpinski graphs includes [3, 7, 8, 9, 19, 25, 32]. In this paper we study a recent generalization of Sierpinski graphs proposed by Kovic, Pisanski, Zemljic, and Zitnik in [23]. Let \(G\) and \(H\) be graphs and let \(f\colon V(G)\to V(H)\) be an arbitrary function. The _Sierpinski product of graphs \(G\) and \(H\) with respect to \(f\)_, denoted by \(G\otimes_{f}H\), is defined as the graph on the vertex set \(V(G)\times V(H)\) with edges of two types:

* _Type-\(1\) edge_: \((g,h)(g,h^{\prime})\) is an edge of \(G\otimes_{f}H\) for every vertex \(g\in V(G)\) and every edge \(hh^{\prime}\in E(H)\),
* _Type-\(2\) edge_: \((g,f(g^{\prime}))(g^{\prime},f(g))\) is an edge of \(G\otimes_{f}H\) for every edge \(gg^{\prime}\in E(G)\).

We observe that the edges of Type-\(1\) induce \(n(G)=|V(G)|\) copies of the graph \(H\) in the Sierpinski product \(G\otimes_{f}H\). For each vertex \(g\in V(G)\), we let \(gH\) be the copy of \(H\) corresponding to the vertex \(g\). A Type-\(2\) edge joins vertices from different copies of \(H\) in \(G\otimes_{f}H\), and is called a _connecting edge_ of \(G\otimes_{f}H\). A vertex incident with a connecting edge is called a _connecting vertex_. We observe that two different copies of \(H\) in \(G\otimes_{f}H\) are joined by at most one edge. We denote by \(H^{G}\) the family of functions from \(V(G)\) to \(V(H)\). It might be readily observed that the Sierpinski product is closely related to other graph products. For instance, by considering a constant function \(f\) in the product, we obtain graphs which are indeed the same as the so-called rooted product graphs (see [12] for the definition). Also, selecting the identity function \(\operatorname{id}\in G^{G}\), the Sierpinski product \(G\otimes_{\operatorname{id}}G\) is the (first iteration of the) generalized Sierpinski graph in the sense of [13].
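For concreteness, the construction above is immediate to implement; a minimal sketch (networkx assumed, with \(f\) encoded as a dictionary from \(V(G)\) to \(V(H)\)) follows.

```python
import networkx as nx

def sierpinski_product(G, H, f):
    """Build the Sierpinski product: |V(G)| copies of H (Type-1 edges)
    plus one connecting edge (g, f(g')) -- (g', f(g)) per edge gg' of G."""
    P = nx.Graph()
    P.add_nodes_from((g, h) for g in G for h in H)
    # Type-1 edges: a copy gH of H for every vertex g of G.
    P.add_edges_from(((g, h), (g, h2)) for g in G for h, h2 in H.edges)
    # Type-2 (connecting) edges, one per edge of G.
    P.add_edges_from(((g, f[g2]), (g2, f[g])) for g, g2 in G.edges)
    return P

G, H = nx.path_graph(3), nx.cycle_graph(3)
f = {g: 0 for g in G}  # a constant function
P = sierpinski_product(G, H, f)
assert P.number_of_nodes() == 9 and P.number_of_edges() == 11
```

Choosing \(f\) constant, as in the example, corresponds to the rooted product situation just mentioned.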
Moreover, a Sierpinski product can also be considered as a subgraph of the (Cartesian, strong or lexicographic) product. Consequently, any contribution to the study of the Sierpinski product could give some more knowledge on these related products. In the next two subsections we give motivation, basic terminology, and notation concerning the classical metric dimension of graphs, and introduce the study of the Sierpinski metric dimension and the upper Sierpinski metric dimension. Thereafter, in Section 2 we determine the upper Sierpinski metric dimension for Sierpinski products of arbitrary trees. A general lower bound is established for the Sierpinski metric dimension of products of two trees, and an exact formula when the first factor is a path. In Section 3 a closed formula is determined for both dimensions when the first factor in the product is an arbitrary cycle and the second factor a triangle. In Section 4 we prove that the layers with respect to the second factor in a Sierpinski product graph are convex. In Section 5 we pose several open problems.

### The metric dimension of graphs

The _distance_ between two vertices \(u\) and \(v\) in a connected graph \(G\), denoted \(d_{G}(u,v)\), is the number of edges in a shortest path from \(u\) to \(v\), that is, \(d_{G}(u,v)\) is the minimum length of a \(u,v\)-path in \(G\). Given an ordered subset \(S=\{v_{1},\ldots,v_{k}\}\) of vertices in \(G\), the _metric \(S\)-representation_ of a vertex \(v\) in \(G\) is the \(k\)-tuple vector

\[r_{G}(v|S)=(d_{G}(v,v_{1}),\ldots,d_{G}(v,v_{k})).\]

If every two distinct vertices of \(G\) have different metric \(S\)-representations, then the set \(S\) is called a _resolving set_ of \(G\) (also called a _metric generator_). The _metric dimension_ of \(G\), denoted by \(\dim(G)\), is the cardinality of a smallest possible resolving set in \(G\). A _metric basis_ of \(G\) is a resolving set of cardinality \(\dim(G)\). A vertex \(v\) in a graph \(G\) is said to _distinguish_ (or _resolve_) two vertices \(x\) and \(y\) if \(d_{G}(v,x)\neq d_{G}(v,y)\). The concept of the metric dimension of a graph was introduced independently by Harary and Melter [14] in 1976 and by Slater [29] in 1975, and is now well studied in graph theory. To date, MathSciNet lists over 380 papers on metric dimension in graphs, covering a large number of different investigations dealing with theoretical and applied results on this parameter. Owing to the structural properties of resolving sets in graphs, they can easily be used to model several practical situations in which uniquely recognizing points or locations is required. That was precisely one of the motivations of the seminal works [14] and [29], where resolving sets were used for the location of intruders in networks. Since then, some other related models and applications have appeared. Among them, we highlight the recent work [31], where the authors presented a connection between a metric dimension parameter and the representation of genomic sequences. Among the theoretical studies on this topic, the literature contains a wide range of different contributions; some recent and remarkable articles are, for instance, [6, 11, 28]. For more information on investigations on the classical version we suggest the fairly complete survey [30]. With respect to the theoretical studies, the metric dimension of graph products and graph operations has attracted the attention of several investigations.
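The definitions above translate directly into a brute-force computation; the following small sketch (networkx assumed, helper names ours) is feasible only for small graphs.

```python
import networkx as nx
from itertools import combinations

def is_resolving(G, S, dist):
    """S is a resolving set iff the vectors r_G(v|S) are pairwise distinct."""
    reps = {tuple(dist[s][v] for s in S) for v in G}
    return len(reps) == G.number_of_nodes()

def metric_dimension(G):
    """dim(G) by exhaustive search over vertex subsets of increasing size."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for k in range(1, G.number_of_nodes() + 1):
        for S in combinations(G, k):
            if is_resolving(G, S, dist):
                return k

assert metric_dimension(nx.path_graph(5)) == 1  # paths have metric dimension 1
assert metric_dimension(nx.star_graph(3)) == 2  # K_{1,3}: 3 leaves, 1 exterior branch vertex
```

The star example is in line with the tree formula of Theorem 2.1 below.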
Concerning graph products, we mention a few interesting contributions related to this exposition, due to the relationship between the Sierpinski product and the other products mentioned above. The metric dimension of Cartesian product graphs has been considered in several works, like [4] for the general case and, among others, [5, 18, 20] for some particular examples of Cartesian products. The lexicographic product of graphs has been studied with respect to its metric dimension in [17, 27], while the strong product has been considered in [1, 26]. On the other hand, the metric dimension of the rooted product has been dealt with in [10, 24].

### Sierpinski metric dimension

Let \(G\) and \(H\) be graphs and let \(H^{G}\) be the family of functions from \(V(G)\) to \(V(H)\). We introduce new types of metric dimension: the _Sierpinski metric dimension_, denoted by \(\dim_{\mathrm{S}}(G,H)\), as the minimum over all functions \(f\) from \(H^{G}\) of the metric dimension of the Sierpinski product with respect to \(f\), and the _upper Sierpinski metric dimension_, denoted by \(\mathrm{Dim}_{\mathrm{S}}(G,H)\), as the maximum over all functions \(f\in H^{G}\) of the metric dimension of the Sierpinski product with respect to \(f\). That is,

\[\dim_{\mathrm{S}}(G,H)\coloneqq\min_{f\in H^{G}}\{\dim(G\otimes_{f}H)\}\]

and

\[\operatorname{Dim}_{\mathrm{S}}(G,H)\coloneqq\max_{f\in H^{G}}\{\dim(G\otimes_{f}H)\}\,.\]

We might remark that the classical metric dimension of Sierpinski graphs was already studied in [22], while that of the generalized Sierpinski graphs over stars was considered in [2].

## 2 Sierpinski products of trees

A vertex of degree at least \(3\) in a tree \(T\) is called a _branch vertex_ (also called a _major vertex_ in the literature). A leaf \(u\) of \(T\) is called a _terminal leaf_ of a branch vertex \(v\) of \(T\) if \(d_{T}(u,v)<d_{T}(u,w)\) for every other branch vertex \(w\) of \(T\). The _terminal degree_ of a branch vertex \(v\) is the number of terminal leaves associated with \(v\). A branch vertex \(v\) of \(T\) is an _exterior branch vertex_ of \(T\) if it has positive terminal degree. The path from a terminal leaf to the vertex immediately preceding the branch vertex that it is closest to is called a _terminal path_. Thus, every vertex on a terminal path in \(T\) is either a leaf of \(T\) or has degree \(2\) in \(T\). A vertex on a terminal path that has degree \(2\) in \(T\) is called an _internal terminal vertex_. Equivalently, every vertex on a terminal path that is not a terminal leaf is an internal terminal vertex. Thus, if \(u\) is an internal terminal vertex in \(T\), then the vertex \(u\) is an internal vertex of a path \(P\) that joins a leaf and a branch vertex closest to that leaf in \(T\), where every internal vertex of \(P\) has degree \(2\) in \(T\). Let \(n_{1}(T)\) denote the number of leaves of \(T\), and let \(\operatorname{ex}(T)\) denote the number of exterior branch vertices of \(T\). The formula for the metric dimension of a tree reads as follows.

**Theorem 2.1**.: ([14, 29]) _If \(T\) is a tree that is not a path, then_

\[\dim(T)=n_{1}(T)-\operatorname{ex}(T). \tag{1}\]

It is clear that \(\dim(P_{n})=1\). Combining this fact with Theorem 2.1 yields the following consequence.

**Corollary 2.2**.: _If \(T\) is a tree, then \(\dim(T)=1\) if \(T\) is a path and \(\dim(T)\geq 2\) if \(T\) is not a path._

Let \(T\) be a tree that is not a path, and let \(v_{1},\ldots,v_{k}\) be the exterior branch vertices in \(T\) that have terminal degree at least \(2\).
For each \(i\in[k]\), suppose the exterior branch vertex \(v_{i}\) has terminal degree \(\ell_{i}\geq 2\), and let \(L_{i}\) be a set consisting of all terminal leaves but one associated with \(v_{i}\). Then (1) can be equivalently stated as: \[\dim(T)=\sum_{i=1}^{k}(\ell_{i}-1),\] and the set \[B(T)=\bigcup_{i=1}^{k}L_{i}\] is a metric basis of \(T\) (of cardinality \(\dim(T)\)). We call the basis \(B(T)\) a _standard metric basis_ of \(T\). Thus, every vertex in a standard metric basis of a tree \(T\) is a leaf, and for each exterior branch vertex of terminal degree at least \(2\) in \(T\), such a basis contains all but one selected fixed terminal leaf associated with it. ### Upper Sierpinski metric dimension in trees In this section we determine the upper Sierpinski metric dimension of the Sierpinski product of trees. Notice that for any trees \(T_{1}\) and \(T_{2}\) and any function \(f\colon V(T_{1})\to V(T_{2})\), the Sierpinski product \(T_{1}\otimes_{f}T_{2}\) is a tree. **Theorem 2.3**.: _If \(T_{1}\) and \(T_{2}\) are trees with \(n(T_{2})\geq 3\), then_ \[\operatorname{Dim}_{\mathrm{S}}(T_{1},T_{2})=n(T_{1})\dim(T_{2}).\] Proof.: Assume first that \(T_{2}\) is not a path, let \(w\) be a branch vertex of \(T_{2}\), and let \(f_{w}\colon V(T_{1})\to V(T_{2})\) be the constant function defined by \(f_{w}(v)=w\) for every vertex \(v\in V(T_{1})\). The exterior branch vertices in \(T_{1}\otimes_{f_{w}}T_{2}\) are precisely the exterior branch vertices in each of the copies of \(T_{2}\), and so \(\operatorname{ex}(T_{1}\otimes_{f_{w}}T_{2})=n(T_{1})\operatorname{ex}(T_{2})\). Moreover, the leaves in \(T_{1}\otimes_{f_{w}}T_{2}\) are precisely the leaves in each of the copies of \(T_{2}\), and so \(n_{1}(T_{1}\otimes_{f_{w}}T_{2})=n(T_{1})n_{1}(T_{2})\). Therefore, as \(T_{1}\otimes_{f_{w}}T_{2}\) is a tree, by (1) we have \[\begin{split}\dim(T_{1}\otimes_{f_{w}}T_{2})&=n_{1}(T_{1}\otimes_{f_{w}}T_{2})-\operatorname{ex}(T_{1}\otimes_{f_{w}}T_{2})\\ &=n(T_{1})n_{1}(T_{2})-n(T_{1})\operatorname{ex}(T_{2})\\ &=n(T_{1})(n_{1}(T_{2})-\operatorname{ex}(T_{2}))\\ &=n(T_{1})\dim(T_{2}),\end{split}\] implying that \[\operatorname{Dim}_{\mathrm{S}}(T_{1},T_{2})\geq\dim(T_{1}\otimes_{f_{w}}T_{2})=n(T_{1})\dim(T_{2}). \tag{2}\] If \(T_{2}\) is a path, then let \(w\) instead be an internal vertex of \(T_{2}\), which exists since \(n(T_{2})\geq 3\). In this case every copy of \(T_{2}\) in \(T_{1}\otimes_{f_{w}}T_{2}\) keeps both of its leaves, while the vertex corresponding to \(w\) in each copy becomes the unique exterior branch vertex of that copy, so \(n_{1}(T_{1}\otimes_{f_{w}}T_{2})=2n(T_{1})\) and \(\operatorname{ex}(T_{1}\otimes_{f_{w}}T_{2})=n(T_{1})\). Hence \(\dim(T_{1}\otimes_{f_{w}}T_{2})=n(T_{1})=n(T_{1})\dim(T_{2})\), and (2) holds in this case as well (for \(n(T_{1})=1\) the product is \(T_{2}\) itself, so (2) is immediate). We next show that \(\operatorname{Dim}_{\mathrm{S}}(T_{1},T_{2})\leq n(T_{1})\dim(T_{2})\). Let \(f\colon V(T_{1})\to V(T_{2})\) be an arbitrary function. It suffices to show that \[\dim(T_{1}\otimes_{f}T_{2})\leq n(T_{1})\dim(T_{2}).\] Consider first the case when \(T_{2}\) is a path \(P_{n}\) with \(n\geq 3\). Since \(\dim(P_{n})=1\), we need to prove that \(\dim(T_{1}\otimes_{f}P_{n})\leq n(T_{1})\). Suppose to the contrary that \(\dim(T_{1}\otimes_{f}P_{n})>n(T_{1})\). Thus, from (1) we have that \(n_{1}(T_{1}\otimes_{f}P_{n})-\operatorname{ex}(T_{1}\otimes_{f}P_{n})=\dim(T_{1}\otimes_{f}P_{n})>n(T_{1})\). Since every leaf of \(T_{1}\otimes_{f}P_{n}\) is a leaf of one of the \(n(T_{1})\) copies of \(P_{n}\), we have \(n_{1}(T_{1}\otimes_{f}P_{n})\leq 2n(T_{1})\). Hence, \[\operatorname{ex}(T_{1}\otimes_{f}P_{n})<n_{1}(T_{1}\otimes_{f}P_{n})-n(T_{1})\leq 2n(T_{1})-n(T_{1})=n(T_{1}),\] which means there is a positive integer \(k\) such that \(\operatorname{ex}(T_{1}\otimes_{f}P_{n})=n(T_{1})-k\). Now, since \(T_{1}\otimes_{f}P_{n}\) has \(n(T_{1})-k\) exterior branch vertices, there must be at least \(k\) copies of \(P_{n}\) in \(T_{1}\otimes_{f}P_{n}\) not containing any exterior branch vertex. This situation can only happen when, in each of these copies, a connecting edge of \(T_{1}\otimes_{f}P_{n}\) is incident with at least one leaf of the copy (otherwise the branch vertex of \(T_{1}\otimes_{f}P_{n}\) closest to such a leaf would be an exterior branch vertex lying in that copy). Consequently, each of these \(k\) copies loses at least one of its two leaves, and we deduce that \(n_{1}(T_{1}\otimes_{f}P_{n})\leq 2n(T_{1})-k\).
Therefore, by (1) we have \[\begin{split}\dim(T_{1}\otimes_{f}P_{n})&=n_{1}(T_{1}\otimes_{f}P_{n})-\operatorname{ex}(T_{1}\otimes_{f}P_{n})\\ &\leq 2n(T_{1})-k-(n(T_{1})-k)\\ &=n(T_{1}),\end{split}\] which contradicts our assumption; thus \(\dim(T_{1}\otimes_{f}P_{n})\leq n(T_{1})\), as required. We next consider the case when \(T_{2}\) is not a path. Let \(E_{c}=\{e_{1},e_{2},\ldots,e_{m(T_{1})}\}\) be the set of connecting edges in \(T_{1}\otimes_{f}T_{2}\). We order these connecting edges and call \(e_{i}\) the \(i\)th _connecting edge_ of \(T_{1}\otimes_{f}T_{2}\) for \(i\in[m(T_{1})]\). We next define forests \(X_{0},X_{1},\ldots,X_{m(T_{1})}\) as follows. Let \(X_{0}\) be obtained from the tree \(T_{1}\otimes_{f}T_{2}\) by removing the connecting edges in \(E_{c}\). We note that \(X_{0}\) is the disjoint union of \(n(T_{1})\) copies of the tree \(T_{2}\). Applying (1) to each component of the forest \(X_{0}\) (each component is a copy of \(T_{2}\), which is not a path) we have \[\begin{split}\dim(X_{0})&=n_{1}(X_{0})-\operatorname{ex}(X_{0})\\ &=n(T_{1})n_{1}(T_{2})-n(T_{1})\operatorname{ex}(T_{2})\\ &=n(T_{1})(n_{1}(T_{2})-\operatorname{ex}(T_{2}))\\ &=n(T_{1})\dim(T_{2}).\end{split}\] We now define the forests \(X_{1},\ldots,X_{m(T_{1})}\) as follows. For \(i\in[m(T_{1})]\), let \(X_{i}\) be the forest obtained from \(X_{i-1}\) by adding the \(i\)th connecting edge, that is, \(X_{i}=X_{i-1}\cup\{e_{i}\}\). We note that \[T_{1}\otimes_{f}T_{2}=X_{m(T_{1})}.\] Applying (1) to each component of the forest \(X_{i}\) we have \[\dim(X_{i})=n_{1}(X_{i})-\operatorname{ex}(X_{i})\] for all \(i\in\{0,1,\ldots,m(T_{1})\}\). Let \[\Phi_{1}(X_{i})=n_{1}(X_{i-1})-n_{1}(X_{i})\,,\] which is the number of vertices of degree \(1\) in \(X_{i-1}\) that have degree at least \(2\) in \(X_{i}\). Roughly speaking, \(\Phi_{1}(X_{i})\) is the number of degree \(1\) vertices in \(X_{i-1}\) "destroyed" by adding the \(i\)th connecting edge \(e_{i}\) to \(X_{i-1}\) when constructing \(X_{i}\). Set further \[\Phi_{2}(X_{i})=-[\operatorname{ex}(X_{i-1})-\operatorname{ex}(X_{i})]=\operatorname{ex}(X_{i})-\operatorname{ex}(X_{i-1})\,,\] where \(\operatorname{ex}(X_{i-1})-\operatorname{ex}(X_{i})\) is the difference between the number of exterior branch vertices in \(X_{i-1}\) and the number of exterior branch vertices in \(X_{i}\). We note that \[\begin{split}\dim(X_{i})&=n_{1}(X_{i})-\operatorname{ex}(X_{i})\\ &=(n_{1}(X_{i-1})-\Phi_{1}(X_{i}))-(\Phi_{2}(X_{i})+\operatorname{ex}(X_{i-1}))\\ &=\dim(X_{i-1})-(\Phi_{1}(X_{i})+\Phi_{2}(X_{i}))\end{split}\] for all \(i\in[m(T_{1})]\). We would like to show that \[\Phi_{1}(X_{i})+\Phi_{2}(X_{i})\geq 0 \tag{3}\] for all \(i\in[m(T_{1})]\), which would imply that \(\dim(X_{i})\leq\dim(X_{i-1})\leq\cdots\leq\dim(X_{0})=n(T_{1})\dim(T_{2})\), from which we deduce that \[\dim(T_{1}\otimes_{f}T_{2})=\dim(X_{m(T_{1})})\leq n(T_{1})\dim(T_{2}).\] Hence, to prove the theorem it remains to show that (3) holds. For this purpose, let the \(i\)th connecting edge \(e_{i}\) join vertices \(x_{i}\) and \(y_{i}\) in \(X_{i-1}\) when constructing \(X_{i}\). Suppose first that \(x_{i}\) is neither an internal terminal vertex nor a leaf in \(X_{i-1}\). In this case, the vertex \(x_{i}\) contributes \(0\) to both terms \(\Phi_{1}(X_{i})\) and \(\Phi_{2}(X_{i})\). Suppose next that \(x_{i}\) is an internal terminal vertex in \(X_{i-1}\). Let \(w_{i}\) be the exterior branch vertex associated with the vertex \(x_{i}\) in \(X_{i-1}\).
In this case, when \(e_{i}\) is added to \(X_{i-1}\), the vertex \(x_{i}\) becomes an exterior branch vertex in \(X_{i}\), while the vertex \(w_{i}\) may no longer be an exterior branch vertex, implying that the vertex \(x_{i}\) contributes \(0\) to the term \(\Phi_{1}(X_{i})\), and contributes at least \(1-1=0\) to the term \(\Phi_{2}(X_{i})\). Suppose finally that \(x_{i}\) is a leaf in \(X_{i-1}\). As before, let \(w_{i}\) be the exterior branch vertex associated with the vertex \(x_{i}\) in \(X_{i-1}\). In this case, when \(e_{i}\) is added to \(X_{i-1}\), the leaf \(x_{i}\) of \(X_{i-1}\) is not a leaf in \(X_{i}\), and therefore the vertex \(x_{i}\) contributes \(1\) to the term \(\Phi_{1}(X_{i})\). Moreover, the effect of adding \(e_{i}\) is that the vertex \(w_{i}\) may no longer be an exterior branch vertex, implying that the vertex \(x_{i}\) contributes at least \(-1\) to the term \(\Phi_{2}(X_{i})\). In all of the above three cases, the contribution of \(x_{i}\) to \(\Phi_{1}(X_{i})+\Phi_{2}(X_{i})\) is at least \(0\). Analogous arguments hold for the vertex \(y_{i}\), showing that the contribution of \(y_{i}\) to \(\Phi_{1}(X_{i})+\Phi_{2}(X_{i})\) is at least \(0\). Therefore, (3) holds. This completes the proof of Theorem 2.3. \(\Box\) ### Sierpinski metric dimension in trees In this section we study the Sierpinski metric dimension of the Sierpinski product of trees. The Sierpinski metric dimension of two paths is given by the following result. **Proposition 2.4**.: _If \(T_{1}\) and \(T_{2}\) are both paths, then \(\dim_{\mathrm{S}}(T_{1},T_{2})=1\)._ Proof.: Let \(T_{1}=P_{n}\) and let \(T_{2}=P_{m}\). If \(n=1\) or \(m=1\), then \(T_{1}\otimes_{f}T_{2}\) is a path for every function \(f\), and so by Corollary 2.2, \(\dim_{\mathrm{S}}(T_{1},T_{2})=1\). Hence we may assume that \(n\geq 2\) and \(m\geq 2\). Let the path \(T_{2}\) be an \(x,y\)-path that starts at vertex \(x\) and ends at vertex \(y\), and let \(T_{1}\) be the path \(v_{1}v_{2}\ldots v_{n}\). Let \(f\colon V(T_{1})\to V(T_{2})\) be the function defined by \[f(v_{i})=\left\{\begin{array}{ll}x;&i\,(\mbox{mod }4)\in\{1,2\},\\ y;&\mbox{otherwise},\end{array}\right.\] for all \(i\in[n]\). With this choice, every copy of \(T_{2}\) in \(T_{1}\otimes_{f}T_{2}\) is entered through one of its end-vertices and left through the other, so \(T_{1}\otimes_{f}T_{2}\) is a path. Hence \(\dim(T_{1}\otimes_{f}T_{2})=1\), and therefore \(\dim_{\mathrm{S}}(T_{1},T_{2})=1\). \(\Box\)
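To see the function used in this proof in action, the following Python sketch (with the same adjacency-dictionary conventions and our own helper names as in the earlier sketch) builds the Sierpinski product of two paths for the alternating function \(f\) above and checks that the result is indeed a path, so that its metric dimension is \(1\).

```python
def sierpinski_product(G_adj, H_adj, f):
    """Sierpinski product as an adjacency dict on vertex pairs (g, h)."""
    adj = {(g, h): set() for g in G_adj for h in H_adj}
    for g in G_adj:                      # one copy of H for every vertex of G
        for h in H_adj:
            adj[(g, h)].update((g, h2) for h2 in H_adj[h])
    for g in G_adj:                      # one connecting edge for every edge of G
        for g2 in G_adj[g]:
            adj[(g, f[g2])].add((g2, f[g]))
            adj[(g2, f[g])].add((g, f[g2]))
    return adj

def path(n):
    """Adjacency dict of the path P_n on vertices 1, ..., n."""
    return {i: [j for j in (i - 1, i + 1) if 1 <= j <= n] for i in range(1, n + 1)}

n, m = 6, 4
T1, T2 = path(n), path(m)                # T2 is an x,y-path with x = 1, y = m
f = {i: (1 if i % 4 in (1, 2) else m) for i in T1}
X = sierpinski_product(T1, T2, f)

# The Sierpinski product of two trees is a tree; a tree is a path exactly when
# its maximum degree is at most 2, which the alternating choice of f ensures.
assert max(len(nbrs) for nbrs in X.values()) <= 2
print("the product is a path on", len(X), "vertices, so its metric dimension is 1")
```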
We next establish a general lower bound on the Sierpinski metric dimension, where we use the notation \(d_{T}(v)\) to represent the _degree_ of a vertex \(v\) in \(T\). **Lemma 2.5**.: _If \(T_{1}\) and \(T_{2}\) are trees, where \(T_{2}\) is not a path, then_ \[\dim_{\mathrm{S}}(T_{1},T_{2})\geq\sum_{v\in V(T_{1})}\max\{0,\dim(T_{2})-d_{T_{1}}(v)\}.\] Proof.: Let \(f\colon V(T_{1})\to V(T_{2})\) be an arbitrary function. Recall that for each vertex \(v\in V(T_{1})\), \(vT_{2}\) denotes the copy of the tree \(T_{2}\) in \(T_{1}\otimes_{f}T_{2}\) corresponding to the vertex \(v\). We let \(C_{v}\) be the set of vertices in \(vT_{2}\) that are connecting vertices in \(T_{1}\otimes_{f}T_{2}\). Thus, each vertex in \(C_{v}\) is incident with a connecting edge in \(T_{1}\otimes_{f}T_{2}\) that joins it to a vertex in a copy of \(T_{2}\) different from \(vT_{2}\). We note that \[|C_{v}|\leq d_{T_{1}}(v)\] since every edge incident with \(v\) in the tree \(T_{1}\) is associated with a connecting edge in \(T_{1}\otimes_{f}T_{2}\) that is incident with a vertex in \(vT_{2}\). Let \(B\) be a standard metric basis of \(T_{1}\otimes_{f}T_{2}\), so that \(\dim(T_{1}\otimes_{f}T_{2})=|B|\) and, for each exterior branch vertex of terminal degree at least \(2\) in \(T_{1}\otimes_{f}T_{2}\), the basis \(B\) contains all but one of its associated terminal leaves. Let \(B_{v}\) be the restriction of \(B\) to \(vT_{2}\), that is, \[B_{v}=B\cap V(vT_{2})\] for every vertex \(v\in V(T_{1})\). The set \(B_{v}\cup C_{v}\) is a resolving set in the tree \(vT_{2}\). Indeed, if two vertices of \(vT_{2}\) are distinguished by a vertex \(b\in B\) lying outside \(vT_{2}\), then they are also distinguished by the connecting vertex \(c\in C_{v}\) through which every path from \(b\) enters \(vT_{2}\), since \(d(b,x)=d(b,c)+d(c,x)\) for every \(x\in V(vT_{2})\). Hence \[\dim(T_{2})=\dim(vT_{2})\leq|B_{v}|+|C_{v}|\leq|B_{v}|+d_{T_{1}}(v),\] and so \(|B_{v}|\geq\dim(T_{2})-d_{T_{1}}(v)\). As clearly \(|B_{v}|\geq 0\), we get \[|B_{v}|\geq\max\{0,\dim(T_{2})-d_{T_{1}}(v)\}\] for every vertex \(v\in V(T_{1})\). Therefore, \[\dim(T_{1}\otimes_{f}T_{2})=|B|=\sum_{v\in V(T_{1})}|B_{v}|\geq\sum_{v\in V(T_{1})}\max\{0,\dim(T_{2})-d_{T_{1}}(v)\},\] and since \(f\) was arbitrary, the desired lower bound on \(\dim_{\mathrm{S}}(T_{1},T_{2})\) in the statement of the lemma follows. \(\Box\) Using Lemma 2.5, we have the following result. **Theorem 2.6**.: _For \(n\geq 2\), if \(T_{1}=P_{n}\) and \(T_{2}\) is a tree that is not a path, then_ \[\dim_{\mathrm{S}}(T_{1},T_{2})=n(\dim(T_{2})-2)+2.\] Proof.: Let \(T_{1}=P_{n}\) and let \(T_{2}\) be a tree that is not a path. By Corollary 2.2, \(\dim(T_{2})\geq 2\). Since \(T_{1}\) contains two vertices of degree \(1\) and \(n-2\) vertices of degree \(2\), Lemma 2.5 yields \[\dim_{\mathrm{S}}(T_{1},T_{2})\geq 2(\dim(T_{2})-1)+(n-2)(\dim(T_{2})-2)=n(\dim(T_{2})-2)+2.\] Hence, it suffices to show that \[\dim_{\mathrm{S}}(T_{1},T_{2})\leq n(\dim(T_{2})-2)+2.\] By assumption, \(T_{2}\) is not a path. Hence, \(T_{2}\) contains at least one exterior branch vertex with terminal degree at least \(2\). If \(T_{2}\) contains two distinct exterior branch vertices both with terminal degree at least \(2\), then let \(u_{1}\) and \(u_{2}\) be two selected (terminal) leaves associated with these two exterior branch vertices. If \(T_{2}\) contains only one exterior branch vertex, then \(T_{2}\) is a star or a subdivided star whose center has terminal degree at least \(3\). In this case, let \(u_{1}\) and \(u_{2}\) be two arbitrary leaves in \(T_{2}\). Let \(T_{1}\) be the path \(v_{1}v_{2}\ldots v_{n}\), and define the function \(f\colon V(T_{1})\to V(T_{2})\) by \[f(v_{i})=\left\{\begin{array}{ll}u_{1};&i\,(\mbox{mod }4)\in\{1,2\},\\ u_{2};&\mbox{otherwise},\end{array}\right.\] for all \(i\in[n]\). For notational simplicity, instead of \(v_{i}T_{2}\) we simply write \(iT_{2}\) for the copy of \(T_{2}\) in the Sierpinski product \(T_{1}\otimes_{f}T_{2}\) that corresponds to the vertex \(v_{i}\) for all \(i\in[n]\).
We note that the copies \(1T_{2}\) and \(nT_{2}\) each contain one less leaf in the product \(T_{1}\otimes_{f}T_{2}\), while every copy \(iT_{2}\) with \(i\in[n-1]\setminus\{1\}\) contains two fewer leaves in the product \(T_{1}\otimes_{f}T_{2}\), implying that \[n_{1}(T_{1}\otimes_{f}T_{2})=n(T_{1})(n_{1}(T_{2})-2)+2=n(n_{1}(T_{2})-2)+2.\] On the other hand, by our choice of the vertices \(u_{1}\) and \(u_{2}\), the number of exterior branch vertices in each copy \(iT_{2}\), \(i\in[n]\), remains unchanged in the product \(T_{1}\otimes_{f}T_{2}\), that is, \[\operatorname{ex}(T_{1}\otimes_{f}T_{2})=n(T_{1})\operatorname{ex}(T_{2})=n\cdot\operatorname{ex}(T_{2}).\] Therefore, \[\begin{split}\dim(T_{1}\otimes_{f}T_{2})&=n_{1}(T_{1}\otimes_{f}T_{2})-\operatorname{ex}(T_{1}\otimes_{f}T_{2})\\ &=(n(n_{1}(T_{2})-2)+2)-n\cdot\operatorname{ex}(T_{2})\\ &=n(n_{1}(T_{2})-\operatorname{ex}(T_{2})-2)+2\\ &=n(\dim(T_{2})-2)+2,\end{split}\] completing the proof of Theorem 2.6. \(\Box\) ## 3 Sierpinski products of cycles In this section we study the Sierpinski metric dimension of the Sierpinski product of cycles, and prove closed formulas for both dimensions when the second factor is a triangle. **Theorem 3.1**.: _If \(n\geq 3\), then \(\mathrm{Dim}_{\mathrm{S}}(C_{n},C_{3})=n\)._ Proof.: Let \(H=C_{3}\) and \(G=C_{n}\). Let \(V(H)=[3]\), and let the vertices of the cycle \(G\) be \(g_{1},g_{2},\ldots,g_{n}\) in the natural order of adjacencies. Let \(f\colon V(G)\to V(H)\) be an arbitrary function and consider \(G\otimes_{f}H\). For notational simplicity, let \(iH\) denote \(g_{i}H\) for all \(i\in[n]\), that is, \(iH\) is the \(i\)th copy of \(H\) corresponding to the vertex \(g_{i}\) of \(G\). Let \(x_{i}y_{i+1}\) be the connecting edge between \(iH\) and \((i+1)H\), \(i\in[n]\), where indices are taken modulo \(n\). Thus, \(x_{i}=(g_{i},f(g_{i+1}))\) and \(y_{i+1}=(g_{i+1},f(g_{i}))\). Further, let \(w_{i}\) be a vertex in \(iH\) different from \(x_{i}\) and \(y_{i}\). Note that if \(x_{i}\neq y_{i}\), then \(V(iH)=\{x_{i},y_{i},w_{i}\}\). In case \(x_{i}=y_{i}\), let \(w_{i}^{\prime}\) be the third vertex of \(V(iH)\), so that in this case we have \(V(iH)=\{x_{i},w_{i},w_{i}^{\prime}\}\). Let \(Z=\{z_{1},z_{2},\ldots,z_{n}\}\subseteq V(G\otimes_{f}H)\) be defined as follows. Consider an arbitrary \(iH\), \(i\in[n]\). If \(x_{i}\neq y_{i}\), then let \(z_{i}=y_{i}\), and if \(x_{i}=y_{i}\), then let \(z_{i}=w_{i}\). Since the vertices \(z_{i}\) are pairwise different, \(|Z|=n\). We claim that \(Z\) is a resolving set of \(G\otimes_{f}H\). Consider first two vertices from a given \(iH\). If \(z_{i}=w_{i}\), then the vertices \(x_{i}\) and \(w_{i}^{\prime}\) are distinguished by all the vertices of \(Z\setminus\{z_{i}\}\), since \(d(w_{i}^{\prime},z)=d(x_{i},z)+1\) for every vertex \(z\) outside \(iH\). If \(z_{i}=y_{i}\), then \(d(w_{i},z_{i+1})=d(x_{i},z_{i+1})+1\) and therefore \(x_{i}\) and \(w_{i}\) are distinguished. Consider now two vertices \(u\in V(iH)\) and \(v\in V(jH)\), where \(i\neq j\). Assume first that \(z_{i}=w_{i}\). Then \(d(u,z_{i})\leq 1\) while \(d(v,z_{i})>1\), and so \(u\) and \(v\) are distinguished by \(z_{i}\). The case when \(z_{j}=w_{j}\) is treated in exactly the same way. Hence it remains to consider the case when \(z_{i}=y_{i}\) and \(z_{j}=y_{j}\). If \(|i-j|>1\), then as before \(d(u,z_{i})<d(v,z_{i})\). Assume finally that \(j=i+1\). But then \(d(v,z_{i})>d(u,z_{i})\) and so again \(u\) and \(v\) are distinguished. We have thus proved that \(Z\) is a resolving set, and hence \(\dim(G\otimes_{f}H)\leq|Z|=n\). Since \(f\) was arbitrary, \(\operatorname{Dim}_{\mathrm{S}}(C_{n},C_{3})\leq n\).
To prove that \(\operatorname{Dim}_{\mathrm{S}}(C_{n},C_{3})\geq n\), consider the constant function \(f_{1}(g_{i})=1\) for every \(i\in[n]\). Then in every \(iH\) the vertices corresponding to \(2\) and \(3\) are twins (they have the same neighbors other than one another), and hence at least one of them must be included in every resolving set. Then \(\dim(G\otimes_{f_{1}}H)\geq n\) and therefore \(\operatorname{Dim}_{\mathrm{S}}(C_{n},C_{3})\geq n\). \(\Box\) We remark that the graph arising in the proof above from the constant function \(f_{1}(g_{i})=1\), \(i\in[n]\), is in fact a rooted product graph, and so the value of its metric dimension can also be deduced from results appearing in [10, 24]. For a given integer \(k\geq 3\), let \(F_{k}\) be the graph obtained as follows. We begin with a cycle \(v_{0}v_{1}\cdots v_{2k-1}v_{0}\) and do all computations on indices modulo \(2k\). Next we add \(k\) new vertices \(u_{0},u_{2},u_{4},\ldots,u_{2(k-1)}\) and the edges \(u_{2i}v_{2i-1},u_{2i}v_{2i}\) for every \(0\leq i\leq k-1\). **Lemma 3.2**.: _If \(k\geq 3\), then \(\dim(F_{k})=2\)._ Proof.: We claim that the set \(S=\{u_{0},u_{2(\lceil k/2\rceil-1)}\}\) is a metric basis for \(F_{k}\). Let \(x,y\) be any two distinct vertices of \(F_{k}\) with \(x,y\notin S\) (if \(x\in S\) or \(y\in S\), then they are clearly identified by \(S\)). If \(d(x,u_{0})=d(y,u_{0})\), then one of the following situations occurs. **Case 1**: \(x,y\in A_{1}=\{v_{0},v_{1},\ldots,v_{k-1}\}\cup\{u_{0},u_{2},\ldots,u_{2\lceil k/2\rceil}\}\). In this situation, it must happen that (w.l.o.g.) \(x=v_{2i}\) and \(y=u_{2i}\) for some \(i\in\{1,2,\ldots,\lceil k/2\rceil\}\). If \(i=\lceil k/2\rceil-1\), then \(y\in S\) and the conclusion is clear. In all other cases we notice that \(v_{2i}\) lies on a \(u_{2i},u_{2(\lceil k/2\rceil-1)}\)-geodesic, which means \(u_{2(\lceil k/2\rceil-1)}\) identifies \(x=v_{2i}\) and \(y=u_{2i}\). **Case 2**: \(x,y\in A_{2}=V(F_{k})\setminus A_{1}\). A similar conclusion to that of Case 1 can be deduced, taking into account that \(x=v_{2i-1}\) and \(y=u_{2i}\) for some \(i\in\{\lceil k/2\rceil+1,\ldots,k-1\}\). **Case 3**: \(x\in A_{1}\) and \(y\in A_{2}\). Hence, \(x=u_{i}\) or \(x=v_{i}\), and \(y=v_{2k-i-1}\) or \(y=u_{2k-i}\), for some \(i\in\{0,1,\ldots,k-1\}\), where if \(x=u_{i}\), then \(i\) is even. We consider now the distances between \(x,y\) and \(u_{2(\lceil k/2\rceil-1)}\). That is: \[\begin{array}{lclcl}d(v_{i},u_{2(\lceil k/2\rceil-1)})&=&2(\lceil k/2\rceil-1)-i&=&2\lceil k/2\rceil-i-2,\\ d(u_{i},u_{2(\lceil k/2\rceil-1)})&=&2(\lceil k/2\rceil-1)-i+1&=&2\lceil k/2\rceil-i-1,\\ d(v_{2k-i-1},u_{2(\lceil k/2\rceil-1)})&=&2k-i-2(\lceil k/2\rceil-1)&=&2\lfloor k/2\rfloor-i+2,\\ d(u_{2k-i},u_{2(\lceil k/2\rceil-1)})&=&2k-i-2(\lceil k/2\rceil-1)+1&=&2\lfloor k/2\rfloor-i+3.\end{array}\] Consequently, supposing that \(d(x,u_{2(\lceil k/2\rceil-1)})=d(y,u_{2(\lceil k/2\rceil-1)})\) leads to a contradiction in each of the possible combinations for \(x\) and \(y\) considered above. Therefore, \(x,y\) are identified by \(u_{2(\lceil k/2\rceil-1)}\), and so \(S\) is a resolving set. Since \(F_{k}\) is not a path, \(\dim(F_{k})\geq 2\), and so \(S\) is indeed a metric basis, as claimed. \(\Box\)
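As a sanity check on the construction of \(F_{k}\) and on Lemma 3.2, the following Python sketch (again with our own naming conventions, and assuming the vertex labels introduced above) builds \(F_{k}\) for small \(k\) and verifies by brute force that \(S=\{u_{0},u_{2(\lceil k/2\rceil-1)}\}\) is a resolving set.

```python
from collections import deque
from math import ceil

def f_graph(k):
    """F_k: cycle v_0 ... v_{2k-1} plus vertices u_{2i} joined to v_{2i-1}, v_{2i}."""
    adj = {('v', i): {('v', (i - 1) % (2 * k)), ('v', (i + 1) % (2 * k))}
           for i in range(2 * k)}
    for i in range(k):
        u = ('u', 2 * i)
        adj[u] = {('v', (2 * i - 1) % (2 * k)), ('v', 2 * i)}
        for w in adj[u]:
            adj[w].add(u)
    return adj

def distances(adj, s):
    """Breadth-first search distances from s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

for k in range(3, 9):
    adj = f_graph(k)
    S = [('u', 0), ('u', 2 * (ceil(k / 2) - 1))]
    d = [distances(adj, s) for s in S]
    reps = {(d[0][v], d[1][v]) for v in adj}   # metric S-representations
    assert len(reps) == len(adj), f"S is not resolving for k={k}"
print("S is a resolving set of F_k for k = 3..8, so dim(F_k) = 2")
```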
**Theorem 3.3**.: _If \(n\geq 3\), then \(\dim_{\mathrm{S}}(C_{n},C_{3})=2\)._ Proof.: Since every Sierpinski product \(G\otimes_{f}C_{3}\) contains a triangle and hence is not a path, we clearly have \(\dim_{\mathrm{S}}(C_{n},C_{3})\geq 2\), so we only need to prove that \(\dim_{\mathrm{S}}(C_{n},C_{3})\leq 2\). We use the notation from the first two paragraphs of the proof of Theorem 3.1. In particular, \(G=C_{n}\) and \(H=C_{3}\). For each \(n\geq 3\) we define a function \(f_{n}\colon V(G)\to V(H)\) as follows. To do so, let us represent a function \(f\colon V(G)\to V(H)\) as the vector \((f(g_{1}),f(g_{2}),\ldots,f(g_{n}))\). Then set \(f_{3}=(1,2,3)\) and \(f_{4}=(1,1,2,2)\). Assume then that \(n\geq 5\). **Case 1**: \(n\) is odd. Let \(B\) be the sequence \(3,1,2,3\). Let \(f_{n}\) be defined as follows. * If \(n=4(k+1)+1\), \(k\geq 0\), then \(f_{n}=(1,2,3,B,\ldots,B,3,1)\), where \(B\) appears \(k\) times. * If \(n=4(k+1)+3\), \(k\geq 0\), then \(f_{n}=(1,2,3,B,\ldots,B)\), where \(B\) appears \(k+1\) times. **Case 2**: \(n\) is even. Let \(C\) be the sequence \(2,2,3,3\). Let \(f_{n}\) be defined as follows. * If \(n=4(k+1)\), \(k\geq 0\), then \(f_{n}=(1,1,C,\ldots,C,2,2)\), where \(C\) appears \(k\) times. * If \(n=4(k+1)+2\), \(k\geq 0\), then \(f_{n}=(1,1,C,\ldots,C)\), where \(C\) appears \(k+1\) times. It is straightforward to verify that for every \(n\geq 3\), the Sierpinski product \(G\otimes_{f_{n}}H\) has the following structure. For each \(i\) we have \(x_{i}\neq y_{i}\), and hence \(w_{i}\) is the third vertex of \(V(iH)\); see Fig. 1, where \(G\otimes_{f_{5}}H\) is drawn in two different ways. It is now clear that \(G\otimes_{f_{n}}H\cong F_{n}\), and hence Lemma 3.2 completes the argument. \(\Box\) ## 4 Convexity property of Sierpinski products In this section, we establish a convexity property of the layers in the Sierpinski product of two graphs. Recall that a subgraph \(H\) of a graph \(G\) is _convex_ if whenever \(u,v\in V(H)\) and \(P\) is a shortest \(u,v\)-path in \(G\), then \(P\) lies completely in \(H\). **Theorem 4.1**.: _If \(G\) and \(H\) are connected graphs, \(f\colon V(G)\to V(H)\), and \(g\in V(G)\), then \(gH\) is a convex subgraph of \(G\otimes_{f}H\)._ Proof.: Throughout the proof, let \(X=G\otimes_{f}H\). Suppose on the contrary that there exist vertices \(u,v\in V(gH)\) such that \(d_{X}(u,v)<d_{gH}(u,v)\). Note that this cannot happen when \(X\) is a tree, hence in the rest of the proof we may assume that \(X\) contains cycles. Suppose now that \(u\), \(v\), and \(gH\) are selected such that \(d_{X}(u,v)\) is as small as possible among all such counterexamples. Let \(u=(g,h)\), \(v=(g,h^{\prime})\), and let \(P\) be a shortest \(u,v\)-path in \(X\). Set further \(g_{1}=g\). **Claim**. The shape of the path \(P\) is as follows. Let \(g_{1},\ldots,g_{k}\), \(k\geq 2\), be the vertices of \(G\) ordered such that \(P\) passes through \(g_{1}H,\ldots,g_{k}H\) in that order. Then \(P\) starts with the connecting edge \((g_{1},f(g_{2}))(g_{2},f(g_{1}))\), proceeds with a geodesic \(P_{2}\) in \(g_{2}H\) between \((g_{2},f(g_{1}))\) and \((g_{2},f(g_{3}))\), then continues with the connecting edge \((g_{2},f(g_{3}))(g_{3},f(g_{2}))\), and so on. Finally \(P\) arrives at \(g_{k}H\), proceeds along a geodesic in \(g_{k}H\) between \((g_{k},f(g_{k-1}))\) and \((g_{k},f(g_{1}))\), and ends with the connecting edge \((g_{k},f(g_{1}))(g_{1},f(g_{k}))\), where \(f(g_{k})=h^{\prime}\). See Fig. 2. Note first that \(k\geq 3\) because \(k=2\) would imply that there are two connecting edges between \(g_{1}H\) and \(g_{2}H\). We next show that the vertices \(g_{1},\ldots,g_{k}\) are pairwise different. Suppose on the contrary that there exist \(i\) and \(j\) such that \(2\leq i<j<k\) and \(g_{j+1}=g_{i}\).
Let \(P^{\prime}\) be the subpath of \(P\) between the vertices \(u^{\prime}=(g_{i},f(g_{i+1}))\) and \(v^{\prime}=(g_{j+1},f(g_{j}))=(g_{i},f(g_{j}))\). As \(P\) is a geodesic in \(X\), Bellman's principle of optimality implies that \(P^{\prime}\) is also a geodesic. In addition to its connecting edges, \(P^{\prime}\) contains geodesics \(P_{i},P_{i+1},\ldots,P_{j}\), which, projected onto \(H\), are respectively geodesics * between \(f(g_{i-1})\) and \(f(g_{i+1})\), * between \(f(g_{i})\) and \(f(g_{i+2})\), \(\vdots\) * between \(f(g_{j-2})\) and \(f(g_{j})\), and * between \(f(g_{j-1})\) and \(f(g_{i})\). Suppose first that \(j-i\) is odd. Then we have \[\begin{split}|P^{\prime}|&>d_{H}(f(g_{i-1}),f(g_{i+1}))+d_{H}(f(g_{i+1}),f(g_{i+3}))+\cdots+d_{H}(f(g_{j-2}),f(g_{j}))\\ &\geq d_{H}(f(g_{i-1}),f(g_{j}))\\ &=d_{g_{i}H}(u^{\prime},v^{\prime})\,.\end{split}\] This contradicts the selection of \(u\) and \(v\) as a minimal counterexample. Suppose second that \(j-i\) is even. Then we have \[\begin{split}|P^{\prime}|&>[d_{H}(f(g_{i-1}),f(g_{i+1}))+d_{H}(f(g_{i+1}),f(g_{i+3}))+\cdots+d_{H}(f(g_{j-1}),f(g_{i}))]+\\ &\quad\ [d_{H}(f(g_{i}),f(g_{i+2}))+d_{H}(f(g_{i+2}),f(g_{i+4}))+\cdots+d_{H}(f(g_{j-2}),f(g_{j}))]\\ &\geq d_{H}(f(g_{i-1}),f(g_{i}))+d_{H}(f(g_{i}),f(g_{j}))\\ &\geq d_{H}(f(g_{i-1}),f(g_{j}))\\ &=d_{g_{i}H}(u^{\prime},v^{\prime})\,.\end{split}\] Hence we get the same contradiction as in the previous case. Figure 2: The shape of \(P\) We have thus proved that the vertices \(g_{1},\ldots,g_{k}\) are pairwise different. To complete the proof of the claim, we need to verify that \(P\) starts and ends with a connecting edge. Suppose on the contrary that \(P\) starts with a subpath in \(g_{1}H\) from \(u\) to \(w\) and then proceeds along the connecting edge between \(g_{1}H\) and \(g_{2}H\). By the minimality assumption on \(u\) and \(v\) we have \(d_{X}(u,w)=d_{g_{1}H}(u,w)\). We now have: \[\begin{split}d_{X}(u,w)+d_{X}(w,v)&=d_{X}(u,v)\\ &<d_{g_{1}H}(u,v)\\ &\leq d_{g_{1}H}(u,w)+d_{g_{1}H}(w,v)\\ &=d_{X}(u,w)+d_{g_{1}H}(w,v)\end{split}\] which yields \(d_{X}(w,v)<d_{g_{1}H}(w,v)\). This contradiction proves that \(P\) indeed starts with a connecting edge. A parallel argument yields that \(P\) also ends with a connecting edge. This proves the claim. To conclude the proof, let \(P_{2},P_{3},\ldots,P_{k}\) be the sections of \(P\) restricted to \(g_{2}H,g_{3}H,\ldots,g_{k}H\), respectively. Then we proceed similarly as we did for the subpath \(P^{\prime}\) above. More precisely, if \(k-1\) is odd, then \[\begin{split}|P|&>d_{H}(f(g_{2}),f(g_{4}))+d_{H}(f(g_{4}),f(g_{6}))+\cdots+d_{H}(f(g_{k-2}),f(g_{k}))\\ &\geq d_{H}(f(g_{2}),f(g_{k}))\\ &=d_{g_{1}H}(u,v)\,.\end{split}\] And if \(k-1\) is even, then \[\begin{split}|P|&>[d_{H}(f(g_{1}),f(g_{3}))+d_{H}(f(g_{3}),f(g_{5}))+\cdots+d_{H}(f(g_{k-2}),f(g_{k}))]+\\ &\quad\ [d_{H}(f(g_{2}),f(g_{4}))+d_{H}(f(g_{4}),f(g_{6}))+\cdots+d_{H}(f(g_{k-1}),f(g_{1}))]\\ &\geq d_{H}(f(g_{1}),f(g_{k}))+d_{H}(f(g_{2}),f(g_{1}))\\ &\geq d_{H}(f(g_{2}),f(g_{k}))\\ &=d_{g_{1}H}(u,v)\,.\end{split}\] This final contradiction proves the theorem. \(\Box\)
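Theorem 4.1 can likewise be probed computationally on small instances. The sketch below repeats the product builder from the Section 2 sketch so that it stays self-contained, and checks convexity of each layer \(gH\) directly, using the standard fact that a vertex \(w\) lies on some shortest \(u,v\)-path exactly when \(d(u,w)+d(w,v)=d(u,v); the chosen factors and the function \(f\) are arbitrary small examples of ours.

```python
from collections import deque
from itertools import combinations

def cycle(n):
    """Adjacency dict of the cycle C_n on vertices 0, ..., n-1."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def sierpinski_product(G, H, f):
    """Sierpinski product as an adjacency dict on vertex pairs (g, h)."""
    adj = {(g, h): set() for g in G for h in H}
    for g in G:
        for h in H:
            adj[(g, h)].update((g, h2) for h2 in H[h])   # copy gH of H
    for g in G:
        for g2 in G[g]:                                   # connecting edges
            adj[(g, f[g2])].add((g2, f[g]))
            adj[(g2, f[g])].add((g, f[g2]))
    return adj

def all_distances(adj):
    """All-pairs distances via one BFS per vertex."""
    dist = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    q.append(y)
        dist[s] = d
    return dist

G, H = cycle(5), cycle(4)
f = {g: g % 4 for g in G}           # an arbitrary function V(G) -> V(H)
X = sierpinski_product(G, H, f)
dist = all_distances(X)

for g in G:                          # check that each layer gH is convex
    layer = {(g, h) for h in H}
    for u, v in combinations(layer, 2):
        for w in X:
            if w not in layer:       # no outside vertex may lie on a u,v-geodesic
                assert dist[u][w] + dist[w][v] > dist[u][v]
print("every layer gH is convex in this example")
```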
## 5 Concluding remarks In Theorem 2.6 we determined \(\dim_{\mathrm{S}}(P_{n},T)\), where \(T\) is an arbitrary tree different from a path. It remains to determine \(\dim_{\mathrm{S}}(T^{\prime},T)\), where \(T^{\prime}\) and \(T\) are arbitrary trees. In Theorems 3.1 and 3.3 we determined \(\mathrm{Dim}_{\mathrm{S}}(C_{n},C_{3})\) and \(\dim_{\mathrm{S}}(C_{n},C_{3})\) for all \(n\geq 3\). It remains to determine \(\mathrm{Dim}_{\mathrm{S}}(C_{n},C_{m})\) and \(\dim_{\mathrm{S}}(C_{n},C_{m})\) for all \(n\geq 3\) and \(m\geq 4\). It would also be interesting to determine \(\mathrm{Dim}_{\mathrm{S}}(G,H)\) and \(\dim_{\mathrm{S}}(G,H)\) for other classes of graphs \(G\) and \(H\). ## Acknowledgments Research of Michael Henning was supported in part by the University of Johannesburg, and was carried out during his sabbatical visit at the University of Ljubljana. Sandi Klavzar was supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, and N1-0285. Ismael G. Yero has been partially supported by the Spanish Ministry of Science and Innovation through the grant PID2019-105824GB-I00. Moreover, this investigation was developed while this author (Ismael G. Yero) was visiting the University of Ljubljana, Slovenia, supported by "Ministerio de Educacion, Cultura y Deporte", Spain, under the "Jose Castillejo" program for young researchers (reference number: CAS21/00100). ## Declaration of interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Data availability Our manuscript has no associated data.